Sen. Richard Blumenthal (D-Conn.) slammed Facebook and Twitter on Monday over their handling of anti-vaccination content specifically targeting pregnant women after the platforms responded to the senator’s letter pushing them to clamp down on the misinformation campaigns.
“Facebook and Twitter’s playbook is out-of-date, worn-out, and woefully inadequate toward addressing the horrifying abuse and disinformation that continues to spread like wildfire on their platforms,” Blumenthal said in a statement.
The senator wrote to the social media giants last month calling for them to follow through on commitments to remove coronavirus vaccine misinformation in light of reported incidents of anti-vaccine campaigns targeting and harassing pregnant women with false information.
Blumenthal said the platforms’ “vague content moderation policies, ineffective fact checking, inconsistent enforcement, and meaningless labels are cold comfort to the women continuously assailed by vile anti-vaccine hate and life-threatening falsehoods.”
Blumenthal’s letter cited a Daily Beast report about online anti-vaccine misinformation and the related harassment of pregnant women. In one case, users told a woman who shared she got the coronavirus vaccine when she was 14 weeks pregnant and later miscarried that she “got what she deserved,” according to the report.
There is no evidence that the COVID-19 vaccine causes miscarriage, and the nation’s top infectious diseases expert, Anthony Fauci, said last month that about 20,000 pregnant women had been vaccinated against the virus with “no red flags.”
In response to the senator, the platforms touted their policies and efforts to combat coronavirus vaccine misinformation.
Twitter’s head of U.S. public policy, Lauren Culbertson, noted the platform removed more than 8,483 tweets and challenged at least 11.4 million accounts “that were targeting discussions around COVID-19 with potentially manipulative behaviors,” according to a copy of the company’s response to Blumenthal.
Regarding abusive behavior, Culbertson said users can report harassment and the platform aims to “preemptively take action on violative content before it is seen.”
But Culbertson’s response did not specifically address Blumenthal’s question as to why accounts identified in reports as “preying on women who have been vaccinated” were not removed.
Facebook similarly highlighted its policies around removing coronavirus and coronavirus vaccine misinformation.
As for why accounts were not removed, Facebook’s vice president of U.S. public policy, Kevin Martin, told Blumenthal enforcement of its standards “relies on information available to us.”
“In some cases, this means that we may not detect content and behavior that violates these standards, and in others, enforcement may be limited to circumstances where we have been provided with additional information and context,” Martin wrote, according to a copy of the response.
Blumenthal said he sees “little in their responses that demonstrates these profitable and powerful companies are going to stop treating victims of this abuse like an afterthought.”
Facebook on Monday announced expanded efforts to combat coronavirus misinformation, including plans to launch labels for all posts that discuss COVID-19 vaccines.
In its Monday post, Facebook said it has removed 2 million pieces of content from Facebook and Instagram since expanding its policy in February to remove all debunked claims about the coronavirus and vaccines.
Nonetheless, both platforms have faced intense scrutiny from officials over their handling of false claims, especially as the U.S. ramps up the vaccine rollout amid President Biden’s pledge that all adults over 18 will be eligible for the vaccine by May 1.
Facebook CEO Mark Zuckerberg, Twitter CEO Jack Dorsey and Google CEO Sundar Pichai will likely face questions from lawmakers on their handling of such false claims next week, when they testify at a House Energy and Commerce Committee hearing on the spread of online misinformation.