Meta cracks down on violence, misinformation amid Israel-Hamas war
Meta, the parent company of Facebook and Instagram, said Friday it is stepping up efforts to enforce its policies on violence and misinformation amid the ongoing conflict between Israel and the Palestinian militant group Hamas.
The tech giant said it has established a “special operations center” with experts, including fluent Hebrew and Arabic speakers, to monitor the situation and remove content that violates Meta policies more quickly.
In the first three days of the conflict, Meta said it removed or flagged more than 795,000 pieces of content in Hebrew and Arabic for violating its policies on dangerous organizations and individuals, violent and graphic content, and hate speech, among others.
The company emphasized that Hamas is banned from Facebook and Instagram under its dangerous organizations and individuals policy.
“We want to reiterate that our policies are designed to give everyone a voice while keeping people safe on our apps,” Meta said. “We apply these policies regardless of who is posting or their personal beliefs, and it is never our intention to suppress a particular community or point of view.”
Amid a deluge of misinformation related to the conflict on social media, Meta also said it is working with fact-checkers at AFP, Reuters and Fatabyyano to review claims and move content rated false lower in users’ feeds.
The release comes just days after Meta CEO Mark Zuckerberg received a letter from the European Union, urging his company to be “very vigilant” about removing “illegal content” and disinformation.
Thierry Breton, the EU’s commissioner for the internal market, emphasized Meta’s duty to take “timely, diligent and objective action” after being notified of illegal content on its platforms under the bloc’s new online regulations, known as the Digital Services Act.
X owner Elon Musk received a more sternly worded warning from Breton on Tuesday about the spread of “illegal content” and disinformation on the platform formerly known as Twitter.
The EU announced Thursday it would investigate X over its handling of “terrorist and violent content and hate speech” related to the conflict in Israel and Gaza.
Since the outbreak of the conflict, false claims have flooded X, including posts claiming old and unrelated photos and videos — and even a video game clip — are from the current Israel-Hamas war.
Experts have warned that while viral misinformation often spreads during conflicts, Musk’s changes to the platform since he bought the social media company last year have exacerbated the issue.
Musk has rolled back content moderation measures, reinstated banned accounts and eliminated the platform’s legacy verification system in favor of a paid subscription service since acquiring Twitter for $44 billion last October.
X CEO Linda Yaccarino responded to Breton’s letter Thursday before the EU announced its investigation, noting that the platform has removed hundreds of accounts linked to Hamas and removed or labeled tens of thousands of pieces of content.
She also said the platform “redistributed resources and refocused internal teams” and is “proportionately and effectively assessing and addressing identified fake and manipulated content during this constantly evolving and shifting crisis.”
“There is no place on X for terrorist organizations or violent extremist groups and we continue to remove such accounts in real time, including proactive efforts,” she added.