Facebook on Thursday outlined some of the ways it is trying to stamp out terrorist content on its platform, following calls from U.K. Prime Minister Theresa May to crack down on online terrorist havens.
In a blog post, Facebook’s director of global policy management, Monika Bickert, and counterterrorism policy manager Brian Fishman gave a rare behind-the-scenes look at how the social media giant searches for and removes terrorist content.
“Our stance is simple: There’s no place on Facebook for terrorism,” Bickert and Fishman wrote. “We remove terrorists and posts that support terrorism whenever we become aware of them. When we receive reports of potential terrorism posts, we review those reports urgently and with scrutiny.”
According to the two company officials, Facebook uses artificial intelligence to find images and text on the site that advocate terrorism. The company also employs technology to crack down on groups and pages that promote terrorism.
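The post does not spell out the mechanics, but one widely used approach to catching re-uploads of previously removed images is hash matching: fingerprint each upload and compare it against a database of fingerprints of known terrorist content. Below is a minimal sketch of that idea, assuming a hypothetical hash database (`KNOWN_TERROR_HASHES`) and using an exact cryptographic hash for simplicity; deployed systems typically rely on perceptual hashes instead, which also catch resized or re-encoded copies.

```python
import hashlib

# Hypothetical database of fingerprints of images already removed
# as terrorist content (placeholder values for illustration).
KNOWN_TERROR_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def image_fingerprint(image_bytes: bytes) -> str:
    """Return a SHA-256 digest of the raw image bytes.

    Note: a cryptographic hash only flags byte-identical copies.
    Production systems use perceptual hashes that remain stable
    under cropping, scaling, and compression.
    """
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_terror_image(image_bytes: bytes) -> bool:
    """Flag an upload if its fingerprint matches a previously removed image."""
    return image_fingerprint(image_bytes) in KNOWN_TERROR_HASHES
```

The exact-hash version shown here misses any copy that has been re-saved or re-compressed, which is precisely why real moderation pipelines favor perceptual fingerprints over cryptographic ones.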
Facebook also uses these tools on its other platforms, such as Instagram and WhatsApp, though it stressed that it does not have the ability to read encrypted messages.
The company has more than 150 people dedicated to countering the spread of terrorist content.
After a terrorist attack at London Bridge earlier this month, May accused technology companies of fostering a breeding ground for terrorists and called for tough internet regulation.
“We cannot allow this ideology the safe space it needs to breed,” she said. “Yet that is precisely what the internet and big companies that provide internet-based services provide. We need to do everything we can at home to reduce the risks of extremism online.”
Facebook’s post appears to be a direct response to that accusation. It’s also the first in a series of posts the company plans to publish addressing the role it plays in society and controversies over how it operates.