The views expressed by contributors are their own and not the view of The Hill

A call to arms for tech companies: Get extremists off your platforms


Fort Hood shooter Nidal Hasan. Boston Marathon bombers Dzhokhar and Tamerlan Tsarnaev. Underwear bomber Umar Farouk Abdulmutallab. Garland, Texas, gunman Elton Simpson. All of these terrorists were deeply influenced by the American-born al-Qaeda operative and propagandist Anwar al-Awlaki, who continued to radicalize and encourage indiscriminate killing long after his 2011 death, thanks to the easy accessibility of his sermons and lectures on YouTube and other Internet platforms.

Last November, Google took the exemplary step of removing most Awlaki content from its video-sharing platform YouTube. That action was commendable, but much more remains to be done: Awlaki was just one voice in a sea of similar radical propagandists. The tech giant should set an example for the rest of the industry by blocking not just Awlaki but other noxious extremists from all of its platforms, and by encouraging other tech companies to do the same.

The new Guide to Online Propagandists from my organization, the Counter Extremism Project, highlights more than a dozen radical propagandists whose hateful rhetoric remains freely accessible across social media platforms.

Some of this material now carries YouTube’s offensive-content warning label, but keeping it online, one click away from vulnerable viewers, will not deter anyone seeking it out. Google’s Awlaki decision set a clear precedent for removing content tied to entities and individuals sanctioned by the U.S., EU, and U.N., as well as individuals with demonstrable links to violence. Following that precedent, all tech firms should deny these virulent propagandists a platform; any consistent policy against dangerous, hate-filled content demands their removal.

The tech companies have already given themselves the necessary tools to do so. In 2016, Facebook, Twitter, Microsoft, and YouTube created the Shared Industry Hash Database to collect digital “fingerprints” of extremist content that had been removed from platforms. In June 2017, these same tech companies launched the Global Internet Forum to Counter Terrorism in order to share best practices and lessons learned about countering the threat of terrorist content online.

Tech companies must eliminate any confusion over what is and is not acceptable content. Speaking with one voice, they must reject content that promotes groups or individuals on international sanctions lists or that directs hateful or violent rhetoric at ethnic or religious groups. Social media companies whose terms of service do not already ban extremist content should make the necessary changes immediately.

To shape these policies, tech companies should look to the State Department’s Foreign Terrorist Organizations list, the Treasury Department’s Specially Designated Nationals and Blocked Persons list, and the U.N. Security Council sanctions list. Tech firms should refuse to host content produced by groups and individuals on U.K., EU, U.S., and U.N. sanctions lists, or by individuals with clear links to violence, and should use the Shared Industry Hash Database and the Global Internet Forum to Counter Terrorism to target and consistently remove these voices across all digital platforms.

Google made a landmark decision in removing Awlaki from YouTube, but many more like him remain online. The tech companies must now act categorically and universally to ensure that those with similar messages of hate and murder can no longer abuse their platforms.

David Ibsen is the executive director of the Counter Extremism Project.