
The scale of abuse against women on Twitter is shocking


Despite years of public pressure, Twitter refuses to take meaningful action to protect women from online abuse. This week Amnesty International released the largest-ever study of online abuse against women on Twitter. With the help of thousands of volunteers, we were able to review hundreds of thousands of tweets sent to women journalists and politicians in the U.S. and U.K. — an analysis on an unprecedented scale.

It’s well-known that Twitter has a big problem with abusive content, but the company’s reluctance to publish data means nobody knows quite how big. Anecdotally, Twitter is notorious as a place where the vilest strains of racism, homophobia and misogyny are tolerated.

During Amnesty’s two years of research into this issue, countless women, especially those in the public eye, have told us that threats of rape and death are part of the package of using Twitter. Twitter has a decent policy on hateful conduct, but a quick scroll through the feed of any prominent woman politician shows there are still massive gaps in how that policy is enforced.

We wanted to collect the data to prove to Twitter just how bad this problem is. So along with artificial intelligence software company Element AI, we devised a unique crowdsourcing project that would help us understand the scale and nature of the abuse on Twitter.

We enlisted more than 6,000 digital volunteers to take part in a “Troll Patrol” and analyze more than 288,000 tweets. After a crash course in our definitions, volunteers labeled tweets as “abusive,” “problematic” or neither, and identified the type of abuse: whether, for example, it was racist, sexist or homophobic.
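
For readers curious what one unit of that crowdsourced labeling might look like in practice, here is a minimal sketch of a single volunteer judgment as a data record. The field names and category lists are illustrative assumptions, not the actual schema Amnesty International or Element AI used.

```python
# Hypothetical sketch of a single crowdsourced judgment record; the field
# names and category lists are assumptions for illustration, not the schema
# Amnesty International or Element AI actually used.
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Label(str, Enum):
    ABUSIVE = "abusive"
    PROBLEMATIC = "problematic"
    NEITHER = "neither"

class AbuseType(str, Enum):
    RACIST = "racist"
    SEXIST = "sexist"
    HOMOPHOBIC = "homophobic"
    OTHER = "other"

@dataclass
class VolunteerJudgment:
    tweet_id: str
    volunteer_id: str
    label: Label
    abuse_types: List[AbuseType] = field(default_factory=list)

# Example: one volunteer labels one tweet as sexist abuse.
judgment = VolunteerJudgment(
    tweet_id="1234567890",
    volunteer_id="vol_0042",
    label=Label.ABUSIVE,
    abuse_types=[AbuseType.SEXIST],
)
print(judgment)
```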

We found that 12 percent of the tweets mentioning the 778 women in our sample were either “abusive” or “problematic.” Extrapolating from our findings and using cutting-edge data science, Element AI calculated that this amounts to 1.1 million tweets over the course of the year. This is horrifying, but sadly not much of a surprise. In fact, many of our findings corroborate what women have been telling us for years about their experiences on Twitter.
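
To make that yearly figure more concrete, the back-of-the-envelope arithmetic below converts it into a rate, assuming — purely for illustration — that abusive and problematic tweets were spread evenly across the year.

```python
# Back-of-the-envelope conversion of the study's yearly estimate into a rate.
# Assumes, purely for illustration, that the 1.1 million abusive or
# problematic tweets were spread evenly across the year.
tweets_per_year = 1_100_000
seconds_per_year = 365 * 24 * 60 * 60  # 31,536,000

seconds_between_tweets = seconds_per_year / tweets_per_year
print(f"Roughly one abusive or problematic tweet every {seconds_between_tweets:.0f} seconds")
# -> roughly one every ~29 seconds under this uniform-spread assumption
```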

For example, one of the most striking findings was that black women in the sample were 84 percent more likely than white women to be mentioned in abusive tweets.

For our Toxic Twitter report last year, we interviewed many black politicians, activists and journalists who described the horrendous and relentless racist abuse they receive on the platform, including references to lynching, being hanged and being called animal names. Almost every woman who said she experienced intersecting forms of discrimination offline stressed that this was mirrored in her experience on Twitter.

One of the things we wanted to emphasize from our findings is that human judgment is incredibly important in content moderation. Large social media platforms are increasingly turning to automated systems to help manage abuse — in a letter responding to our study, Twitter called machine learning “one of the areas of greatest potential for tackling abusive users.” Governments are pushing this as a solution too — for example, the European Commission has proposed a regulation on the “dissemination of terrorist content online” that encourages the use of automated tools.

To explore these developments further, Element AI helped us develop a state-of-the-art machine learning model that attempts to automate the detection of abuse. It’s not perfect, but it is an improvement on any existing model, and the errors it makes are illuminating when it comes to understanding the limits of automation in content moderation.

For example, while our volunteers recorded that the tweet “Be a good girl… go wash dishes” is clearly problematic and sexist, the model predicts it has only a 10 percent likelihood of being problematic or abusive. Likewise, if you type the words “Go home” into the model, it reports a 0 percent chance of abusive or problematic content, because it is not attuned to the way these words can be used in a racist context. (For comparison, “Die bitch” returns 92 percent.)
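
As a deliberately simplified illustration of why this happens, the sketch below trains a toy bag-of-words classifier on a handful of made-up examples. It is not Element AI’s model, and the scores it produces mean nothing beyond the toy training set; the point is that a model matching word statistics cannot see the context that makes a phrase like “Go home” abusive.

```python
# Toy illustration only (NOT Element AI's model): a bag-of-words classifier
# trained on a handful of invented examples. Because it only matches word
# statistics from its training data, it scores the overt slur highly but
# misses phrases whose abusiveness depends on context.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, hand-made training set, purely for illustration.
texts = [
    "die bitch",                          # abusive
    "you stupid bitch, I hope you die",   # abusive
    "nobody wants you here, just die",    # abusive
    "go team, great win tonight",         # not abusive
    "finally home, what a good day",      # not abusive
    "she did a great job on the story",   # not abusive
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = abusive/problematic, 0 = neither

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

for tweet in ["Be a good girl… go wash dishes", "Go home", "Die bitch"]:
    prob = model.predict_proba([tweet])[0][1]
    print(f"{tweet!r}: {prob:.0%} predicted chance of being abusive/problematic")
# The exact numbers depend entirely on the toy training set; the pattern is
# that "Die bitch" scores much higher than the context-dependent phrases.
```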

The model also flags as abusive some content that is not abusive, which raises censorship concerns. YouTube’s use of a machine-learning algorithm to detect “extremist content” has reportedly resulted in the accidental removal of hundreds of thousands of videos that were actually documenting human rights abuses in Syria.

Automation, therefore, should be part of a larger content moderation system characterized by human judgment, greater transparency, rights of appeal and other safeguards.

To be clear: it is not our job, as a human rights organization, to analyze abusive tweets. But we have been asking Twitter for years to publish information about abuse on its platform and it has repeatedly refused to do so.

Without the data about the nature and scale of abuse, it is impossible to design effective solutions. We wanted to show that it is possible, with a fraction of the resources and information that Twitter has at its disposal, to collect meaningful data about abuse on Twitter. Understanding this problem is the first step to designing effective solutions.

Tanya O’Carroll is the director of Amnesty Tech.