Big Tech must step up now to fight misinformation in the midterms
We’re just four months away from the 2022 midterm elections, and more than 100 candidates nationwide have embraced Trump’s “Big Lie.” These candidates not only claim the 2020 race was rigged, but also cast doubt on the legitimacy of the upcoming November elections.
In 2020, election fraud allegations spread widely on social media. President Trump regularly tweeted election lies, and groups used Facebook to coordinate the Jan. 6 insurrection. So far, however, reports indicate social media companies may be unprepared for the coming onslaught of election misinformation.
As Facebook pivots to focus on the metaverse, for example, the company has reduced the number of employees working on election integrity from 300 to 60. Experts fear this loss of resources and attention, combined with the scale of the midterms, could exacerbate the problem. Indeed, internal research shows Facebook struggles to catch misinformation in local information environments, such as the state and local races that fill midterm ballots.
Instead of pulling back on election integrity measures, platforms should strengthen their safeguards. As researchers who study the intersection of social media, politics and democracy, we're watching four questions heading into November.
How will social media respond to threats to democratic legitimacy?
Right now, a faction of the Republican party has decided that election outcomes — at least when they lose — aren’t legitimate. As a result, platforms must not only consider how to moderate election misinformation but also how to handle candidates who question the legitimacy of the process itself.
Platforms have numerous ways to moderate misinformation. Research shows each of these tools works, and fails, to varying degrees. For example, several studies indicate fact checks can reduce misperceptions, though the effects can decay over time. Another study found that attaching warning labels to, or blocking engagement with, Trump's 2020 election misinformation tweets was not associated with a reduction in their spread, either on Twitter or on other platforms. And while recent work shows accuracy nudges decrease belief in and sharing of misinformation, the approach has yet to be tested at scale on platforms.
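To make these interventions concrete, here is a minimal, purely illustrative sketch of how labeling and engagement-blocking might be applied in a moderation pipeline. Everything in it is hypothetical: the `Post` fields, the `fact_check_score` stub and the 0.3 threshold stand in for whatever fact-checking signals and rules a real platform would use.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    labels: list[str] = field(default_factory=list)
    sharing_enabled: bool = True  # an engagement block toggles this off

# Hypothetical stand-in for a fact-checking signal (0 = likely false, 1 = likely true).
def fact_check_score(post: Post) -> float:
    flagged = ["the election was rigged", "ballots were destroyed"]
    return 0.1 if any(phrase in post.text.lower() for phrase in flagged) else 0.9

def moderate(post: Post) -> Post:
    if fact_check_score(post) < 0.3:  # hypothetical threshold
        # Warning label: the research above suggests labels can reduce belief,
        # though the effect may fade over time.
        post.labels.append("Disputed: see independent fact check")
        # Engagement block: disable resharing so the post cannot spread further.
        post.sharing_enabled = False
    return post

print(moderate(Post("They claim the election was rigged!")))
```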
Beyond content, platforms also must contend with users spreading election falsehoods, many of whom are political candidates. With the exception of Trump, companies have been largely reluctant to ban candidates who post misinformation. Indeed, high-profile users, such as celebrities and politicians, are essentially immune from Facebook’s content moderation rules.
There’s no silver bullet to stop misinformation on social media. Instead, platforms must work together to employ a variety of tools to slow its spread, fairly and equitably punish users who repeatedly violate the rules and maintain trust by supporting open democratic discourse. The European Union’s new anti-disinformation code, which several platforms voluntarily signed onto in June, is an encouraging start.
How will companies stop extremists from organizing on their platforms?
Social media doesn’t have a monopoly on the dissemination of anti-democratic content. In fact, Harvard’s Berkman Klein Center found 2020 election disinformation surrounding mail-in voting was an “elite-driven, mass-media led process.” However, social sites remain a primary place where groups — both pro-social and anti-democratic — can coordinate and mobilize. Classifying and moderating impermissible content is hard; curtailing groups’ ability to mobilize is even harder, as content in small, closed groups is less visible to moderators yet can cause outsized harm.
Thus far, there have been some notable failures. Before Jan. 6, Facebook banned the primary “Stop the Steal” group for language that spread hate and incited violence. However, it did not stop similar groups, which experienced “meteoric growth.” Overall, a 2021 analysis found 267 pages and groups, many tied to QAnon and militia organizations, “with a combined following of 32 million, spreading violence-glorifying content in the heat of the 2020 election.”
These groups on Facebook — and other platforms — were instrumental in coordinating the Jan. 6 attack. With so many candidates still claiming elections are rigged, we could see more violence after the upcoming midterms. Social platforms should do everything they can to disrupt these groups and make it harder for extremists to organize violence.
What about video?
For years, social media platforms were largely text- and image-based. Now, video is dominant. TikTok, with more than 1 billion monthly active users, is one of the most popular social networks. YouTube, the second most visited website after Google, remains under-researched. And even Facebook — once a place designed for connecting with family and friends — is shifting its focus to short-form video.
Platforms have struggled to create artificial intelligence systems to moderate text-based content at scale. How will they deal with multi-modal misinformation — shared as images, video and audio? Reports suggest misinformation is rampant on TikTok, specifically around COVID-19 vaccines and the Russian invasion of Ukraine. YouTube has done a better job of tweaking its algorithm to exclude potentially harmful videos. But as the race heats up, this is a critical area on which to focus.
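As a rough illustration of why multi-modal content is harder, here is a schematic sketch of a triage step that routes items to per-modality scorers and sends low-confidence cases to human review. The scorers are stubs, and the routing logic is an assumption about how such pipelines are commonly structured, not a description of any platform's actual system.

```python
from typing import Callable

# Hypothetical per-modality scorers. In practice each would be a large ML model
# (a text classifier; OCR plus an image model; speech-to-text plus a text model),
# and confidence tends to drop as the modality gets richer.
def score_text(item: bytes) -> float: return 0.9    # confidence in the verdict
def score_image(item: bytes) -> float: return 0.7
def score_video(item: bytes) -> float: return 0.4

SCORERS: dict[str, Callable[[bytes], float]] = {
    "text": score_text,
    "image": score_image,
    "video": score_video,
}

def triage(modality: str, item: bytes, min_confidence: float = 0.6) -> str:
    # Low-confidence verdicts go to human review instead of being auto-actioned,
    # which is one reason video-heavy platforms need far more reviewer capacity.
    confidence = SCORERS[modality](item)
    return "auto_moderate" if confidence >= min_confidence else "human_review"

print(triage("text", b"..."))   # auto_moderate
print(triage("video", b"..."))  # human_review
```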
Will platforms share their data?
Although we’ve come a long way in our understanding of these networks, it’s hard to truly know what’s happening without access to more social media data. Access currently varies widely by platform.
Facebook’s CrowdTangle tool helps us examine content engagement, but researchers worry it could be decommissioned at any time. Twitter has been an industry leader in data access, but Elon Musk’s pending purchase puts that access in doubt. Meanwhile, TikTok and YouTube share very limited data and are largely closed off to journalists and researchers.
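For a sense of what researcher data access looks like in practice, here is a minimal sketch against Twitter's v2 recent-search endpoint, the kind of access the paragraph above credits Twitter with leading on. You would need your own bearer token from a developer account, and the query string is purely illustrative.

```python
import os
import requests

# Minimal sketch: pull recent tweets matching an illustrative election query.
# Requires a bearer token from a Twitter developer (or academic research) account.
bearer_token = os.environ["TWITTER_BEARER_TOKEN"]

resp = requests.get(
    "https://api.twitter.com/2/tweets/search/recent",
    headers={"Authorization": f"Bearer {bearer_token}"},
    params={
        "query": '"election fraud" -is:retweet lang:en',  # illustrative query
        "max_results": 100,  # recent search returns at most 100 tweets per request
    },
    timeout=30,
)
resp.raise_for_status()

for tweet in resp.json().get("data", []):
    print(tweet["id"], tweet["text"][:80])
```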
There are currently several proposals in Congress that would secure researcher access to platform data, and the EU just passed landmark rules regulating Big Tech. Although it’s too late for these measures to make data accessible for this election cycle, they are promising developments for the future.
To be sure, social media is not solely to blame for the current state of our democracy. Larger societal forces, including a fragmented media environment, geographic sorting by partisanship and partisan gerrymandering, have helped drive polarization over the last several decades. But social media can often act as an accelerant, exacerbating our institutional shortcomings.
Looking ahead to the midterms, we hope social media executives are worried about the threats facing our democracy — and that they have or will develop comprehensive plans to help safeguard the electoral process.
Zeve Sanderson is the executive director of NYU’s Center for Social Media and Politics (CSMaP). Joshua A. Tucker is one of the co-founders and co-directors of NYU’s Center for Social Media and Politics (CSMaP). He is a professor of Politics, an affiliated professor of Russian and Slavic Studies, and an affiliated professor of Data Science at New York University, as well as the director of NYU’s Jordan Center for Advanced Study of Russia. He is the co-editor of the edited volume “Social Media and Democracy: The State of the Field,” and the co-chair of the independent academic research team on the 2020 US Facebook and Instagram Election Research Study.