In 2016, RAND Corporation researchers described Russia’s attempts to influence the American public as a “firehose of falsehood,” characterized by a high volume of content distributed broadly, repetitively and with little commitment to truth and consistency.
Our knowledge of these tactics was partly built on a degree of social media transparency that no longer exists in 2024.
The social media site X ceased offering researchers free access to its data in 2023 and now charges $42,000 a month for the level of access necessary for large-scale research. Reddit also ended free access for large-scale research in 2023.
Meta just replaced CrowdTangle, the tool that allowed journalists and researchers to monitor trends on Facebook and Instagram, with a successor that is reportedly less transparent and less accessible.
And then there’s TikTok, generally considered by researchers to be among the most opaque and difficult to work with of the major social media platforms. TikTok has not necessarily become less transparent, but its opacity has grown more consequential as its popularity has expanded.
These recent declines in transparency may blind researchers and the public to the effects of other changes in the social media ecosystem that are altering how disinformation is being spread. These include the continuing rise of influencer marketing, the growing capabilities of artificial intelligence and declining content moderation by social media platforms, itself a consequence of such moderation becoming politically fraught.
The increasingly controversial nature of confronting misinformation and disinformation reportedly culminated in leadership and staff departures at the Stanford Internet Observatory, an organization studying how social media platforms are abused. When Renée DiResta, the observatory’s former research manager, explained how opinion had shifted against the organization, she highlighted the important role of partisan social media influencers.
Recent BBC reporting also points to the central role of influencers in spreading Russian disinformation targeting U.S. voters, as do allegations from the Department of Justice that Russia provided financial support to several well-known conservative influencers.
These instances reflect the growing importance of the influencer economy more broadly and help substantiate the concern that some influencers are becoming vectors of malign foreign influence.
News stories such as these tend to focus on influencers with large followings, yet research indicates that it’s influencers with comparatively small followings who appear to be the most persuasive. Political organizations also approach these micro-influencers because they are relatively cheap to recruit and offer access to key demographic groups.
If these smaller-scale influencers were also being recruited, wittingly or unwittingly, to spread disinformation to U.S. audiences, researchers would have little ability to spot it. Meta’s new tools, for example, allow researchers to examine only those public accounts that have at least 25,000 followers or have gone through the process of receiving a verified badge, leaving many of the most prized influencers beyond large-scale examination by the research community.
The conversations around disinformation in 2024 have been dominated by AI, especially its ability to enhance the quality and quantity of disinformation encountered online. But I have so far focused instead on influencers because they offer a more direct solution to a key challenge faced by those seeking to target the U.S. public with disinformation: content distribution.
Compared with digital advertising, the influencer marketing space is largely unregulated and unmonitored, and it is potentially as alluring an option for distributing targeted disinformation today as online advertisements were in 2016. At the same time, the research community is in a much worse position to assess these evolving tactics than it was during the 2020 elections, or even the 2022 midterms.
What can be done today to confront disinformation in an environment where content moderation is increasingly viewed as overly partisan? The clearest answer may be “prebunking.”
Prebunking can involve presenting a disinformation narrative along with counter-evidence debunking the claim or, less controversially, it can focus on explaining the tactics used to make disinformation persuasive. One such tactic is the use of fictitious American-sounding “news” websites to host disinformation, such as The Houston Post, which the BBC reporting mentioned above cited as the origin of disinformation recently targeting American voters.
Google deployed a wide-scale prebunking campaign educating millions of European voters in the lead-up to this year’s European Union elections. The company reportedly has no plans to launch similar prebunking campaigns in the U.S. because of broader fears that the practice could become politicized.
While prebunking that addresses common mis- and disinformation narratives appears too controversial, it is unclear why this controversy should extend to public education campaigns focused on tactics. Specific narratives may be partisan, but tactics, such as the deceptive use of fictitious news websites, are employed by propagandists across the political spectrum.
As policymakers work to enhance the research community’s ability to understand how these tactics are evolving, they should start by deploying prebunking campaigns focused on teaching the public about known disinformation techniques.
R. Gordon Rinderknecht is an associate behavioral scientist at RAND, a nonprofit, nonpartisan research institution.