Congressional CEO grillings can’t solve disinformation: We need a public interest regulator

Last Thursday, the House Energy and Commerce Committee held yet another hearing to grill social media CEOs about the mis- and disinformation on their platforms. While much of the hearing consisted of members soliciting soundbites from the executives on everything from social media’s impacts on children to alleged anti-conservative bias, one statement from Rep. Mike Doyle (D-Pa.) cut to the heart of the issue and incidentally highlighted exactly why hearings are an insufficient substitute for genuine oversight of the platforms.

“Time after time, you are picking engagement and profit over the health and safety of your users, our nation, and our democracy,” Doyle said.

While hearings may make for good political theater, they don’t get us any closer to solving this problem, because nothing in them compels companies to address the tension between supporting a healthy information space, protecting free speech, and turning a profit. To make sure this happens, Congress needs to step up and empower an independent expert regulator to oversee major platform companies in the public interest.

Since the aftermath of the 2016 election, when social media representatives were first brought before Congress to atone for failing to address — and at times even acknowledge — foreign interference on their platforms, trips to Capitol Hill have become a regular occurrence. For Mark Zuckerberg, Thursday was his fourth hearing since July. For Jack Dorsey and Sundar Pichai, it was their third. 

As in the past, companies touted efforts to counter the threat, parading selective self-assessments and seemingly impressive metrics — Zuckerberg reported that Facebook had removed “over 12 million pieces of false content about COVID-19” and attached warning labels “to more than 150 million pieces of content” around the 2020 election, while Pichai asserted that YouTube’s election information disclaimers had “been shown over 8 billion times.”

But these metrics ignore the real issue behind mis- and disinformation on the platforms: the architecture of the sites themselves. Meanwhile, the lack of genuine oversight and regulation of social media has created an environment of self-regulation and self-reporting that has proven problematic.

As a new report from the Alliance for Securing Democracy reveals, despite numerous new policies to counter the spread of false information in the lead-up to the 2020 election, the platforms were still unprepared when it mattered most. In the wake of the election, Facebook, Twitter, and YouTube were caught flat-footed by a domestic disinformation campaign targeting the integrity of the voting process.

Facebook attempted a quick response to the spread of false narratives, removing some related groups and implementing “break-glass tools” to promote authoritative content. Meanwhile, Twitter attached labels to 300,000 tweets spreading false information in the two weeks around the election. YouTube took down content that misled voters about where or how to vote, and announced only after the safe harbor deadline for states to certify the election — more than a month after voting day — that it would start removing false claims about the election uploaded to its platform.

These efforts were too little, too late.

False election narratives continued to circulate widely on Facebook, Twitter, and YouTube, while groups that engaged with that content organized rallies online that culminated in the storming of the U.S. Capitol on Jan. 6, 2021.

Meanwhile, by mid-December Facebook and Twitter took steps in the wrong direction, rolling back changes to platform architecture that were designed to mitigate the spread of false information. YouTube continued to slide under the radar, drawing comparatively less attention despite the ongoing spread of false narratives on its platform.

This reversal, and the broader failure to commit to changes in platform architecture, points to a more fundamental dynamic underlying why congressional hearings and self-regulation are not an adequate replacement for genuine oversight of social media companies.

While platforms like Facebook assert that addressing misinformation is in their best interest, this may not be the case. Social media platforms are designed to spread engaging, rather than authoritative, content. And conspiratorial and false content drives engagement, keeping users scrolling, clicking, and viewing more ads. Reducing the spread of these narratives improves online conversation, but it also threatens to decrease engagement and thereby jeopardize growth.

Political criticisms of bias further disincentivize platforms from taking steps to shift architecture. Though these accusations mostly lack evidence, partisan outlets that would see a decrease in attention from changes that elevate authoritative content are quick to criticize the companies when they take steps in that direction — and their criticisms often gain support from members of Congress.

Whether or not platforms make earnest attempts to navigate this dynamic is irrelevant. The fact is that business incentives are at odds with the public good — and without actual oversight, users are caught in the middle. Platform attempts to address the challenge have proven insufficient, while self-declared policy changes and assessments have not held up to outside scrutiny.

To change this status quo, Congress needs to take concrete action, empowering expert oversight in the form of an independent regulator for large social media companies.

This body could act to protect consumers and promote a healthy information space by conducting audits of platform architecture to facilitate greater transparency and by overseeing company attempts to counter mis- and disinformation.

In lieu of self-assessments, it could mandate information sharing with researchers to study platform policy changes and could require more transparent communication with the public around platform changes and decisions.

Importantly, the regulator should not be able to dictate to platforms what is or is not acceptable speech but should instead ensure that platforms are not designed to manipulate the public.

While the creation of such a body would be difficult given the polarized state of Congress, decisive action is necessary: four years of regular hearings have produced little in the way of results.

Brad Hanlon is a research analyst at the Alliance for Securing Democracy, where he focuses on authoritarian information operations and disinformation.
