It’s time to bring the public interest standard to digital platforms
Last week Sen. Michael Bennet (D-Colo.) introduced the Digital Platform Commission Act, which the press release described as “the first-ever legislation in Congress to create an expert federal body empowered to provide comprehensive, sector-specific regulation of digital platforms.”
Calls for such a regulatory body have been circulating for years, with many analysts debating whether the nature and scope of the concerns raised by digital platforms fit neatly within the authority or expertise of existing regulators such as the Federal Trade Commission or the Federal Communications Commission.
One particularly notable aspect of the proposed legislation is that it brings to bear a regulatory concept that has remained on the margins of most policy deliberations about digital platforms but is long overdue to be a focal point of how platform regulation moves forward: the notion of the “public interest, convenience, and necessity.”
The public interest principle appears four times in the Digital Platform Commission Act. Specifically, the act directs the new commission to regulate digital platforms to ensure their operations remain consistent with the public interest. It also calls for the creation of transparency and disclosure obligations to enable trusted third-party research and analysis consistent with the public interest.
To understand what a public interest model for digital platform regulation might look like, it’s helpful to revisit the concept’s history as a framework for broadcast regulation.
The public interest principle has been the linchpin of broadcast regulation since the creation in 1927 of the Federal Radio Commission (succeeded in 1934 by the Federal Communications Commission). The public interest regulatory model is built on the notion that certain entities (e.g., broadcasters) merit designation as public trustees, and as such can be subject to government-mandated public interest obligations directed at meeting core policy goals such as protecting children, preserving and promoting competition, providing access to diverse sources and content, and cultivating an informed citizenry.
Importantly, meeting these public interest obligations has often infringed, to some degree, on the speech rights of broadcasters. Nonetheless, the Supreme Court has found this regulatory framework to be consistent with the First Amendment.
However, the consensus over the past decade and a half has been that there is little that digital platform regulation can learn or borrow from broadcast regulation. According to this line of thinking, broadcast media and digital platforms are too fundamentally different for similar regulatory frameworks to be employed. In the earliest days of my work on a book that would ultimately be titled “Social Media and the Public Interest,” I recall being told by a media law scholar whom I admire that the public interest was too much of an “old media term” to be relevant to contemporary discussions about digital platform regulation.
In the years since this conversation, we have seen digital platforms replicate many of the harms and concerns that motivated Congress to impose the public interest regulatory framework on broadcasting. Propaganda, disinformation (both medical and political), and content harmful to minors — all of these concerns that are front and center in current debates about digital platform regulation were similarly key drivers of the origins and evolution of broadcast regulation.
Perhaps the divide between “old” and “new” media is not as vast as we’ve been led to believe.
There has, of course, been plenty to criticize about how the public interest standard has been defined and applied throughout the history of broadcast regulation. Some have argued that the principle has remained vague to the point of being meaningless. Others have argued that it has, at times, been too strongly associated with industry interests, or that it has been reduced to a mere reflection of consumer demand. We certainly don’t want to romanticize the history of public interest regulation in broadcasting.
But neither do we want to ignore it. We can certainly learn from this checkered history. We can revisit it and try to identify which aspects of this regulatory framework might be relevant to how we approach digital platform regulation, and which might not.
The Digital Platform Commission Act opens the door to a much larger — and long-overdue — conversation about how our legacy model of media regulation can potentially be adapted to modern digital platforms. Through the concept of the public interest, we can begin to construct a much-needed regulatory framework that treats large digital platforms as public trustees, with, as the act specifies, a “duty of care” to the millions of citizens who rely on these platforms for news and information, and who grant these platforms access to unprecedented quantities of their personal data.
As much as we have been inclined to treat these digital platforms that have dramatically transformed our media ecosystem as new and unprecedented, the path forward in regulating them may actually be found in looking back at how we’ve regulated our media in the past.
Philip M. Napoli is the James R. Shepley Professor of Public Policy in the Sanford School of Public Policy at Duke University, and Director of the DeWitt Wallace Center for Media & Democracy. His most recent book is “Social Media and the Public Interest: Media Regulation in the Disinformation Age.”