Is it time to go back to the future on internet content regulation?


It’s no secret that governments are actively trying to manage internet content. Congressional committees, the EU, and democratic and authoritarian governments everywhere have been seeking new laws to directly control internet content.

Recent efforts to directly control internet content parallel — but are actually quite different from — efforts to remove the immunities from responsibility for third-party postings that internet platforms have enjoyed since the 1990s. They are also quite different from civic efforts in many countries to encourage or coerce platforms to remove unwanted content.

These new efforts seek to directly regulate internet content.

For example, within just the past few weeks, Russian courts imposed fines on several internet platforms for their failure to remove “illegal content,” the Cyberspace Administration of China imposed stiff fines on Chinese platforms for allowing “illegal content,” and even the government of free-speech-loving Sweden has created a new government agency to control internet “disinformation.”

Direct government regulation (some would say censorship) of internet content is not new. What is new is its rapid expansion into topics that were avoided in the past.

At the dawn of the internet age, in the mid-1990s, governments tended toward a hands-off approach to controlling internet content for several reasons. The internet was comparatively modest and fractured into many small chatrooms, websites, merchants, etc., none of which exercised much influence; it had grown from libertarian roots in which absolute free speech was venerated; and it was eclipsed by online networks like France’s Minitel and America’s AOL/Prodigy/CompuServe, which could — and did — exercise control over the content they allowed on their platforms.

Under these circumstances, governments worldwide tended to focus on laws regulating internet copyright infringement and internet child pornography/pedophilia, because these materials were illegal nearly everywhere and because influential groups lobbied strenuously for laws clarifying that child pornography and copyright infringement remained against the law on the internet.

In contrast, different countries, provinces, courts, and regulators in hundreds of different jurisdictions had different ideas on what else should be illegal on the internet; moreover, much of the content posted on internet platforms was (and is) anonymous, so for governments to prosecute anonymous posters from 200 different countries seemed almost impossible. Thus, aside from those two exceptions, governments largely stayed away from trying to directly regulate material on the internet.

Into this 1990s void emerged a now-overlooked approach to managing internet content that deserves to be considered: content labeling and filtering.

The idea of content labeling is familiar to anyone who has looked at movie ratings such as “PG-13,” where the labeling is done by the studios and the filtering by an adult. The technique has been used for content ranging from video games to TV programs. Its theoretical benefit is that it allows legal content to circulate globally while leaving control to the end users, but only if the creator or poster honestly labels the content and the user has access to effective filtering controls.
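To make the mechanics concrete, here is a minimal sketch, in Python, of how creator-supplied labels and user-side filters could interact. The label categories, thresholds, and names below are illustrative assumptions, not any existing standard, though the idea loosely echoes 1990s labeling schemes such as the W3C’s PICS.

    # A minimal sketch of creator-side labeling and user-side filtering.
    # The label vocabulary and thresholds are hypothetical, not a real standard.
    from dataclasses import dataclass

    @dataclass
    class ContentLabel:
        # Self-declared by the creator/poster at publication time.
        violence: int = 0   # 0 = none ... 3 = graphic
        nudity: int = 0
        language: int = 0

    @dataclass
    class UserFilter:
        # Set by the end user (or a parent) on their own device or account.
        max_violence: int = 1
        max_nudity: int = 0
        max_language: int = 2

        def allows(self, label: ContentLabel) -> bool:
            # Content is shown only if every labeled dimension is within
            # the user's chosen limits.
            return (label.violence <= self.max_violence
                    and label.nudity <= self.max_nudity
                    and label.language <= self.max_language)

    # Example: a "PG-13"-like post passes a permissive filter but not a strict one.
    post = ContentLabel(violence=2, language=1)
    print(UserFilter(max_violence=3, max_language=2).allows(post))  # True
    print(UserFilter().allows(post))                                # False

As the sketch makes plain, the decision to block or allow happens at the edge, with the user, rather than in a government ministry or a platform’s trust-and-safety office — which is precisely the appeal, and the weakness, of the approach.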

For all the limits on its viability, internet content labeling/filtering has some advantages over both local (constantly changing) government regulation and equally changeable platform-owner regulation of content. Government regulation of internet content cannot avoid producing a patchwork of thousands of shifting rules over what is illegal. Content controlled by platform owners cannot avoid placing enormous influence and control in the hands of a small number of companies.

So why did content labeling and filtering recede?

As the internet grew and consolidated into a few large platforms used by billions, governments came under great pressure to expand their control over content. At the same time, the early chorus of libertarian free speech advocates was overwhelmed by people and interests complaining about internet content and calling for its regulation. Perhaps most important, the emergence of global internet platforms gave governments effective tools to control large parts of internet content regardless of where the content originated or whether the person posting it was anonymous.

The governments of some countries have always managed internet content by excluding uncontrolled platforms and websites and through an intricate network of influence over domestic operators. But in recent years (and again, separate from removing the immunity from responsibility for third-party content), some influential governments have stepped into direct regulation of content. Nowhere has this been more consequential than in Germany’s 2017 Network Enforcement Act, which established a new standard of explicit government control over “hate speech” posted on the internet; the law didn’t make new content illegal so much as it made clear that large platforms would be liable for whatever hate speech the government instructed them to remove.

Within two years, seven countries — including Russia, Australia, Singapore, Vietnam, and Kenya — enacted comparable laws regulating their own versions of undesirable internet content, and — not long after — the EU, India, and China followed suit. The past few years have seen an explosion in new laws and regulations asserting control either within that government’s territory or globally.

Most of these new regulations simply extended to the internet laws that had already made content illegal in older media, using the new tool of regulating the large internet platforms that facilitate it. But some, such as those outlawing internet “disinformation” or internet materials classified as “demeaning,” may break new ground.

Where this new round of direct governmental regulation of content leads is unclear. There is little possibility that major governments will globally standardize their detailed definitions of what content is illegal (as they did decades ago with copyright infringement and child pornography). And it’s unlikely that platform owners’ sole control over global internet content will satisfy the governments that don’t get their way.

So, it may be time to reconsider content labeling and filtering, which — in combination with artificial intelligence — might offer a third way to address the increasingly difficult issues of internet content.

Roger Cochetti provides consulting and advisory services in Washington, D.C.  He was a senior executive with Communications Satellite Corporation (COMSAT) from 1981 through 1994. He also directed internet public policy for IBM from 1994 through 2000 and later served as Senior Vice-President & Chief Policy Officer for VeriSign and Group Policy Director for CompTIA. He served on the State Department’s Advisory Committee on International Communications and Information Policy during the Bush and Obama administrations, has testified on internet policy issues numerous times and served on advisory committees to the FTC and various UN agencies. He is the author of the Mobile Satellite Communications Handbook.
