The views expressed by contributors are their own and not the view of The Hill

The metaverse is the world’s strongest argument for social media regulation


The challenge of policing the metaverse illustrates the need for government to regulate social media.

The metaverse refers to an immersive virtual reality (VR) environment where users, appearing as life-like avatars, can interact in three-dimensional spaces that mimic the real world. Mark Zuckerberg is so sure that the metaverse eventually will eclipse existing social media platforms that he changed the name of his company from Facebook to Meta Platforms and is spending $10 billion this year alone to develop VR headsets, software and content. 

In addition to a host of technological hurdles, the metaverse presents new questions about the rules governing what users say and do online and how those rules will be enforced. Already there are reports of abuse. Meta acknowledged in December that a woman beta-testing Horizon Worlds, the company’s early-version VR space, had complained that her avatar had been groped by another user. 

How will Meta and other tech companies respond to such incidents, especially when millions of people — perhaps hundreds of millions — are simultaneously gathering and interacting in a potentially endless array of metaverse scenarios? The combination of automated content-moderation systems and human review deployed by existing platforms to police text and images almost certainly would not be up to the task. 

That’s where government regulation comes in. 

Spurred by the industry’s failure to sufficiently self-regulate existing two-dimensional iterations of social media, lawmakers have proposed dozens of bills to rein in the industry. The need for better oversight is palpable: In the wake of the 2020 presidential election, Facebook groups kindled baseless claims of rigged voting machines and phony ballots that fueled the Jan. 6, 2021, insurrection. According to an international network of digital fact-checking groups, YouTube provides “one of the major conduits of online disinformation and misinformation worldwide,” amplifying hate speech against vulnerable groups, undermining vaccination campaigns and propping up authoritarians from Brazil to the Philippines. Twitter has fostered a “disinformation-for-hire industry” in Kenya, stoked civil war in Ethiopia, and spread “fake news” in Nigeria. 

Several of the bills pending before Congress offer worthy ideas: some would require social media companies to disclose more about how they moderate content, while others would make it easier to hold platforms accountable via lawsuits. Unfortunately, most of the bills are too fragmentary to get the job done. 

In a recently published white paper, the NYU Stern Center for Business and Human Rights offers principles and policies that could shape a more comprehensive approach, incorporating the most promising provisions from existing legislation. The center urges Congress, as a first step, to create a dedicated, well-funded digital bureau within the Federal Trade Commission, which, for the first time, would exercise sustained oversight of social media companies. 

Lawmakers should empower the FTC’s digital bureau to enforce a new mandate as part of the agency’s mission to protect consumers from “unfair or deceptive” corporate behavior. First, platforms would have to maintain procedurally adequate content moderation systems. Such systems would have to deliver on the promises the platforms make in their terms of service and community standards about protecting users from harmful content. Subject to FTC fine-tuning, procedural adequacy would entail clearly articulated rules, enforcement practices, and means for user appeals. 

Second, Congress ought to direct the FTC to enforce new transparency requirements. These should include disclosure of how algorithms rank, recommend and remove content, as well as data about how and why certain harmful content goes viral. To avoid impinging on free speech rights protected by the First Amendment, the FTC should neither set substantive content policy nor get involved in decisions to remove posts or accounts or leave them up. 

The sheer scale of platforms like Facebook, Instagram, YouTube, and TikTok — untold billions of posts from billions of users — means that some unwelcome content will spread, no matter what safeguards are put in place. But with a spotlight on some of their inner workings and new obligations to conduct procedurally adequate moderation, social media companies would have a strong incentive to patrol their platforms more vigilantly. 

Congress and the FTC must start building regulatory capacity now because the need for it will only grow when the metaverse arrives full force. Meta and other social media companies should be required to explain publicly how they will detect and respond to VR gatherings where white supremacists, anti-Semites, or Islamophobes trade hateful rhetoric, or worse. Can artificial intelligence “listen” to conversations that would, if rendered in text, be removed from Facebook? Would Meta employees parachute into metaverse spaces to eavesdrop? Such snooping would present an obvious threat to user privacy, but how else would a company interpret body language and other context that distinguish dangerous calls for extremist acts from mere hyperbole or satire? 

And what about the woman whose avatar was sexually assaulted in Meta’s prototype Horizon Worlds? The company called the episode “absolutely unfortunate.” It said she should have used a feature called “Safe Zone,” which allows users to activate a protective bubble that stops anyone from touching or talking to them. More generally, it appears that Meta is relying primarily on users in early-stage VR spaces to report infractions or block antagonists themselves. 

BuzzFeed News recently conducted an experiment in which reporters set up a Horizon Worlds space they called “Qniverse” and decorated it with claims that Meta has promised to remove from Facebook and Instagram, including “vaccines cause autism,” “COVID is a hoax,” and the QAnon slogan “where we go one we go all.” For more than 48 hours, Meta’s content moderation system took no action against the conspiracy-minded misinformation zone, even after two BuzzFeed journalists separately reported the infractions. One of the complainants received a response from the company that said, “Our trained safety specialist reviewed your report and determined that the content in the Qniverse doesn’t violate our Content in VR Policy.” 

The BuzzFeed journalists then disclosed the situation to Meta’s communications department, an avenue not available to ordinary users. The next day, the Qniverse disappeared. A spokesman told BuzzFeed that the company acted “after further review” but declined to explain the episode or answer questions about Meta’s plans for overseeing the metaverse more generally. 

FTC oversight could, at a minimum, require Meta and other companies to explain how machines and humans are supposed to keep today’s platforms and future VR realms safe — and whether these measures succeed. 

Paul M. Barrett is deputy director of the NYU Stern Center for Business and Human Rights and an adjunct professor at the NYU School of Law.
