Parents whose children have been seriously harmed or lost their lives due to social media have been let down by a key House panel’s advancement of a watered-down version of the Kids Online Safety Act, which prioritizes the interests of tech companies over the needs of grieving families.
The new version significantly dilutes two key provisions that could have made this one of the most ambitious pieces of legislation on tech design and safety. With the House in recess until Nov. 12, Congress must reconsider the original intent of this bill and collaborate to ensure that the strongest possible version of the bill is passed before the end of the year. Failing to do so risks absolving tech companies of their responsibility, while depriving parents of the justice they have long awaited.
The Kids Online Safety Act was crafted because social media platforms have created a technology that is addictive — and consequentially harmful for children — by design: the endless scrolling, the intermittent rewards, the dopamine hits. Even studies by social media companies themselves reveal that platforms like Facebook, Snapchat and TikTok intentionally use addictive features to keep users engaged.
In fact, these addictive features are the very reason the infamous, viral TikTok Blackout Challenge killed 20 children within 18 months. The deadly challenge spread rapidly through the app because of its opaque algorithmic design, which specifically targeted children.
The bill, passed by an overwhelming majority in the Senate in June, would have addressed this exact problem. Aimed at establishing a “duty of care” for online platforms, including social media, it required Big Tech to exercise reasonable care in the creation and implementation of any design feature in order to prevent and mitigate harms such as mental health issues, addiction and sexual exploitation, among other problems.
That version of the Kids Online Safety Act would have made a difference because it would have regulated design features that shape user behavior. As the bill stated, social platforms would be required to “exercise reasonable care in the creation and implementation of any design feature” that encourages and increases time spent online by minors. Examples include infinite scrolling or autoplay, notifications, rewards for time spent on the platform, and personalized recommendation systems, among others.
Unfortunately, the latest version of the bill addressed in the House undermines the key provisions that make the Kids Online Safety Act so comprehensive.
First, it removes tech companies’ responsibility to prevent and mitigate features that create mental health harms, including anxiety, depression, eating disorders and suicide, leaving the bill concerned almost exclusively with physical harm.
Second, it significantly limits the provisions requiring technology companies to design products that mitigate compulsive use by minors. It does so by narrowing the definition of compulsive use to cases where use severely impairs major life activities, such as hearing, walking, sleeping, thinking and working, as defined by the Americans with Disabilities Act of 1990. As a result, the bill is no longer equipped to protect children who spend all night glued to social media and still manage to go to school — only children who have already been severely impaired.
This invites the question: Who is this bill for?
For tech lobbyists, these new changes reflect the demands they’ve been pushing for all along. Many argue that the bill’s provisions restrict certain kinds of free speech and violate the First Amendment. But this argument rings hollow, especially coming from the same companies that exploited children’s vulnerabilities.
Opponents who say that the Kids Online Safety Act infringes on First Amendment rights fail to recognize that it regulates design features that shape user behavior, not the content or speech itself. The Kids Online Safety Act does not target what is expressed on social media platforms. It is not about empowering the Federal Trade Commission or attorneys general to remove content that is dubbed “harmful.” Instead, the regulation targets the features that condition online user behavior: how frequently users access digital products, and the stopping points (or lack thereof) built into those products to protect against overuse.
Weakening the Kids Online Safety Act to appease tech companies fails to solve the bigger problem. Congress must remember who this legislation is for. It is not to safeguard tech companies, but to protect children. If Congress passes a version of the Kids Online Safety Act that remains focused on design regulation and safety, tech companies will be held responsible and accountable for the harm they have done.
Only then will parents finally get the justice they have long awaited.
Gaia Bernstein is a law professor, the founding director of the Institute for Privacy Protection at the Seton Hall University School of Law and the author of “Unwired: Gaining Control over Addictive Technologies.” She is also a member of Human Change, a global platform raising the alarm about the negative impacts technology is having on children’s mental health.