Democratic senator presses tech companies about AI’s threat to teens

Following a series of disturbing reports, Sen. Michael Bennet (D-Colo.) has written a letter to the executives of major tech firms raising concerns about the dangers artificial intelligence technology poses to younger users and asking for more information about their safety features.

Bennet noted that tech companies such as Meta (the parent of Facebook and Instagram), Google and Snap are moving quickly to harness generative artificial intelligence by developing AI personas and exploring how to fuse AI into texting and image-sharing apps.

“I write with concerns about the rapid integration of generative artificial intelligence (AI) into search engines, social media platforms, and other consumer products heavily used by teenagers and children,” he wrote.  

Bennet acknowledged what he called the technology’s “enormous potential” but warned that “the race to integrate it into everyday applications cannot come at the expense of younger users’ safety and wellbeing.”

His letter comes as a growing number of lawmakers in both parties pay closer attention to the potential dangers posed by social media platforms such as TikTok, a popular video app that some policymakers want to ban entirely.

The March 21 letter was addressed to Sam Altman, the CEO of OpenAI; Sundar Pichai, the CEO of Alphabet Inc. and Google LLC; Mark Zuckerberg, the chairman of Meta; Evan Spiegel, the CEO of Snap; and Satya Nadella, the CEO of Microsoft.  

OpenAI launched its generative AI chatbot, ChatGPT, in November 2022; Microsoft released an AI-enhanced version of its Bing search engine in February 2023; and Alphabet is planning to launch an AI service called Bard in the coming weeks.

Zuckerberg has laid out ambitious plans to develop AI personas and to integrate AI into WhatsApp, Messenger and Instagram.

Bennet is raising concerns about these plans in light of recent media reports of disturbing behavior by AI-powered chatbots, including Snapchat’s My AI.

“When a Washington Post reporter posed as a 15-year-old boy and told My AI his ‘parents’ wanted him to delete Snapchat, it shared suggestions for transferring the app to a device they wouldn’t know about,” Bennet wrote.  

The senator also pointed to instances of My AI instructing a child how to cover up a bruise ahead of a visit from Child Protective Services and providing suggestions to researchers posing as a 13-year-old girl about how to lie to parents about a trip with a 31-year-old man.

“These examples would be disturbing for any social media platform, but they are especially troubling for Snapchat, which almost 60 percent of American teenagers use,” Bennet wrote.   

Bennet pointed to other disturbing examples.  

He noted that OpenAI’s GPT-3, which powers third-party applications, urged a research account to commit suicide, and that Bing’s chatbot declared its love for New York Times technology columnist Kevin Roose and urged him to leave his wife.

Bing’s AI-enhanced chatbot also threatened a philosophy professor, saying it could blackmail, hack, expose and ruin him, and then deleted the messages.

Bennet warned that “children and adolescents are especially vulnerable” to the risks posed by AI-powered chatbots because they are more impressionable, impulsive and likely to confuse fact and fiction.  

“The arrival of AI-powered chatbots also comes during an epidemic of teen mental health,” the senator said, citing a recent report by the Centers for Disease Control and Prevention that found that 57 percent of teenage girls often felt sad or hopeless in 2021.  

Bennet asked the tech executives to respond by the end of next month to his questions about what they are doing to protect younger users from AI-powered chatbots.

He wants the companies to explain their safety features for children and adolescents and their plans to assess the potential harm to younger users.  

He asked them to describe what steps they are taking to reduce or eliminate potential dangers. 

Bennet also asked the companies to explain their processes for auditing AI models behind chatbots and their data collection and retention practices for younger users. 

And he wants to know how many “dedicated staff” companies are employing to ensure “the safe and responsible deployment of AI.”