Law enforcement, struggling to prosecute AI-generated child pornography, asks Congress to act
Law enforcement is struggling to prosecute abusive, sexually explicit images of minors created by artificial intelligence (AI), Rep. Anna Paulina Luna (R-Fla.) told fellow members at a House Oversight subcommittee hearing Tuesday.
Laws against child sexual abuse material (CSAM) require “an actual photo, a real photograph, of a child, to be prosecuted,” Carl Szabo, vice president of nonprofit NetChoice, told lawmakers. With generative AI, average photos of minors are being turned into fictitious but explicit content.
“Bad actors are taking photographs of minors, using AI to modify into sexually compromising positions, and then escaping the letter of the law, not the purpose of the law but the letter of the law,” Szabo said.
Attorneys general from all 50 states wrote a bipartisan letter urging Congress to “study the means and methods of [AI] used to exploit children” and to “propose solutions to deter and address such exploitation to protect America’s children.”
The letter called on Congress to “explicitly cover AI-generated CSAM” to enable prosecutors.
“This is actually something that the FBI, in talking to them about cybercrimes, asked us to specifically look up because they are having issues currently prosecuting these really gross, sick individuals; because, technically, a child is not hurt in the process, because it is a generated image,” Luna said.
The Hill has reached out to the FBI for comment.
Although AI-generated CSAM currently represents a small portion of the abusive content circulating online, the ease of use, versatility and highly realistic nature of AI programs mean their use for CSAM will likely grow, John Shehan, vice president of the Exploited Children Division at the National Center for Missing & Exploited Children (NCMEC), said.
Lawmakers and witnesses frequently cited research from the Stanford Internet Observatory, which found generative AI is enabling the creation of more CSAM, and that training data for publicly available AI models have been tainted with CSAM.
NCMEC provides “the nation’s centralized reporting system for the online exploitation of children,” known as the CyberTipline. Only five generative AI companies have submitted reports to the tip line to date, despite an “explosion” in the number of apps or services available, according to Shehan.
“State and local law enforcement are having to deal with these issues, because the technology companies are not taking the steps on the front end to build these tools with safety by design,” he said.
Shehan also said "nudifying" or "declothing" AI applications and web services were especially egregious with regard to the generation of CSAM.
“None of the platforms that offer ‘nudify’ or ‘unclothe’ apps have registered to report to NCMEC’s CyberTipline; none have engaged with NCMEC regarding how to avoid creation of sexually exploitative and nude content of children and none have submitted reports to NCMEC’s CyberTipline,” he said.
“The sheer volume of CyberTips has often prevented law enforcement from pursuing proactive investigations at first that would efficiently target the most egregious offenders,” Rep. Nick Langworthy (R-N.Y.) said.
“In only a three-month period from November 1, 2022, to February 1, 2023, there were over 99,000 IP addresses throughout the United States that distributed known CSAM, and only 782 were investigated. Currently, law enforcement, through no fault of their own, they just don’t have the ability to investigate, prosecute the overwhelming number of these cases,” Langworthy added, referring to information from previous testimony by John Pizzuro, CEO of nonprofit Raven, during a February 2023 Senate Judiciary hearing on protecting children online.
Copyright 2024 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.