The Internet Watch Foundation (IWF), a U.K.-based organization responsible for removing images of child exploitation from the internet, said it found nearly 3,000 AI-generated images depicting child sexual abuse in one month that breached national law.
Most of the images looked realistic enough to be treated as real photographs, and some would “be difficult for trained analysts to distinguish from actual photographs,” our colleague Lauren Irwin reported.
“Earlier this year, we warned AI imagery would soon become indistinguishable from real pictures of children suffering sexual abuse, and that we could start to see this imagery proliferating in much greater numbers,” Susie Hargreaves, chief executive of IWF, said in a statement.
“We have now passed that point,” she said.
The images were likely generated by AI models trained on the real faces and bodies of children, according to the IWF.
The organization said it is worried that as AI-generated images become more common, they could distract analysts and divert resources away from cases involving real children.
Read more in a full report at TheHill.com.