The project, dubbed “Spitting Images,” charts only deepfakes that have gained significant traction or been debunked by journalists.
It comes in a historic year for elections: countries home to more than half the world's population have held or will hold elections in 2024, a timing that coincides with a surge in AI-generated audio, video and images spreading false narratives and misinformation about candidates.
The U.S. has already seen its fair share of AI-generated content targeting the election, from fake audio claiming to be President Biden encouraging New Hampshire voters to skip the primary to former President Trump posting fake images of Taylor Swift falsely suggesting the singer endorsed him.
Lindsay Gorman, the project’s lead, told The Hill she is hopeful the tool will also spot trends that can help policymakers weigh how to regulate the use of artificial intelligence in elections.
“We wanted to understand, how is [AI] actually being deployed in the real world over this historic election year? And for policymakers that are thinking through potential legislation or potential guardrails on artificial intelligence — particularly around political AI — should they have transparency requirements when it comes to politicians and elections? Where should they be focusing their efforts?” Gorman said.
The tracker has charted 133 deepfakes released in more than 30 countries.
A few trends have clearly emerged, Gorman said, including a reliance on audio deepfakes, which accounted for almost 70 percent of tracked cases.
“The fact is that the current state of the technology is just not that convincing when it comes to images and videos, but it is when it comes to audio. It’s very difficult to tell when something’s been AI-generated,” she said.
Read more in a full report at TheHill.com.