It’s almost too late to protect our elections from AI — Congress must act now 

The 2024 general election is nearly upon us, and artificial intelligence is already playing a significant role. This emerging technology is changing how we communicate with one another and creating new opportunities to influence and manipulate voters.

Bad actors can use AI to create deceptive audio or visual content — sometimes called a “deepfake” — that convincingly portrays a false or distorted reality. AI can mislead voters about what a person has said or done, even showing them events that have never happened. We’ve already seen deepfakes being used to spread falsehoods about candidates, and it’s likely we’ll see more before the 2024 election is over. 

For example, ahead of New Hampshire’s 2024 presidential primary, robocalls featuring an AI-generated imitation of President Biden’s voice told would-be Democratic voters not to vote in the primary, falsely claiming that doing so would preclude them from voting in the general election.

Likewise, on the eve of a hotly contested mayoral election in Chicago last year, an anonymous social media account posted a deepfake message falsely depicting a candidate saying, “back in my day, cops would kill 17 or 18 people and nobody would bat an eye.” The post went viral before it was taken down, and the targeted candidate lost the election by fewer than 26,500 votes, or 4.5 percent of the votes cast.  

These examples may soon appear amateurish as this nascent technology continues to advance. As AI becomes more widespread, the potential for electoral mischief will only grow.  

Politicians may not be the only ones falsely depicted or targeted: AI could be used to falsely show an influential figure, like Taylor Swift or Elon Musk, urging their massive audiences to support a specific candidate or misrepresenting where or how to cast a ballot.

This threat doesn’t end on Election Day, either: AI-generated deepfakes could be used to falsely show election workers throwing away ballots or bringing in fake ballots to alter election results, in a bid to inflame the public and even incite post-election violence like what we saw during the January 6 attack on the U.S. Capitol.

It is imperative that Congress establish guardrails around this essentially unregulated area now — not after a disastrous electoral event has taken place. Federal regulatory agencies have taken some useful, albeit small, steps in the right direction. But their limited efforts are woefully inadequate to meet the challenge at hand.  

For instance, the Federal Election Commission, the agency primarily responsible for enforcing federal campaign finance law, just issued an “interpretive rule” confirming that a decades-old law prohibiting “fraudulent misrepresentation of campaign authority” — including falsely claiming to speak for or on behalf of a candidate in a way that is “damaging” to them — applies whether or not a bad actor uses AI.

While that guidance is helpful, it doesn’t come close to addressing the myriad election-related dangers that AI presents. 

Meanwhile, the Federal Communications Commission (FCC), which regulates broadcasters like TV and radio stations, is crafting a regulation that would require broadcasters to include disclaimers on any political ad that uses AI. Such disclaimers would make voters aware when an ad they’re seeing uses AI to distort reality, which is a great idea. But the FCC can only apply this rule to radio and TV ads — in an era when a huge chunk of political advertising appears online, on apps, or via streaming video, regulating radio and TV alone won’t cut it.

What’s needed are new federal laws that, at a minimum, accomplish two things: prohibit ads that fraudulently deceive and manipulate voters, and require disclaimers on political ads that use AI regardless of the medium used to distribute those ads.  

First, Congress should ban the use of deepfakes in ads featuring candidates or election workers. There is no place in our democracy for outright lies and manipulation about our elections. While the First Amendment protects free expression, it does not protect fraudulent speech.  

Second, Congress should require political ads in any medium to carry disclaimers informing viewers when the ad’s content has been materially created, altered or disseminated with AI. Disclaimers on the face of a communication will put voters on notice that something they are seeing or hearing has been altered, giving them the opportunity to investigate the changes, or at least treat the message with an appropriate degree of skepticism.

The good news is that Congress is currently considering several bills that would accomplish these goals: the Protect Elections from Deceptive AI Act, the AI Transparency in Elections Act, the Preparing Election Administrators for AI Act, the Fraudulent Artificial Intelligence Regulations (FAIR) Elections Act, and the AI Ads Act.

The less good news is that time is running extremely short for Congress to act before the 2024 election.  

Over the next few weeks, Americans will continue to be inundated with political ads every time they turn on their TVs, browse the internet, or open their favorite social media apps. For decades, voters have had to parse such ads to cut through the political spin and decide who to vote for. Now, thanks to AI, they’ll have to decide whether what they’re seeing or hearing is even real.

That’s a lot to ask of people exercising the most basic right of democratic participation, and there’s a genuine risk that frustrated and distrustful Americans will simply tune out and turn away.

Congress shouldn’t allow that to happen. It should act now to prevent AI from undermining our democracy, in 2024 and well beyond.  

Saurav Ghosh serves as director for Federal Campaign Finance Reform at the nonpartisan Campaign Legal Center. Previously, he served in the Office of General Counsel at the Federal Election Commission, investigating alleged violations in dozens of campaign finance matters.