Yes, we should regulate AI-generated political ads — but don’t stop there

Ahead of the 2024 U.S. election, the Federal Election Commission (FEC) is considering whether and how to regulate political ads that contain media generated by artificial intelligence (AI), known as deepfakes.

Last week, the FEC advanced a petition calling for regulation of political ads that specifically use generative AI to misrepresent what a candidate said or did. This is not a hypothetical issue — campaigns have already begun to use generative AI in their ads.

Deepfakes are images, audio clips and videos that have been automatically synthesized by an AI-powered system. From only a single text prompt, for example, generative AI can create a hyper-realistic image of anything from a fake bombing at the Pentagon to Pope Francis in a puffy coat. From only a few minutes of audio, generative AI can clone a candidate’s voice, allowing its creator to generate a recording of a candidate saying just about anything.

And generative AI can insert a person’s likeness into anything, ranging from a blockbuster Hollywood movie to sexually explicit material.

The power of generative AI, when coupled with the reach and speed of social media’s distribution and amplification, is a real threat to an information ecosystem already polluted with half-truths, lies and conspiracies. Deepfakes, however, are only the latest in a long line of techniques used to manipulate reality. In weighing the use of AI in political ads, the FEC should take a broader view that accounts for both past and emerging technologies.

Well before there was generative AI, dictators, media outlets and political opponents managed to distort the photographic record. As early as the 1930s, Joseph Stalin was airbrushing purged officials out of photographs; National Geographic digitally nudged the Great Pyramids of Giza closer together to fit on its February 1982 cover; a photo of 2004 presidential candidate John Kerry was manipulated to show him sharing a stage with Jane Fonda; and a 2020 video of former Speaker Nancy Pelosi (D-Calif.) was slowed to make her sound intoxicated during a public speech — all without the power of AI.

If the FEC believes that manipulating the photographic record in campaigns should be banned or regulated, it should focus on the content, not the mechanism by which the content is manipulated. In this regard, focusing on generative AI is somewhat myopic, as it allows so-called cheap fakes — those not using AI — and emerging technologies to slip through the cracks.

While there are going to be clear-cut examples of manipulated content where a candidate is made to say something they never did, there are also going to be more subtle cases. In a recent Ron DeSantis ad attacking former President Trump, for example, we hear Trump’s voice criticizing Iowa Gov. Kim Reynolds (R). Although the text appears to be based on a post by Trump on Truth Social, the voice was AI-generated. The addition of the voice is clearly more effective, and more damaging to Trump, than the text alone, but is it deceptive? The FEC will need to consider such less obvious questions.

In another example, shortly after President Joe Biden announced his bid for reelection, the Republican National Committee released a video featuring a series of AI-generated images. The video imagines “what if crime worsens,” “what if international tensions escalate” and “what if financial systems crumble,” each with an associated fake image showing an apocalyptic future. The FEC will need to consider the use of generative AI for everything from this type of ominous speculation to more obvious impersonation.

Although we have already seen campaigns use generative AI to attack their competitors, this same technology can also be used to bolster one’s own candidate. A photo of a candidate at a rally can, for example, be manipulated to make the crowd look larger, or a video of a candidate can be modified to make them sound and look more commanding. The FEC will need to consider whether and how to regulate a campaign’s use of manipulated media in the service of its own candidate.

Manipulating the photographic record is only the first step in spreading lies. Without traditional and social media outlets, these lies are less likely to see widespread distribution. We should also, therefore, consider how traditional and social media outlets allow, encourage and amplify lies and conspiracy theories and, where appropriate, hold them accountable.

The history of photographic manipulation is long and varied, and it will continue to evolve beyond today’s generative AI. The FEC is right to consider whether and how to regulate deceptive political ads.

The issue, however, is not fundamentally one of AI or technology, but of the standards to which we want to hold our current and future leaders. To this end, it seems eminently reasonable to insist that our politicians be truthful.

Hany Farid is a professor in the Department of Electrical Engineering and Computer Sciences and the School of Information at UC Berkeley.
