FCC to consider disclosures for AI-generated political ads

The Federal Communications Commission (FCC) introduced a measure Wednesday that would require political advertisements to disclose the use of artificial intelligence (AI) software, in what could be the federal government’s first foray into regulating the use of the technology in politics.

If the proposal is adopted by the full commission, advertisers on broadcast television, radio and cable would be required to disclose their use of AI for voice and image generation, amid concerns that the rapidly advancing technology could be used to mislead voters as the 2024 election draws closer.

“As artificial intelligence tools become more accessible, the commission wants to make sure consumers are fully informed when the technology is used,” FCC Chair Jessica Rosenworcel said in a statement Wednesday. “Today, I’ve shared with my colleagues a proposal that makes clear consumers have a right to know when AI tools are being used in the political ads they see, and I hope they swiftly act on this issue.”

Rosenworcel’s proposal would not ban the use of AI in political advertisements outright. If adopted, the rule would apply to both candidate and issue ads, according to a filing. It would not apply to online ads or those shown on streaming services.

The proposal specifically notes the risk of “deepfakes,” AI-generated images and audio meant to mimic a real person. AI skeptics have warned that these digitally created images and audio could trick voters into believing a candidate did or said something that they really did not.

The FCC already banned the use of deepfake voice technology in political robocalls earlier this year, after a group impersonated President Biden in an attempt to discourage voter turnout in the New Hampshire primary.

Exactly what the AI disclosures would be required to say was not made clear, with those details left to the commission’s rulemaking process. The FCC would also have to write a specific definition of AI-generated content, a task that has already slowed other regulatory attempts.

AI is “supercharging” threats to the election system, technology policy strategist Nicole Schneidman told The Hill in March. “Disinformation, voter suppression — what generative AI is really doing is making it more efficient to be able to execute such threats.”

AI-generated political ads have already entered the 2024 election cycle. Last year, the Republican National Committee released an entirely AI-generated ad meant to depict a dystopian future under a second Biden administration. It used fake but realistic images of boarded-up storefronts, armored military patrols in the streets and waves of immigrants creating panic.

In India’s elections, recent AI-generated videos misrepresenting Bollywood stars as criticizing the prime minister exemplify a trend tech experts say is cropping up in democratic elections around the world.

Sens. Amy Klobuchar (D-Minn.) and Lisa Murkowski (R-Alaska) also introduced a bill earlier this year that would require similar disclosures when AI is used in political advertisements.

The Associated Press contributed.


