
AI gets political: How do we keep fake news out of campaign ads?

Deceptive political advertisements are nothing new. But, as one news organization recently reported, generative artificial intelligence (AI) is launching “a new era where it could become even harder for voters to discern truth from lies.”

A June 5 Wall Street Journal article concurred, asserting that sophisticated, AI-generated videos and images pose “a major threat to political campaigns as 2024 contests get under way.”

For example, Forbes reported in late May that a super PAC supporting Republican presidential candidate and Florida Gov. Ron DeSantis created a video using AI that falsely depicted “a group of fighter jets appear[ing] to fly overhead as DeSantis speaks” to a crowd. Alex Thompson, debunking the fake flyover in Axios, called it “the latest instance of political ads including digitally altered videos to promote or attack candidates, making it difficult for viewers to discern what’s real.”

So, what can be done to address political deepfakes?

There is no magic bullet. However, an opportunity exists in our polarized journalistic environment for news organizations across the political spectrum –– including the perceived right-leaning Wall Street Journal and Fox News Channel, the perceived left-leaning New York Times and MSNBC, and the perhaps more down-the-middle legacy broadcast networks (ABC, CBS and NBC), among others –– to unite on this one key issue.

By each assigning journalists to a dedicated inter-organization team, these outlets could jointly hold candidates of all political stripes accountable in the court of public opinion for deceptive, AI-generated images and videos. Importantly, a side benefit of such a unified, collaborative front is that it might restore a little of the public’s diminishing trust in journalism, perhaps even drawing people out of their news-source silos and building confidence in journalism more broadly.

The typical reaction whenever a new technological problem arises, however, is to adopt a law. Indeed, last month Rep. Yvette Clarke (D-N.Y.) introduced a bill that would amend the Federal Election Campaign Act of 1971 to require “clear and conspicuous” disclosures (called disclaimers) in campaign ads that contain “an image or video footage which was generated in whole or in part with the use of artificial intelligence.”

Known as the REAL Political Advertisements Act (the first part being a strained acronym for “Require the Exposure of AI-Led”), the bill addresses Clarke’s concern that “if AI-generated content can manipulate and deceive people on a large scale, it can have devastating consequences for our national security and election security.”

Disclosures and disclaimers certainly are steps toward addressing deceptive, AI-generated campaign ads, but laws that go further, such as outright bans, would likely run into constitutional problems. That’s because the First Amendment to the U.S. Constitution generally provides a formidable bulwark against the government banning candidates’ false advertisements. As First Amendment attorney Lata Nott bluntly wrote in 2020, “lying in political advertisements is […] perfectly legal,” partly because “political ads are considered political speech, and First Amendment law protects political speech above all other types of speech.”

Furthermore, the U.S. Supreme Court explained in 2012 that it “has never endorsed the categorical rule […] that false statements receive no First Amendment protection.” The court reasoned that false speech must cause some “legally cognizable harm,” such as a libelous statement causing reputational injury, for it to fall outside of First Amendment protection.

While false campaign ads may harm democracy, that’s not the kind of tangible, easy-to-pin-down harm that typically justifies punishing false speech, such as the financial injury suffered by consumers who are deceived into buying products that salespeople know are defective and will break after purchase.

On top of this, a federal statute bars over-the-air broadcast TV stations from willfully and repeatedly denying legally qualified candidates for federal office “reasonable access” to airtime for their campaign ads. Additionally, the Federal Communications Commission explains that it generally does not “review or pre-approve the content of political ads before they are broadcast” or “ensure the accuracy of statements that are made by candidates and issue advertisers.” In short, broadcasters often must carry candidates’ ads, and no government gatekeeper vets those ads for truth before they air.

What can journalists do? One thing is fact-checking, which already exists. For example, the Florida-based Poynter Institute (owner of the Tampa Bay Times) houses PolitiFact, which Poynter describes as “the largest political fact-checking news organization in the United States.” In the AI age, fact-checking visual imagery is just as important as checking the veracity of a candidate’s verbal assertions.

My proposal goes beyond news organizations doing their own siloed fact-checking in three ways.

First, a broad-based coalition of entities (as described earlier) would collectively draft a statement asking all legally qualified candidates in major races to sign a proclamation that their campaigns will not run ads that use AI to generate images or information that might reasonably deceive viewers or readers.

Second, all of the organizations would prominently and repeatedly report, on a continually updated basis, the names of the candidates who signed the proclamation and of those who refused.

Third, a fact-checking team composed of journalists from all the entities would work collaboratively to identify and debunk deceptive visuals and, in turn, each outlet would publish the same jointly authored story publicly exposing their agreed-upon findings. The story would highlight the names of all the participating news organizations, stressing their agreement on the findings.

Such a voluntary, joint-journalistic effort is not a cure-all, even assuming all the entities participated. Ultimately, it is up to voters to hold accountable at the ballot box candidates who engage in deception. But a unified approach by news organizations to this pressing political problem may help concerned voters while simultaneously bolstering journalistic credibility as a whole.

Clay Calvert, J.D., Ph.D., is professor emeritus at the University of Florida (UF), where he held a joint appointment as a professor of law at the Fredric G. Levin College of Law and as a Brechner Eminent Scholar in Mass Communication in the College of Journalism and Communications. Specializing in First Amendment and media law, Calvert is a nonresident senior fellow at the American Enterprise Institute.