California’s deepfake ban can’t fool the deep protections of the First Amendment
It’s a tale at least as old as American democracy. In the midst of a heated election season, a biting ad, satirical pamphlet, parody song or viral video alarms the powers that be so much that they attempt to censor it.
The latest chapter in this long-running saga involves California Gov. Gavin Newsom’s crackdown on election deepfakes. X owner Elon Musk reposted an obvious parody campaign video on his platform, in which a deepfake of Vice President Kamala Harris mocks herself as a “deep state puppet” and “the ultimate diversity hire.”
“Manipulating a voice in an ‘ad’ like this one should be illegal,” Newsom declared, promising to ban the video. And that’s exactly what he did by signing into law Assembly Bill 2839 on Sept. 17.
The law bans any altered content — from four months before to two months after an election — if it is “materially deceptive” and falsely portrays a candidate saying or doing something that could harm his or her “reputation or electoral prospects.” The law also bans altered content that falsely depicts election officials, candidates, voting machines or ballots in a way likely to “undermine confidence” in the election. Government officials or members of the public can sue to get such content removed and win damages.
This type of censorship has dubious historical parallels. In 1798, President John Adams signed the Alien and Sedition Acts into law, prohibiting “false, scandalous, or malicious writing” that brought the president into “contempt or disrepute.” Using that law, his administration went after Congressman Matthew Lyon of Vermont for publishing a letter criticizing Adams as having wild ambitions to crown himself king of a new American aristocracy, adding that he probably belonged in “a madhouse.”
Lyon’s arrest sparked a severe backlash, and free speech won the day. The wildly unpopular acts were later repealed or allowed to expire, and their illiberal legacy remains a stain on Adams’s presidency to this day.
Newsom’s ban on parody is likely to suffer the same fate. After all, in both cases, a politician used the power of his office to go beyond targeting actually fraudulent or defamatory speech.
The creator of the Harris deepfake video, who posts under the username “Mr. Reagan,” sued to block the law, arguing that it violates his First Amendment rights. And he’s right. On Oct. 2, just two weeks after its enactment, a federal judge ruled that the law was indeed likely to violate the First Amendment and halted its enforcement.
The court explained that the First Amendment protects political speech even when it is false or harmful. Indeed, “civil penalties for criticisms of the government…have no place in our system of governance.” Because the law sought to punish speech based on a government judgment of its truth or falsity, the court blocked enforcement pending further review. After all, in America, it is the marketplace of ideas rather than the government that is the arbiter of truth.
We do not trust government officials, however well-meaning, to decide what is true or false in politics. And our First Amendment rights certainly don’t change depending on how close we are to Election Day. If anything, they get stronger in the “crucial phase” leading up to an election.
California argued that content subject to the law is defamatory and therefore not protected by the First Amendment. But the court rejected this argument, explaining that the law “acts as a hammer instead of a scalpel, serving as a blunt tool that hinders humorous expression and . . . the free and unfettered exchange of ideas.” Likewise, the category of speech that could “undermine confidence” in the election is so broad as to potentially include just about anything government officials dislike.
Existing law is perfectly adequate to address deepfakes, which are just another form of manipulated imagery, something that has been around since the dawn of photography. Fraudulent and defamatory uses of such imagery are real problems, but the First Amendment already permits restrictions on those acts, so there is no need for overbroad regulations that seek to silence speech that is merely satirical or hyperbolic.
Ultimately, California’s law failed First Amendment scrutiny because “counter speech” rather than censorship is the proper response to deepfakes, “no matter how offensive or inappropriate someone may find them.” That does not allow the state to “bulldoze over the longstanding tradition of critique, parody, and satire protected by the First Amendment.”
The court also found that the law’s exemption for parody or satire, which requires prominently displaying a disclaimer, was likely unconstitutional. Such a disclaimer would make content like Mr. Reagan’s video unwatchable and “drown out” the message.
Indeed, the best satire is effective because it is believable up to a point — just ask readers of the Babylon Bee or the Onion. Or take Jonathan Swift’s iconic essay “A Modest Proposal,” which succeeds precisely because it mimics the voice of an entitled aristocrat arguing for eating children as a way to reduce poverty. A disclaimer would defeat the point.
California’s would-be censors have learned the hard way that suppressing political speech during an election is unacceptable. The Constitution protects speech, even and perhaps particularly when it upsets politicians.
Censors may rage, but free speech will win the day.
Daniel Ortner is an attorney at the Foundation for Individual Rights and Expression.