The views expressed by contributors are their own and not the view of The Hill

From disinformation to deepfakes: The evolving fight to protect democracy

A 2018 photo shows deepfake facial mapping technology used on an image of former President Barack Obama. (The Associated Press)

In August 2016, the CIA director called his director of congressional affairs with an unusual instruction: He needed to speak in a secure setting with each member of the “Gang of Eight” — the bipartisan leadership of the House and Senate and the congressional intelligence committees. Each could bring a top adviser. 

With Congress in recess and many members traveling without access to secure communications, most of the meetings had to wait until early September. But one took place in August.

The two of us remember the late-summer meeting well because, as the CIA’s director of congressional affairs and the minority staff director at the House Permanent Select Committee on Intelligence, we were the only other people in the room. 

Seated at the conference table in his wood-paneled office, the CIA director shared what has since been declassified: Sensitive intelligence indicated that Russian President Vladimir Putin had ordered his intelligence services to conduct a multifaceted influence campaign to interfere with the 2016 U.S. presidential election. The campaign included the selective leaking of stolen data and the amplification of certain stories, operating through cut-outs, trolls and Russian state media.

Eight years later, with that memory looming large, we can only imagine the chaos and confusion that would have enveloped the election had the Russian intelligence services had access to the generative artificial intelligence tools that now sit at the fingertips of the average user.

Today, however, voters are beginning to encounter AI-generated false information — and the threat is not confined to any particular nation. As generative AI tools become widely available, the greatest threats to democracy are themselves increasingly democratized, no longer solely the province of sophisticated nation-state actors.

The day before this year’s New Hampshire primary, some residents received a robocall with what sounded like President Biden’s voice telling them not to bother showing up at the polls. Forensics experts quickly traced the call back to an AI technology developed by a New York-based start-up, which then suspended the responsible account. 

Last week, the Federal Communications Commission took action, voting to ban robocalls that use AI-generated deepfake voices and to give state prosecutors more tools to pursue those responsible. But the risks of misleading AI-generated content aren’t limited to robocalls, nor are they only an American problem.

Days after the New Hampshire primary, with her Eras Tour heading to Asia and her boyfriend’s team headed to the Super Bowl, music icon Taylor Swift found herself the victim of another AI-generated deepfake campaign, this one spreading explicit images through social media. Responsible platforms removed the images, but not before they had been widely circulated and condemned even from the White House podium. 

Shortly after, two new deepfake videos of Swift emerged: One altering her speech at the Grammy Awards to include a partisan political message, the other purporting to show her on the red carpet with a banner endorsing a presidential candidate. While the videos were quite clearly fake and widely flagged as “manipulated media,” that warning didn’t always persist through sharing and reposting. 

AI-generated deepfakes threaten not just the world’s oldest continuous democracy but democracies across the globe. Ahead of the recent national election in Slovakia, viral audio recordings appeared to reveal a pro-NATO candidate bragging about how he planned to rig the election — and, perhaps more damning, to raise the price of beer. Those recordings, too, were later determined to be the product of generative AI.

What once seemed like a clichéd plot point in a Hollywood action movie has become a daily reality, with implications that we are only beginning to grasp. And the more powerful generative AI systems become, the harder it will be to tell what’s real. 

The rise of easily accessible generative AI tools coincides with 2024 elections not just in the United States but in India and the United Kingdom as well. Almost half the world’s population goes to the polls this year alone. Voters could soon find themselves struggling to tell whether the videos, photos and audio recordings they encounter are real or part of an AI-fueled propaganda campaign to sway their votes — or undermine their confidence in democracy altogether.

Below the level of nation-states, companies of all sizes may soon find themselves struggling to guard their businesses against much more sophisticated phishing emails and AI-fueled extortion plots. They will also face new threats to the security of their networks and their intellectual property, including from their own adoption of AI solutions.

Individual citizens are also at risk from AI-generated extortion, including so-called revenge pornography, and from efforts to trick them into revealing passwords, bank account numbers and other sensitive personal and financial information. 

Generative AI promises to revolutionize huge swaths of the global economy, yielding productivity gains and improvements in the quality of life for millions. But that promise could be overshadowed, at least temporarily, if we don’t act now to educate the public about the realities of generative AI and to take steps to prevent its worst abuses. 

With immediate congressional action unlikely, there’s still much that companies and individuals can do. Private enterprises of all sizes should focus on four pillars for the responsible use of AI: rules to govern the use of AI systems; continuous monitoring and assessment of those systems; the privacy and security of any data input or generated; and the proper allocation of risks and responsibilities when using third-party solutions.

Companies that develop AI solutions should further explore measures to ensure responsible use, including digitally watermarking AI-generated content and determining whether to restrict prompts that might generate abusive content. Some of the world’s largest tech companies have announced new measures they are taking to alert users to images produced by AI. Companies adopting generative AI solutions developed by third parties can similarly explore ways to authenticate and verify documents, audio and video. 
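
What does such verification look like in practice? As a minimal sketch, assuming a publisher posts cryptographic hashes of its authentic files, the Python snippet below checks whether a media file matches the hash the publisher distributed alongside it. The function names and manifest format here are hypothetical illustrations, not any vendor’s actual API; real provenance standards such as C2PA go further, embedding signed metadata directly in the media file.

import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a media file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_manifest(path: str, manifest: dict) -> bool:
    """Return True if the file's hash matches the publisher's manifest entry.

    `manifest` is a hypothetical mapping of file names to expected hashes,
    standing in for the signed provenance data a real system would verify.
    """
    expected = manifest.get(path)
    return expected is not None and sha256_of_file(path) == expected

# Hypothetical usage: a newsroom checks a received clip against the hash
# the original publisher posted alongside it.
# manifest = {"statement.mp4": "<expected-sha256-hex>"}
# print(matches_manifest("statement.mp4", manifest))

A match only proves the file is byte-for-byte identical to what the publisher released; any edit, even benign recompression, breaks it. That limitation is one reason the industry is moving toward signed provenance metadata that travels with the file and records how it was made and altered.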

As individual citizens, we can all learn to think like intelligence analysts, pausing to consider whether the video, audio or images we encounter on social media and elsewhere are authentic. And we can demand that political leaders from all parties refrain from exploiting anyone’s inappropriate use of deepfake technologies.

If we have to wait for another late-summer, wood-paneled intelligence briefing, it will likely be too late.

Neal Higgins is a partner in Eversheds Sutherland’s global Cybersecurity and Data Privacy practice. He previously served at the White House as the first deputy national cyber director for national cybersecurity, as associate deputy director of the CIA for digital innovation and as the CIA’s director of congressional affairs.

Michael Bahar is co-lead of Eversheds Sutherland’s global Cybersecurity and Data Privacy practice. He previously served as a deputy legal advisor to the National Security Council at the White House, as minority staff director and general counsel for the U.S. House Intelligence Committee, and as an active-duty Navy JAG.

Rachel Reid, partner and U.S. head of artificial intelligence at Eversheds Sutherland, contributed to this article.
