Congress’s deepening interest in deepfakes

Congress is closing the year by taking significant yet unheralded early steps to legislate on “deepfakes,” highly realistic but false media created with artificial intelligence (AI), such as a recent satirical video of President Trump appearing to read a Christmas story.

In quick succession in December, Congress sent two bills to the president: the National Defense Authorization Act (NDAA) for Fiscal Year 2021 and the IOGAN Act. The first would require the Department of Homeland Security (DHS) and the Department of Defense (DOD), and the second the National Science Foundation (NSF), to issue reports on and bolster research into deepfakes, which are sometimes known by other names, such as “machine-manipulated media,” “synthetic media,” or “digital content forgeries.” Both bills ask for recommendations that could lay the predicate for federal regulation of such media.

The NDAA, which Congress is soon expected to enact over the president’s Dec. 23 veto, would direct the Secretary of Homeland Security to issue a yearly report for five years on “digital content forgeries.” Unlike the FY 2020 NDAA, which required the Director of National Intelligence to report only on the weaponization of deepfakes by foreign states or their proxies to undermine U.S. national security interests, this bill would broaden the aperture, directing DHS to study not just how foreign governments use deepfakes to harm national security but the full range of dangers posed by such false media. These include the use of deepfakes to “commit fraud,” harm “vulnerable groups,” or violate civil rights laws, an apparent reference to the rising epidemic of nonconsensual deepfake pornography, in which a non-consenting woman’s face is placed on a nude body to create a realistic-looking pornographic image.

The bill also calls for an analysis of technical countermeasures to deepfakes and of methods for detecting digital content forgeries, including “recommendations on how to identify and address suspect content” to warn users. The NDAA includes manipulated “text” in its definition of a “digital content forgery” — an important addition since AI-generated text can be used to manipulate social-media conversations and skew public notice-and-comment periods.

The NDAA also requires DOD to study the cyber-exploitation of members of the Armed Forces and their families. In addition to an assessment of the exploitation of military members’ personal information and their victimization by peddlers of predatory loans, unnecessary medical treatments, and violent extremism, it directs an intelligence assessment of the threat posed by foreign governments and non-state actors creating or using deepfakes that feature military members and their families. It directs the Secretary of Defense to produce “[r]ecommendations for policy changes,” including “recommendations for legislative or administrative action,” to reduce the vulnerability of service members and their families. This provision is notable given recent reports that a scammer used deepfakes to impersonate an admiral and con a California widow into sending him almost $300,000.

In December, Congress also sent the president the Identifying Outputs of Generative Adversarial Networks (IOGAN) Act. (A generative adversarial network, or GAN, is a machine-learning technique commonly used to create deepfakes: two neural networks are trained against each other, with one generating synthetic content and the other learning to flag it as fake, until the forgeries become difficult to detect.) This law, which the president signed on Dec. 23, requires the NSF to support research on “manipulated or synthesized content and information authenticity.” The law also directs the National Institute of Standards and Technology (NIST) to support research for the development of standards related to deepfakes. Finally, the law requires the directors of both agencies to report to Congress on opportunities to research deepfake detection with the private sector, “including digital media companies.” This report would include “any policy recommendations” to “facilitate and improve communication and coordination” among the private sector, NSF, and federal agencies “through the implementation of innovative approaches to detect” such content.
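To make the adversarial idea concrete, here is a minimal sketch of a GAN in Python using PyTorch. The networks, data distribution, and hyperparameters below are illustrative assumptions, not anything specified by the IOGAN Act: the generator learns to mimic a simple “real” distribution of numbers while the discriminator learns to tell real samples from fakes.

import torch
import torch.nn as nn

# "Real" data the generator must learn to imitate: samples from N(4.0, 1.5).
# This stands in for real photos or video frames in an actual deepfake system.
def real_samples(n):
    return torch.randn(n, 1) * 1.5 + 4.0

# Generator: maps random noise to a fake sample.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

# Discriminator: outputs the probability that a sample is real.
discriminator = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # Train the discriminator: label real samples 1 and fakes 0.
    real = real_samples(64)
    fake = generator(torch.randn(64, 8)).detach()  # detach: don't update G here
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator: try to make the discriminator call its fakes real.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster near the real mean of 4.0.
print(generator(torch.randn(1000, 8)).mean().item())

Swapping the one-dimensional numbers for images yields, at vastly greater scale, the face-synthesis systems behind deepfake videos; the same adversarial pressure that improves the forgeries is what makes detection research, the focus of the IOGAN Act, a moving target.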

Congress’s actions are part of a broad trend across the country to regulate deepfakes, which many consider a menace to politics, privacy, business, and society’s shared conception of truth. Five states have already outlawed some deepfakes, and about ten others are considering doing the same. Just last month, New York adopted a path-breaking law that establishes a postmortem property right for actors’ “digital replicas” and bars certain nonconsensual deepfakes.

We can expect the 117th Congress to continue these efforts. Congress is already considering other bills that, if passed, would drastically expand federal election law to prohibit certain deepfakes related to elections and impose criminal penalties for others. Growing concern about convincing media fakery, the mainstreaming of conspiracy theories, and the seeming omnipresence of disinformation narratives pushed by foreign adversaries and domestic dissemblers may soon make manipulated media the latest field subject to federal regulation.

Matthew F. Ferraro, a former U.S. intelligence officer, is counsel at WilmerHale and a visiting fellow at the National Security Institute of George Mason University.
