To defend democracy, we must protect truth online
In his inaugural address, President Biden called on the country to “reject a culture in which facts themselves are manipulated and even manufactured.” We have every reason to believe this troubling trend of deliberate disinformation will continue, particularly in online content.
Visual deception and disinformation are powerful. The use of even rudimentary image and video manipulation, known as “cheapfakes,” has been proven to increase perceived truth and is a common tool for fueling online visual disinformation. The advent of synthetic media that is manipulated or wholly generated by artificial intelligence (AI), commonly referred to as “deepfakes,” makes the dangers of such distortions more significant.
Deepfake videos are getting better and becoming virtually undetectable by forensic mechanisms. Though the most widely shared deepfakes have been benign, there have been several real-world examples of malicious use. The use of deepfakes to create non-consensual pornography underscores how easily they can be weaponized. Last month the FBI warned that deepfakes are a growing threat to private industry, including through the “emulation of existing employees.” This week, it was alleged that deepfakes were used to emulate Russian opposition members in order to deceive various European Members of Parliament.
Legislators across the country have moved quickly to pass new laws addressing this threat. At least five states have adopted laws banning deepfakes in some contexts. At the federal level, the U.S. Congress passed several deepfake-related laws in quick succession, including the Deepfake Report Act, sponsored by Sen. Rob Portman (R-Ohio).
These are important steps, but we should not focus solely on reactive countermeasures — penalizing propagators of false media or trying to detect forgeries after the fact. We need technologies that can help us prove what is true rather than detect what is fake. One way to do this may be through media provenance, which can help build a trusted information ecosystem.
This approach allows digital content such as videos and photographs to be fingerprinted at the point of capture, so it cannot be altered or modified covertly after the fact. A viewer can authenticate the photo or video and confirm that it has not been changed maliciously. For the person capturing the image, the approach is opt-in, enacted only if the creator wants to assert rights or provide transparent information about the content. If the creator does opt in, the media can be displayed with a provenance label, akin to the lock icon in the corner of most secure browsers. Clicking the label reveals how the media was created, along with critical metadata such as the time, date, and location of capture and whether any modifications were made since the image was taken. This information is cryptographically sealed in the file and cannot be altered.
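The sealing and verification steps described above can be sketched in a few lines of code. This is a minimal illustration only, not the C2PA design: the function names (`seal_capture`, `verify_capture`) and the shared HMAC key are assumptions for the sketch, and real provenance systems use hardware-backed public-key signatures rather than a shared secret.

```python
import hashlib
import hmac
import json

# Hypothetical device signing key. Real implementations use hardware-backed
# public-key signatures; an HMAC secret stands in for that here.
DEVICE_KEY = b"demo-device-key"

def seal_capture(image_bytes: bytes, metadata: dict) -> dict:
    """Bind a content fingerprint and its capture metadata into a sealed record."""
    record = {
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": metadata,  # e.g. time, date, location of capture
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_capture(image_bytes: bytes, record: dict) -> bool:
    """Recompute the hash and signature; any covert edit breaks the seal."""
    claimed = {"content_hash": record["content_hash"], "metadata": record["metadata"]}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return (hashlib.sha256(image_bytes).hexdigest() == record["content_hash"]
            and hmac.compare_digest(expected, record["signature"]))

photo = b"\x89PNG...raw image bytes..."
sealed = seal_capture(photo, {"time": "2021-04-28T12:00:00Z", "location": "38.9,-77.0"})
assert verify_capture(photo, sealed)                 # untouched image verifies
assert not verify_capture(photo + b"edit", sealed)   # any modification fails
```

The key property is that both the content hash and the metadata are covered by the seal, so neither the pixels nor the capture details can be swapped out without detection.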
Image provenance technology has recently been engineered directly into smartphones, which democratizes the ability to capture trustworthy media at scale. Perhaps most importantly, a group of influential technology and media companies announced the formation of the Coalition for Content Provenance and Authenticity (C2PA), a Joint Development Foundation project established to address the prevalence of disinformation, manipulated media, and online content fraud by developing technical standards for certifying the provenance of digital content. Its founding members include Adobe, Arm, BBC, Intel, Microsoft, and Truepic.
The standardization of media provenance technology can be a game-changing development, much like when Secure Sockets Layer (SSL) technology became standardized in the 1990s and opened the possibilities of trusted digital commerce online.
Despite its promise, provenance has its challenges. A broad coalition of platforms, browsers, and device manufacturers will need to engage and eventually adopt the standard as an option for users.
Congress can play an important role too. There has already been bipartisan interest in the potential of media provenance. An assessment of provenance technology will be part of the Deepfake Report Act’s scope of work on available countermeasures to synthetic media, and the provenance of digital media was also referred to in last year’s IOGAN Act. More recently, the National Security Commission on Artificial Intelligence included provenance technology in several of its recommendations to Congress and the White House on combating malicious AI.
This is a promising start, but much more must be done to advance the approach and technology. Online platforms, legislators, academics, and others must further engage through emerging standards bodies and consortiums like C2PA to help provenance reach widespread scale and adoption.
Rejecting a culture in which there is no truth and facts are manipulated and manufactured is a larger social issue, but technology solutions like media provenance can go a long way toward renewing a shared sense of reality online and off.
Mounir Ibrahim is the Vice President of Strategic Initiatives for Truepic, a leader in provenance media capture.
Ashish Jaiman is the Director of Technology and Operations in the Defending Democracy Program at Microsoft focusing on disinformation and deepfakes defense.
Copyright 2024 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.