
Saving face: Law enforcement must recognize pitfalls of facial recognition technology


Facial recognition technology enabled by artificial intelligence provides a powerful tool for international efforts against networked multinational terrorist groups. Domestically, however, valid concerns about weaponizing that tool for law enforcement purposes have developed into widespread pushback and loud calls for regulation. Facial recognition touches a particular emotional nerve because reading the human face underpins our social interactions, and our faces are a primary marker of identity.

As a result, AI technology that can identify people's faces and interpret facial expressions infringes far more deeply on a core social space than tracking online purchases or behavior does. Automation and deployment of this process by governments create deep and concerning issues related to trust and social values, and widespread use of facial recognition technology outside of a national security context may threaten our constitutional rights to racial and gender equality, freedom of speech and assembly, and the press's ability to operate without fear of retribution.

Government officials participating in a July 10 congressional hearing on the Department of Homeland Security's use of facial recognition downplayed concerns about civil liberties and privacy rights, along with worries about the security of stored biometric data. Often overlooked in the debate over the expanding use of emerging biometric technologies such as facial recognition is the role of public trust in social institutions, which is concerningly low.

Anxieties about government surveillance, galvanized by the Edward Snowden revelations in 2013, still have not been sufficiently resolved, and the prospect of granting intelligence and security officials even greater capacity to monitor citizens without strong safeguards has further eroded public confidence.

Part of the public's often emotionally charged skepticism toward this technology is linked to issues of accuracy and bias. Testimony by Dr. Charles H. Romine, director of the Information Technology Laboratory at the National Institute of Standards and Technology, revealed that the best facial recognition technology his lab has evaluated achieved a 99.7 percent accuracy rate in testing of one-to-many matches, but he conceded that there is a wide range of performance across facial recognition technologies.

Further, accuracy rates touted by AI experts come with significant caveats. Even a 0.3 percent error rate, applied to one-to-many searches against databases containing millions of faces, can flag thousands of innocent people. AI algorithms are only as good as the data they're trained on, and many facial recognition systems are still trained and evaluated on data unrepresentative of real-world situations. In addition, researchers and academics are just beginning to understand how to mitigate bias in algorithms, a problem that complicates any standard measure of accuracy.

In some cases, such as monitoring high-threat zones for terrorists, issues with accuracy and bias may be outweighed by facial recognition software's increasingly reliable ability to rule out non-matches, producing "true negatives." But in other cases, such as law enforcement action during nonviolent protests or immigration enforcement, the demographic biases latent in many of these algorithms may impede civil liberties and equality to an unacceptable extent.

Under these circumstances, it's unsurprising that citizens and advocacy groups are seeking aggressive restriction of this technology or supporting outright bans, as is the case in cities such as San Francisco and Somerville, Mass. A new campaign by the web-focused nonprofit Fight for the Future even argues for a national ban on facial recognition software.

Considering the technology's current stage of development and its rapidly increasing use, we urge a pragmatic approach to regulation that avoids potentially counterproductive, all-or-nothing responses to this complex policy problem. It is vital that regulators establish strong and flexible safety standards governing algorithmic bias and official use. In the wake of revelations that government officials were mining Department of Motor Vehicles databases for license photographs, we also recommend imposing stringent limitations on law enforcement's use of administrative databases for nonviolent crime enforcement.

Finally, we believe that the United States should adopt certain aspects of Europe's conception of privacy to establish guardrails that will help build public trust. Specifically, the European rights to data minimization and transparency, if applied to the U.S. government's use of facial recognition technology, would help citizens feel more confident that this technology is being used narrowly and rarely. Strict retention policies for all facial recognition imagery collected and stored under these more robust guidelines would also assist in trust-building efforts without outright preventing the government from using these algorithms.

While much of the discourse on facial recognition technology reduces the debate to a balancing act between security and privacy, it is important to recognize that the commodification of images of our faces risks deepening worrisome trends of social alienation and distrust. In attempting to make our societies more secure, we may unintentionally fuel more of the erosive dynamics that drive violent extremism and terrorism.

We must be exceedingly careful about the bargains we make in the name of security and efficiency, and we should consider setting strong guardrails for our own actions. The United States should leverage this innovative technology, which has massive potential to increase public safety, but balanced regulations must also define when, and to what extent, it can be used.

Alex Newhouse is a data analyst and researcher for the Center on Terrorism, Extremism and Counterterrorism (CTEC) at the Middlebury Institute of International Studies in Monterey, Calif. He specializes in the use of technology such as AI and social media by extremists and terrorists. Follow him on Twitter @AlexBNewhouse.

Kris McGuffie is the deputy director of CTEC, does research on counter-extremism and counterterrorism, and promotes education about the threats posed by extremist actors. Follow her on Twitter @KrisMcguffie and @CTECMIIS.
