
Blanket ban on facial-recognition tool makes us all less safe


According to San Francisco, you can’t trust the police. Apparently, Microsoft agrees. Powerful technologies like artificial intelligence and facial recognition shouldn’t be in the hands of U.S. law enforcement because the potential for abuse is too high, according to Microsoft President Brad Smith. Smith has raised concerns about blanket surveillance, autonomous weapons and other irreversible courses of action.

Maybe Smith should talk to the other Microsoft that is actively collaborating with China’s National University of Defense Technology (NUDT). China uses facial recognition to monitor and score citizen behavior. According to an April 2019 news report, “Microsoft has been collaborating with researchers linked to a Chinese military-backed university on artificial intelligence, elevating concerns that U.S. firms are contributing to China’s high-tech surveillance and censorship apparatus.”

It appears Microsoft also forgot the memo from the Chinese government requiring that all companies operating in China share their research and technology with the People’s Liberation Army (PLA) under a concept called “civil-military fusion.” Bob Work, the former Deputy Secretary of Defense, has warned of this repeatedly. If Microsoft’s technology is good enough for our biggest strategic adversary, it should be good enough for the actual good guys.

While there is no facial recognition currently running in San Francisco, city officials want to ban it anyway. The proposal states that “the propensity for facial recognition technology to endanger civil rights and civil liberties substantially outweighs its purported benefits.” The proposal doesn’t end there. It also includes “license plate readers, toll readers, closed-circuit cameras, body-worn cameras and biometrics technology and software for forecasting criminal activity.”

This isn’t just a pause in deploying cutting-edge technology; it is an outright ban. While there are always valid concerns about any technology, many of the concerns around the use of artificial intelligence draw the wrong conclusion. Had we applied this “logic” to other controversial technologies, we might never have been able to use DNA to identify the guilty and clear the wrongfully convicted. The Innocence Project, co-founded by O.J. Simpson attorney Barry Scheck, uses DNA to “exonerate” the innocent.

Another fundamental misconception is that the use of facial recognition alone triggers an arrest. Nothing could be further from the truth. The same rules apply whether it’s stopping a suspected stolen vehicle, arresting a wanted fugitive or detaining someone suspected of criminal activity. During my 18 years in law enforcement as a police officer, state trooper and detective, I made hundreds of arrests based on probable cause.

But here’s the difference. Self-appointed legal experts will claim that police had no “probable cause” to pull over a vehicle or detain someone suspected of a crime. Police don’t need probable cause for that; they need only reasonable suspicion. Here’s why this distinction matters and how it applies to AI and facial recognition.

A state trooper is behind a vehicle and checks the license plate to see if the vehicle has been entered into the National Crime Information Center (NCIC), which is maintained by the FBI’s Criminal Justice Information Services (CJIS) division. NCIC is a clearinghouse for all types of criminal justice information, including wanted persons, stolen vehicles and guns, missing persons and terrorist identification.

NCIC records show the vehicle has been reported stolen — a “hit.” The trooper now has reasonable suspicion to stop the car and detain the occupants. Before an arrest can be made, the agency that entered the stolen vehicle record must be contacted and must confirm that the record is still active.

If an affirmative response is received, the trooper can make an arrest based on probable cause. If not, it’s time to offer sincere apologies, make sure no one in the car is wanted, and let the occupants go on their way. The same process applies to potentially wanted persons. I’ve been in both situations numerous times. The NCIC guidelines and training make it explicitly clear: an NCIC “hit” alone is not probable cause for an arrest. A second, confirming step is always required.
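For readers who think in code, here is a minimal sketch of that two-step rule. The function names and records are hypothetical illustrations; NCIC’s actual interface is nothing this simple, so treat this as a picture of the logic, not of the system:

```python
# A minimal sketch of the two-step NCIC rule described above.
# All function names and records are hypothetical illustrations,
# not NCIC's actual systems or interfaces.

def query_ncic(plate: str, records: dict) -> dict | None:
    """Step 1: check for a hit. A hit alone supports only
    reasonable suspicion, enough to stop and detain."""
    return records.get(plate)

def confirm_with_originating_agency(hit: dict) -> bool:
    """Step 2: contact the agency that entered the record and
    confirm it is still active. Only a confirmed hit supports
    probable cause for an arrest."""
    # Stand-in for the real phone/teletype confirmation step.
    return hit.get("still_active", False)

def traffic_stop(plate: str, records: dict) -> str:
    hit = query_ncic(plate, records)
    if hit is None:
        return "no hit: no stop"
    if confirm_with_originating_agency(hit):
        return "confirmed hit: arrest on probable cause"
    return "unconfirmed hit: apologize and release"
```

The ordering is the whole point: the stop is lawful on the hit alone, but the arrest must wait for the confirmation.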

Let’s apply this to facial recognition. Oregon’s Washington County Sheriff’s Office used Amazon’s “Rekognition” program in 2017 to identify a possible suspect in a theft from a hardware store. But the possible match in and of itself isn’t enough to make an arrest. According to a recent article, “Sheriff’s officials say face scans don’t always mark the end of the investigation: Deputies must still establish probable cause or find evidence before charging a suspect with a crime.”
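To see why a match is only a lead, consider what such a search actually returns. Here is a hedged sketch using the public Rekognition API through Amazon’s boto3 library; the collection name, file name and threshold are my own illustrative choices, not anything the article attributes to the sheriff’s office:

```python
import boto3

rekognition = boto3.client("rekognition")

# Hypothetical still image from a store camera.
with open("surveillance_still.jpg", "rb") as f:
    image_bytes = f.read()

response = rekognition.search_faces_by_image(
    CollectionId="booking-photos",   # hypothetical photo gallery
    Image={"Bytes": image_bytes},
    FaceMatchThreshold=90,           # minimum similarity, in percent
    MaxFaces=5,
)

# The result is a ranked list of candidates with similarity scores:
# investigative leads to corroborate, not probable cause by themselves.
for match in response["FaceMatches"]:
    print(match["Face"]["FaceId"], match["Similarity"])
```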

There have been misidentifications of persons using facial recognition, especially among minorities. That bias is not a fault of the technology itself; it is a fault of the people programming the technology. There should be a vigorous debate about how to use any new technology that integrates machine learning and artificial intelligence. But let’s step back and see what the real impact could be if we ban the use of modern technology wholesale, forever.

Ask a parent how fast they would like the police to locate their kidnapped child. Would they prefer the police bring potential witnesses into the station and have them review mug photos? Who’s to say the kidnapper is even in the local mug photos? What about a missing child? Would it be preferable to have the police review hundreds of hours of video manually over several days to locate the one frame with a key piece of evidence? In San Francisco, you wouldn’t even have the cameras.

At the Boston Marathon on April 15, 2013, Dzhokhar and Tamerlan Tsarnaev detonated two pressure cooker bombs that killed three people at the scene and injured hundreds of others. It took three days for the FBI and law enforcement to manually comb through thousands of photos and hundreds of hours of video looking for vital clues.

After the Tsarnaev brothers were identified, the ensuing manhunt resulted in the deaths of MIT police officer Sean Collier and, later, Tamerlan Tsarnaev himself. Would it have been worth using facial recognition to locate the terrorists more quickly if it could have spared the life of a police officer? Would it have been worth it to keep an entire city and region free from fear, terror and future attacks?

It took five days in July 2005 to identify the terrorists in the London bombings that killed 52 and injured nearly 800 people. I knew two of the detectives assigned to the anti-terrorism branch SO-15 at New Scotland Yard. Imagine all the camera footage that had to be manually reviewed — over 6,000 hours. Ask Londoners how long they wanted to wait to find out if there were any more terrorist attacks being planned.

Facial recognition “matching” is being used effectively at airports to compare a passport photo against the actual traveler. Several arrests have been made of persons no one would want in their country. The significant difference in this program is that the traveler is compared against the photo on his or her own passport, not against a vast database of photos. Should we speed up or slow down the process of securely vetting airline passengers entering our country? No one says the best part of a long trip is waiting in the immigration and customs line.
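The technical distinction is between one-to-one verification and one-to-many identification. Below is a rough sketch of the two modes; the threshold is illustrative, and the cosine function stands in for a real face-recognition model’s comparison step:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Compare two face embeddings; stands in for the output of a
    real face-recognition model. Higher means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

MATCH_THRESHOLD = 0.9  # illustrative value, not an operational standard

def verify_one_to_one(passport_emb, live_emb) -> bool:
    """Airport-style check: the traveler against his or her own
    passport photo. No outside database of faces is consulted."""
    return cosine_similarity(passport_emb, live_emb) >= MATCH_THRESHOLD

def identify_one_to_many(live_emb, gallery: dict):
    """Database search: one capture against many stored photos.
    Returns the best candidate, a lead rather than an identification."""
    return max(gallery.items(),
               key=lambda item: cosine_similarity(item[1], live_emb))
```

The airport program described above is the first mode; the ban debate is mostly about the second.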

From terrorism to missing persons to crimes against persons and property, the ability of facial recognition and AI to greatly improve the safety and security of our communities and country shouldn’t be so easily thrown onto the trash heap of anti-police bias.

Morgan Wright is an expert on cybersecurity strategy, cyberterrorism, identity theft and privacy. He previously worked as a senior advisor in the U.S. State Department Antiterrorism Assistance Program and as senior law enforcement advisor for the 2012 Republican National Convention. Follow him on Twitter @morganwright_us.
