iPhone X’s facial recognition is cool, but it’s not the future of technology
Earlier this month, Apple unveiled its new iPhone X and the world watched in awe as senior vice president Craig Federighi accessed the device using facial recognition by simply looking at it. Rest assured, this is a big deal, but what Apple missed is that this kind of facial recognition is already possible without all of the high-tech sensors in the iPhone X’s “TrueDepth” camera system. In fact, it can be done with a very low-cost webcam and some advanced machine-learning algorithms.
Apple and its engineers should be applauded for their amazing work over the past couple of years, particularly in presenting biometric authentication as a positive, pleasant experience. But the path to widespread adoption of facial recognition technology will not involve replacing existing cameras with state-of-the-art 3D sensors or dot projectors. Updating our nation’s surveillance cameras with 3D sensors would require billions of dollars and years to implement. Instead, the path forward will involve advancing machine learning and artificial intelligence (AI) algorithms that can be applied to images produced by cameras that are already deployed.
Research labs across the nation are using low-cost, existing surveillance cameras to achieve facial recognition. In my lab at Carnegie Mellon University, we are using low-cost consumer cameras — representative of what’s deployed in real-world surveillance systems — to capture video, and applying advanced machine learning to that video to match subjects to enrollment images at any pose, in real time and within milliseconds. We do all of this using standard consumer-grade microprocessors.
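To make the idea concrete, the kind of webcam-based matching described above can be approximated with off-the-shelf open-source tools. The sketch below is not the CyLab system; it is a minimal illustration using the widely available face_recognition (dlib-based) and OpenCV libraries, with a hypothetical enrollment image file name, showing how a standard consumer webcam feed can be compared against enrolled face embeddings in real time on an ordinary CPU.

```python
# Minimal sketch: match webcam faces against enrolled images in real time.
# Uses the open-source face_recognition (dlib) and OpenCV libraries as
# stand-ins for a research-grade system; "alice_enrollment.jpg" is hypothetical.
import cv2
import face_recognition

# Enroll a known subject from a still image (hypothetical file).
enrolled = {
    "alice": face_recognition.face_encodings(
        face_recognition.load_image_file("alice_enrollment.jpg"))[0],
}
names = list(enrolled.keys())
encodings = list(enrolled.values())

capture = cv2.VideoCapture(0)  # low-cost consumer webcam
while True:
    ok, frame = capture.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    # Detect faces, compute an embedding for each, and compare to the gallery.
    locations = face_recognition.face_locations(rgb)
    for location, encoding in zip(locations,
                                  face_recognition.face_encodings(rgb, locations)):
        distances = face_recognition.face_distance(encodings, encoding)
        best = distances.argmin()
        label = names[best] if distances[best] < 0.6 else "unknown"

        top, right, bottom, left = location
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 2)
        cv2.putText(frame, label, (left, top - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

    cv2.imshow("webcam face matching", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()
```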
But if we’re going to make widespread deployment and adoption of facial recognition as quick and smooth as possible, the biometrics community needs to focus on three objectives. First, break down the process. The current trend in AI-based facial recognition is to train “black box” deep neural networks to “learn” patterns in facial data, but we cannot explain why those networks fail when they do (and they often do). We need to focus on breaking the facial recognition process down into discrete, understandable components. Success will require full transparency of the process, not blind faith that a single “black box” deep neural network will work correctly.
Second, work with existing technology. We need to continue to focus on building 3D models from 2D images, like those produced by the current fleet of cameras in the world. These 3D models need to be quick, lightweight and able to run on the processing power of any consumer-grade smart device. Third, remove the stigma. We need to keep working to break down the stigma attached to facial recognition. A world that uses facial recognition does not look like Hollywood’s Minority Report. It looks like a smarter, more pleasant experience interacting with complex computer security systems to help make a safer world for our friends, our families and our children. Apple has made great progress on this front. The momentum needs to keep up.
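One way to picture the first two objectives is as a pipeline whose stages are explicit and auditable. The sketch below is purely illustrative: the stage bodies (detect_faces, normalize_pose, embed, match) are hypothetical placeholders standing in for real detectors, pose-normalization models and embeddings, but every intermediate result is recorded so a failure can be traced to a specific component rather than to a monolithic network.

```python
# Structural sketch: recognition decomposed into discrete, inspectable stages
# rather than one end-to-end "black box". Stage bodies are toy placeholders.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple
import numpy as np


@dataclass
class StageTrace:
    """Intermediate outputs kept for transparency and failure analysis."""
    records: Dict[str, object] = field(default_factory=dict)

    def log(self, stage: str, output: object) -> None:
        self.records[stage] = output


def detect_faces(frame: np.ndarray, trace: StageTrace) -> List[Tuple[int, int, int, int]]:
    # Placeholder detector: pretend a single face fills the frame.
    boxes = [(0, 0, frame.shape[1], frame.shape[0])]
    trace.log("detect", boxes)
    return boxes


def normalize_pose(frame: np.ndarray, box: Tuple[int, int, int, int],
                   trace: StageTrace) -> np.ndarray:
    # Placeholder for the lightweight 2D-to-3D pose normalization step:
    # here it simply crops the detected region.
    x, y, w, h = box
    crop = frame[y:y + h, x:x + w]
    trace.log("normalize_pose", crop.shape)
    return crop


def embed(face: np.ndarray, trace: StageTrace) -> np.ndarray:
    # Placeholder embedding: a fixed-length feature vector (toy mean pooling).
    vector = face.reshape(-1, face.shape[-1]).mean(axis=0)
    trace.log("embed", vector)
    return vector


def match(vector: np.ndarray, gallery: Dict[str, np.ndarray], trace: StageTrace) -> str:
    # Nearest-neighbour match against enrolled subjects.
    name, dist = min(((n, float(np.linalg.norm(vector - v))) for n, v in gallery.items()),
                     key=lambda item: item[1])
    trace.log("match", {"best": name, "distance": dist})
    return name


def recognize(frame: np.ndarray, gallery: Dict[str, np.ndarray]) -> StageTrace:
    trace = StageTrace()
    for box in detect_faces(frame, trace):
        face = normalize_pose(frame, box, trace)
        match(embed(face, trace), gallery, trace)
    return trace


if __name__ == "__main__":
    frame = np.random.rand(480, 640, 3)                      # stand-in camera frame
    gallery = {"alice": np.random.rand(3), "bob": np.random.rand(3)}
    print(recognize(frame, gallery).records)                 # every stage is visible
```

Because each stage logs its own output, a misidentification can be attributed to detection, pose normalization, embedding or matching individually, which is the transparency that a single end-to-end network cannot offer.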
Getting facial recognition right has never been more important. As facial recognition makes its way to our phones, it’s beginning to take off in other circles across our nation. Industry is helping implement facial recognition technologies to speed up security lines at airports and border crossings in several countries, including the United Kingdom, Australia and Brazil. Making facial recognition technology mainstream has been a long time coming. And while Apple’s new iPhone X is a positive move toward getting people acquainted with the technology, widespread adoption will require an AI approach that works with existing cameras and systems.
Marios Savvides is director of the CyLab Biometrics Center and professor of electrical and computer engineering at Carnegie Mellon University.