
Why new digital identity guidelines are needed now


The Government Accountability Office (GAO) recently issued a new report calling for federal agencies to strengthen their online identity verification processes. The report was written pursuant to a Congressional directive following the Equifax breach two years ago, and its purpose was to describe federal practices for remote identity proofing and the effectiveness and risks associated with those practices. The study was conducted from November 2017 to May 2019, and its findings have important implications not just for the U.S. government, but for the private sector as well.

While the Equifax breach was the impetus for the report, data breaches before and since have continued unabated. In fact, according to the Identity Theft Resource Center, while the total number of data breaches last year was down 23 percent, the total number of consumer PII (personally identifiable information) records exposed was up by a whopping 126 percent.

What this means, and what the GAO report correctly points out, is that the legacy method of identity proofing, known as knowledge-based authentication (KBA), is completely outdated and ineffective. KBA relies on asking applicants seeking benefits or wanting to open an online account questions derived from information found in their credit files. Given the scale of the data breaches, it can no longer be assumed that only the legitimate person would know the answers. One of the report's most troubling findings, though, is that even though most government agencies are aware that KBA is not reliable, they continue to rely on it, mainly because guidelines on the use of alternatives are not well-defined.

Beyond calling for NIST (the National Institute of Standards and Technology) and OMB (the Office of Management and Budget) to issue new guidelines, the GAO report discusses some available alternatives that can provide stronger security, but acknowledges that they all have limitations. For example, verifying location and device and sending SMS codes are mentioned as alternative options, but one well-established fraudster technique is to manipulate or “spoof” phone numbers and redirect phone calls and SMS confirmation codes. Fraudsters are also able to take over existing accounts and change the associated phone numbers and email addresses. So analyzing location or device data alone will not suffice, never mind that people change locations and devices often enough to leave many blind spots in the determination of one’s identity. Other alternatives, such as sending PIN codes by snail mail and verifying documents remotely, also have their limitations.

Evidence from the private sector shows that emerging capabilities that analyze online user behavior can help fill the gap. User behavior is, in fact, an untapped goldmine that can reveal the use of stolen and synthetic identities in the online application process. Using artificial intelligence, the technology analyzes cognitive attributes associated with data familiarity, application fluency and computer proficiency. Fraudsters tend to move through an application quickly, suggesting they have gone through the process many times before, while making mistakes that suggest the information they are entering does not belong to them; legitimate users tend to do the opposite. Beyond improving fraud detection rates, the technology has also been shown to reduce the number of cases sent to manual review, which matters for operational efficiency and customer satisfaction.
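To make the general idea concrete (and only the general idea), the sketch below scores an application session from a few hypothetical behavioral signals such as how fast the form was completed, whether personal details were pasted rather than typed, and how many corrections were made in fields a person should know by heart. The feature names, thresholds and weights here are invented for illustration; they are not any vendor's actual model, which in practice would be a learned, far richer system.

```python
# Toy illustration of behavioral risk scoring for an online application.
# All signal names, thresholds and weights are hypothetical, chosen only
# to show the kinds of cues described above, not a production model.

from dataclasses import dataclass


@dataclass
class SessionSignals:
    seconds_to_complete: float           # total time spent on the form
    pasted_identity_fields: int          # SSN, DOB, etc. pasted rather than typed
    corrections_in_identity_fields: int  # edits/backspaces in "known" data
    prior_sessions_from_device: int      # how familiar this device is


def risk_score(s: SessionSignals) -> float:
    """Return a score in [0, 1]; higher suggests the applicant may not
    actually own the identity data being entered."""
    score = 0.0
    if s.seconds_to_complete < 60:            # unusually fluent with the form
        score += 0.35
    if s.pasted_identity_fields > 0:          # low familiarity with "own" data
        score += 0.25
    if s.corrections_in_identity_fields > 2:  # repeated stumbles on basic facts
        score += 0.25
    if s.prior_sessions_from_device == 0:     # brand-new device, no history
        score += 0.15
    return min(score, 1.0)


if __name__ == "__main__":
    suspicious = SessionSignals(45, 3, 4, 0)
    typical = SessionSignals(420, 0, 1, 12)
    print(f"suspicious session risk: {risk_score(suspicious):.2f}")
    print(f"typical session risk:    {risk_score(typical):.2f}")
```

In a real deployment these hand-set rules would be replaced by a model trained on large volumes of labeled sessions, but the underlying intuition is the one described above: familiarity and fluency patterns separate people entering their own information from people entering someone else's.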

Behavior, of course, does not stand alone. It is part of a redefined digital identity that also includes location, device, online profiles and historical patterns of online activity, bringing together multiple perspectives on who you are, what you know and what you have, the cornerstones of strong customer authentication and identity verification. Behavior becomes a key part of this updated digital identity framework, which incorporates all of these elements into a risk-based, deep-learning model that evolves and is refined over time.

As the Identity Theft Resource Center's 2018 data breach report aptly states, “The time has come for all of us — advocates, decision makers, and industry — to develop and use technology to our advantage and create systemic change. Thieves upgrade, update, communicate and leverage technology to perpetrate their schemes — why aren’t we?”

Frances Zelazny is Chief Marketing & Strategy Officer of BioCatch, a cybersecurity company that delivers behavioral biometrics to protect users and data. She provided testimony to the New York State Assembly’s banking committee in 2017 on cybersecurity threats facing the U.S. financial industry.