As facial recognition manufacturers grapple with wide-ranging restrictions and outright bans in cities and states across the country, the physical security industry is stepping up with advocacy and education, highlighting the steadily improving performance of the technology’s algorithms. Adding further credibility to the industry’s effort to prevent moratoriums on the technology is a new demographics study on facial recognition released late last year by the National Institute of Standards and Technology (NIST).
Results from the report, “Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects” (NISTIR 8280), are intended to inform policymakers and help software developers better understand the performance of their algorithms, according to a press release by the government agency. FRVT evaluates facial recognition algorithms submitted by industry and academic developers on their ability to perform different tasks. While NIST does not test finalized commercial products, the program revealed rapid development in the field as well as stark differences between high- and low-performing algorithms.
The NIST study evaluated a majority of manufacturers in the industry—189 software algorithms from 99 developers—focusing on how well each algorithm performed one of two tasks that are among facial recognition’s most common applications.
The first task, confirming a photo matches a different photo of the same person in a database, is known as one-to-one matching and is commonly used for verification work, such as unlocking a smartphone or checking a passport.
The second, determining whether the person in the photo has a match in a database, is known as one-to-many matching and can be used for identification of a person of interest.
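The difference between the two tasks can be sketched in code. This is a simplified, hypothetical illustration only: it assumes face images have already been converted into fixed-length embedding vectors by some recognition model, and the vectors and similarity threshold below are invented for the example—none of them come from the NIST report.

```python
# Hypothetical sketch of verification (one-to-one) vs. identification
# (one-to-many), assuming faces are already encoded as embedding vectors.
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

THRESHOLD = 0.8  # illustrative decision threshold, not from NIST

def one_to_one(probe, claimed_template):
    """Verification: does the probe photo match one claimed identity?
    Used for tasks like unlocking a phone or checking a passport."""
    return cosine_similarity(probe, claimed_template) >= THRESHOLD

def one_to_many(probe, gallery):
    """Identification: search an entire gallery for the best match.
    Returns the best-scoring identity, or None if nothing clears the bar."""
    best_id, best_score = None, -1.0
    for identity, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= THRESHOLD else None

# Made-up gallery of enrolled identities and a probe image's embedding.
gallery = {
    "alice": [0.9, 0.1, 0.2],
    "bob":   [0.1, 0.9, 0.3],
}
probe = [0.88, 0.15, 0.22]

print(one_to_one(probe, gallery["alice"]))  # verify against one template
print(one_to_many(probe, gallery))          # search the whole gallery
```

The practical difference matters for error rates: a one-to-many search compares the probe against every enrolled template, so each additional gallery entry is another chance for a false match—one reason the NIST program evaluates the two tasks separately.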
What sets the NIST documentation apart from other research is its examination of each algorithm’s performance across demographic factors. For one-to-one matching, only a few previous studies have explored demographic effects; for one-to-many matching, none have.
The NIST report also found that demographic differentials are lessening due to many high-performing algorithms producing fewer errors. The report further emphasized that many facial recognition use-case scenarios require trained humans as an integral part of the process. It summarized: “Whether in an investigation of a potential crime or identifying an individual at a port of entry, trained personnel are critical to the successful deployment of this technology.”
Responding to the NIST study, the Security Industry Association (SIA), Silver Spring, Md., used it as a positive springboard for the technology’s implementation as a critical and indispensable tool for criminal investigations, physical security, fraud prevention and automated identification processes. According to Jake Parker, SIA senior director of government relations, the most significant takeaway from the NIST report is that it confirms current facial recognition technology performs far more effectively across racial and other demographic groups than had been widely reported, and that, overall, modern facial recognition technology is highly accurate.
On March 10, Parker testified before the California State Assembly’s Committee on Privacy and Consumer Protection—the first-ever informational committee hearing on the technology in the state—discussing current and future applications in government and commercial settings.
“NIST has found that the facial recognition software it tests is now more than 20 times more accurate than it was just a few years ago in retrieving a matching photo from a database, and its report found close to perfect performance by high-quality algorithms with miss rates averaging just 0.1%. This reaches the accuracy of automated fingerprint comparison, which is viewed as the gold standard for identification,” Parker said.
According to Parker, the benefits of facial recognition are not potential or hypothetical. They are proven and growing. “In the public sector, for example, it has been used for over a decade to improve the speed and accuracy of criminal investigations. In any process where there are potential high-consequence outcomes, the technology serves as a tool to assist personnel. This may explain U.S. law enforcement’s decade-plus operating history in many thousands of instances, without any confirmed example of the technology resulting in a mistaken arrest or imprisonment.”
In light of the NIST report, SIA encourages its members and facial recognition technology companies to strive to eliminate bias from within facial recognition processing algorithms and encourages such firms to enlist diverse data sets when testing their algorithms.
“The NIST study provides clear data that can help shape advances in facial recognition,” said Don Erickson, CEO of SIA. “SIA encourages collaborative efforts by member companies and involving key stakeholders with the goal of improving facial recognition algorithms and eliminating significant accuracy variation or potential bias.”
The primary concern about biometrics is that the technology could violate privacy and civil liberties by misidentifying or profiling people based on race or ethnicity, as reported earlier on ECmag.com. Oakland and San Francisco have banned the use of facial recognition software by police and other government agencies. But at the recent committee hearing, Darryl Lucien, managing partner of Lucien Partners and a representative for the Los Angeles Police Protective League, said facial recognition is a necessary tool for making criminal investigations more accurate.
Parker commented that any policy governing the use of facial recognition must take a use-case-specific, risk-based approach that factors in the privacy implications of each application.
“A blanket ban often is done without a full understanding of the proven, positive effects of the technology and its broad range of uses. A rush to implement one-size-fits-all rules could result in unintended consequences if the full range of uses is not considered. While any technology has the potential for misuse, we believe facial recognition should only be used for purposes that are ethical, non-discriminatory and consistent with our Constitutional framework of laws and regulations,” Parker said.