Garoetpos.com – A United States (US) government study has found that face recognition technology exhibits racial bias: nearly 200 face recognition algorithms performed worse on non-white faces.
The finding is based on testing by the US National Institute of Standards and Technology (NIST) of each algorithm on the two most common face recognition tasks. The first, known as “one-to-one” matching, involves matching a person’s photo against another photo of the same person in a database.
This is used, for example, to unlock smartphones or check passports. The second task, known as a “one-to-many” search, involves determining whether a person’s photo matches any photo in a database. This is often used by police departments to identify suspects in an investigation.
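The difference between the two tasks can be sketched in code. This is a minimal illustration only, assuming face photos have already been converted to embedding vectors by some recognition model; the function names and the threshold value are hypothetical, not part of any NIST-tested system:

```python
import math

def cosine_similarity(a, b):
    # Similarity between two face embedding vectors (illustrative only).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def one_to_one_match(probe, claimed, threshold=0.8):
    # "One-to-one": verify the probe photo against a single claimed identity,
    # as when unlocking a phone or checking a passport.
    return cosine_similarity(probe, claimed) >= threshold

def one_to_many_search(probe, database, threshold=0.8):
    # "One-to-many": search a whole database for any identity similar enough
    # to the probe, as in a police suspect lookup.
    return [name for name, emb in database.items()
            if cosine_similarity(probe, emb) >= threshold]
```

A false positive, in this sketch, is when the similarity score clears the threshold even though the two photos show different people.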
The agency studied four face data sets currently used in US government applications: photos of people living in the US, application photos of people applying for immigration benefits, application photos of visa applicants, and photos of people crossing the US border. In total, the data sets contained 18.27 million images of 8.49 million people.
NIST found several high-level results from this study:
1. For one-to-one matching, most systems had a higher rate of false positive matches for Asian and African-American faces than for Caucasian faces, sometimes by a factor of 10 or even 100. In other words, they were more likely to report a match where none existed.
2. The picture changed for face recognition algorithms developed in Asian countries, which showed little difference in false positive rates between Asian and Caucasian faces.
3. Algorithms developed in the US were consistently bad at matching the faces of Asians, African-Americans, and Native Americans, with Native Americans suffering the highest false positive rates.
4. For one-to-many matching, systems had the worst false positive rates for African-American women, which puts this population at the highest risk of being falsely accused of a crime.
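The false positive rates NIST compared can be computed per demographic group. A minimal sketch, using entirely hypothetical trial data (the group labels and counts are illustrative and are not NIST’s figures):

```python
from collections import defaultdict

def false_positive_rate_by_group(trials):
    # Each trial is (group, is_same_person, system_said_match).
    # A false positive is a reported match between two different people.
    counts = defaultdict(lambda: {"fp": 0, "impostor": 0})
    for group, same_person, said_match in trials:
        if not same_person:                 # impostor comparison
            counts[group]["impostor"] += 1
            if said_match:                  # wrongly declared a match
                counts[group]["fp"] += 1
    return {g: c["fp"] / c["impostor"] for g, c in counts.items()}

# Hypothetical trials, not NIST data:
trials = [
    ("group_a", False, True),  ("group_a", False, False),
    ("group_b", False, False), ("group_b", False, False),
]
rates = false_positive_rate_by_group(trials)
```

A gap between groups in this rate is the kind of disparity the study reports: the same threshold produces many more wrong matches for some populations than for others.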
The use of face recognition systems is growing rapidly in law enforcement, border control, and other applications throughout society.
While some previous academic studies had shown popular commercial systems to be biased by race and sex, the NIST study is the most comprehensive evaluation to date and confirms those earlier results. The findings call into question whether these systems should continue to be so widely used.
It now falls to policymakers to find the best way to regulate this technology. NIST also urges face recognition developers to conduct further research into how this bias can be reduced. (*)