ACLU's Test Results of Amazon's Rekognition AI Facial Recognition Technology Debunked

By the Clearview AI Blog


In 2018, the American Civil Liberties Union (ACLU) conducted a test of Amazon's Rekognition facial recognition technology on the California State Assembly. The ACLU claimed that the technology had a high rate of false positives when comparing photos of every member of the California state legislature against a database of 25,000 public mugshots, and that it disproportionately misidentified people of color.


The test and its results were widely reported in the media and sparked a broader debate about the potential risks and benefits of facial recognition technology.


However, Amazon disputed the ACLU's methodology and findings, stating that the organization had used the technology improperly and had not followed best practices in setting up and using the system. Amazon also pointed out that the ACLU had used a low confidence threshold of 80 percent, rather than the 99 percent Amazon recommends for law-enforcement use cases, which would have resulted in a higher rate of false positives.


To evaluate these competing claims, it is important to understand the two main applications of facial recognition technology: verification and identification.



VERIFICATION

In verification, a one-to-one comparison, the technology is asked whether a specific person's face matches a reference image, and it returns a definitive yes-or-no answer. Because the system itself makes that match determination, instances of false positives or false negatives are directly attributable to the technology's performance.
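To make the distinction concrete, the 1:1 verification flow can be sketched in a few lines of Python. The similarity scores and threshold below are hypothetical placeholders for illustration, not Rekognition's actual values or API:

```python
# Illustrative sketch of 1:1 verification, assuming the system reduces a
# face comparison to a single similarity score (values here are made up).

def verify(similarity: float, decision_threshold: float) -> bool:
    """Yes/no decision: the system itself declares match or no match."""
    return similarity >= decision_threshold

# Because the system owns the decision, a wrong answer here is directly
# a false positive or false negative of the technology.
print(verify(0.97, 0.90))  # True  -> system claims a match
print(verify(0.42, 0.90))  # False -> system claims no match
```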



IDENTIFICATION

In identification, a one-to-many search, the technology is asked to return a list of candidates whose faces are similar to a reference image, with the user setting a confidence threshold for how similar a candidate must be. In the ACLU's test, the organization ran an identification search at an 80% confidence threshold, effectively asking the technology to return every image that was at least 80% similar to the reference image.
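A rough sketch of how such a threshold behaves follows. This is an illustration with made-up gallery names and similarity scores, not the Rekognition API:

```python
# Illustrative sketch of 1:N identification: filter a candidate gallery
# by a user-chosen confidence threshold. All scores below are hypothetical.

def identify(gallery_scores, threshold):
    """Return every gallery entry whose similarity meets the threshold,
    sorted from most to least similar: a candidate list, not a verdict."""
    matches = [(name, score) for name, score in gallery_scores.items()
               if score >= threshold]
    return sorted(matches, key=lambda m: m[1], reverse=True)

# Hypothetical similarity scores between one probe photo and a mugshot gallery.
gallery_scores = {
    "mugshot_0001": 0.83,
    "mugshot_0002": 0.81,
    "mugshot_0003": 0.995,
    "mugshot_0004": 0.62,
}

# At an 80% threshold, three candidates come back, including two weaker
# matches that a human reviewer would be expected to evaluate and reject.
print(identify(gallery_scores, 0.80))

# At a 99% threshold, only the strongest candidate remains.
print(identify(gallery_scores, 0.99))
```

Lowering the threshold does not make the system "misidentify" anyone; it simply widens the candidate list that a human reviewer is expected to examine.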


These test criteria meant that the technology returned a list of candidates meeting the specified similarity threshold, which included some individuals who were not the legislators being searched for. However, the ACLU characterized this as a "failure" of the technology and implied that the technology had misidentified the legislators as felons or wanted individuals.


The technology performed as intended during the test, and the ACLU's understanding of the results was incorrect. Facial recognition technology from different vendors can have varying capabilities, with some offering users the ability to adjust confidence levels and others having fixed settings. In addition, some systems allow users to specify the number of candidates they want returned in a search. This means that it's important to understand the capabilities and limitations of a particular facial recognition system before evaluating its performance.


In conclusion, the ACLU's test of Amazon's Rekognition technology on the California Assembly was not a failure of the technology, but rather a failure to accurately characterize the test and the technology's performance. To avoid misleading or incorrect conclusions, it is crucial to use facial recognition technology responsibly and to have a thorough understanding of how it operates.

