
U.S. COMMISSION ON CIVIL RIGHTS

Statement From Hoan Ton-That, CEO of Clearview AI

March 8, 2024


Dear Commissioners Garza, Nourse, Gilchrist, Adams, Jones, Magpantay, Kirsanow and Heriot,


It is an honor and a pleasure to participate in our conversation today, which covers the very important topic of technology’s impact on civil rights. As a person of mixed race, it’s especially important to me that technology is deployed into the world in a way that protects and enhances civil rights.


My name is Hoan Ton-That and I’m the founder and CEO of Clearview AI, a facial recognition search engine company. Our product is used by law enforcement and government agencies to solve crimes, such as child exploitation, murder, money laundering and financial fraud, as well as to investigate threats to national security. It is used in an after-the-fact forensic manner, not in real-time, and it only searches public information from the internet. 


Clearview AI’s facial recognition search engine has proven to be an extremely effective tool for law enforcement and national security agencies. For example, Clearview AI’s technology played an essential role in the investigation that followed the storming of the Capitol on January 6th, by helping law enforcement agencies investigate unidentified persons pictured engaging in violence that day.


Our technology has also been used by public defenders to help exonerate the innocent. In June 2023, the New York Times reported that a Florida man was facing 15 years in jail for a vehicular manslaughter he did not commit. His public defender used Clearview AI’s technology to identify another witness to the incident from police bodycam video. After that witness testified, the charges were dropped.


Clearview AI’s Efforts to Maximize Accuracy & Accountability


Clearview AI has worked hard to validate our algorithm against external benchmarks and to create safeguards in our software and business practices that help protect civil rights and mitigate racial bias. As a result of these efforts, we offer one of the most accurate facial recognition systems in the world today. Here are some examples of our initiatives to ensure accuracy and accountability:


  1. Clearview AI’s technology is only available to government agencies and government contractors. We believe that this helps ensure our technology is being used in a manner consistent with the public good. 
     

  2. Clearview AI submits its algorithm to NIST’s Face Recognition Technology Evaluation program, which offers the world’s most comprehensive accuracy and bias testing. In the NIST 1:N Face Recognition Vendor Test ("FRVT"), Clearview AI’s algorithm found the correct face out of a lineup of 12 million photos at an accuracy rate of 99.85 percent, far more accurate than the human eye. In the NIST 1:1 FRVT, which evaluates demographic accuracy, Clearview AI’s algorithm consistently achieved greater than 99 percent accuracy across all demographics. As a person of mixed race, I consider an unbiased algorithm especially important, so we have worked to ensure that our neural network is trained on data that is representative of all demographic groups. We are confident that our algorithm is much more accurate than the human eye across age, gender, and race.
     

  3. We have built strong auditing features into our platform to ensure that it is used for legitimate law enforcement purposes. Every law enforcement officer who uses Clearview AI must document the purpose of each search by assigning it a crime type and case number, ensuring that all searches are tied to a legitimate investigation. Each law enforcement agency must also designate an administrator who conducts audits to confirm that every search serves a legitimate purpose. Clearview AI’s application includes a suite of tools that lets administrators view all search activity in their agency and generate statistics and reports on how the tool is being used, so that agencies can exercise proper oversight and control of facial recognition.
     

  4. We provide training on the responsible use of facial recognition to our customers. Every customer is instructed that facial recognition search results from Clearview AI should never be the sole basis for an arrest; independent investigation and confirmation are always required.
     

  5. We do not display any “percentage match” score or value in our software, ensuring that the investigator using Clearview AI always conducts human review of any candidate results. We are strong believers that technology’s role is not to replace human judgment but to support it, while speeding up time-consuming processes and minimizing mistakes.
     

  6. Unlike other facial recognition vendors, we use an algorithm to proactively check the resolution and quality of images that our users input to start a search (the “probe image”), and to generate an automatic warning if the probe image is classified as low quality. This protects against the risk of misidentification due to low-quality probe images.
     


High-Quality Facial Recognition Systems Can Reduce Bias


I’m sure that you have read that facial recognition technology suffers from issues of accuracy and bias, especially with regard to the faces of persons of color. Some vendors have marketed systems that exhibited what I consider to be unacceptable levels of inaccuracy and racial bias. However, facial recognition technology has evolved dramatically over the last few years. Today, the top-performing algorithms are rated as having extremely high levels of accuracy in NIST testing.[1] Dr. Charles Romine, then-Director of NIST’s Information Technology Laboratory, testified in 2020 before the House of Representatives’ Homeland Security Committee that the class of top-performing algorithms exhibits “undetectable” bias below the “statistical level of significance.”[2]


Despite these technological advances, there is still the question of whether widespread adoption of facial recognition technology by law enforcement is a positive development for communities of color. I believe that it is. Below are some illustrative scenarios that explain how facial recognition technology generally, and Clearview AI’s search engine specifically, can help decrease systemic bias in the criminal justice system.

 

  1. According to the Innocence Project, eyewitness misidentification contributed to roughly 70% of wrongful convictions later overturned by DNA evidence, and it is especially prevalent in cases where the eyewitness and the identified person are of different races. Technology like Clearview AI is, in many senses, much more accurate than the human eye, which is subject to faulty memory and inherent human bias in eyewitness scenarios. By reducing the need to rely on human eyewitnesses, we reduce reliance on one of the most inaccurate and racially biased identification methodologies in criminal justice.
     

  2. The status quo absent technology like Clearview AI is reliance on human eyewitnesses. And eyewitness misidentification is the number one factor in known instances of wrongful conviction, occurring in 190 of the first 250 DNA exonerations.[3] Most of these wrongful identifications were cross-racial, with a white victim wrongfully identifying a black defendant.[4] This is consistent with the long-observed phenomenon that misidentifications are more common where the eyewitness and the identified person are of different races.[5] Technology like Clearview AI can reduce the need to rely solely on human eyewitnesses, which in turn reduces reliance on one of the most inaccurate and racially biased identification methodologies in criminal justice.
     

  3. Currently, when law enforcement encounters a photo of a suspect from camera footage or elsewhere and is unable to identify the person, it puts out a “BOLO” (Be On the Lookout) alert to surrounding law enforcement agencies with a description of the suspect, which typically includes race, gender, and a physical description. This causes officers to look for people who match that description and to question many people who are not the suspect, which can mean unnecessary traffic stops and other police interactions with innocent members of the community. In a world with accurate facial recognition, the technology can help reliably identify the correct person sooner by searching against public information, preventing unneeded police interactions and biased eyewitness identifications.
     

  4. According to Justice Department statistics, Americans of color are disproportionately victimized by violent crime, at rates much higher than white Americans.[6] Investigative facial recognition helps law enforcement agencies solve cases of violent crime quickly. Not only does this provide restorative justice to victims, who are usually members of marginalized communities, but it also ensures that more resources can be dedicated to proactive crime prevention, community policing, and investigations of cold cases. The investigative efficiency made possible by facial recognition technology can help reduce the disproportionate toll that violent crime takes on communities of color.
     

  5. Some other facial recognition systems search only against datasets, such as arrest records and so-called gang databases, that are disproportionately composed of persons of color as a result of pre-existing social inequalities. Clearview AI is different. Because we search a diverse dataset composed of more than 40 billion photos from the public internet, our searches are less likely to perpetuate systemic inequalities.
     

  6. Public defenders often lack the resources to properly investigate on behalf of their clients. Today, they can use facial recognition technology, including Clearview AI’s search engine, to help identify witnesses and suspects and to use that information to exonerate the innocent. 


We believe that facial recognition technology, properly used with training and oversight, can help combat discrimination, reduce systemic bias in the criminal justice system, and respect civil rights. We should not live in a world where police need to rely on imprecise descriptions that include race, height, and gender to identify people. Instead, facial recognition offers us an opportunity to live in a safer world with fewer unnecessary police interactions and misidentifications.


Every positive identification made with Clearview AI’s technology is also an instance in which a misidentification from reliance on eyewitnesses was prevented. This is a positive side of facial recognition that is rarely talked about.
 


Conclusion
 

Clearview AI has worked hard to engage with the public and other key stakeholders and to address their concerns. We have engaged with the media to build understanding and trust with the public, and we have proactively sought out members of Congress and state and local elected officials to educate them about what our technology does. I hope that my testimony here today will continue that process. I believe these efforts have helped bring about a more accurate understanding of facial recognition technology and its positive impact. Public opinion polling consistently shows that broad majorities of Americans support the use of facial recognition technology by law enforcement.[7]

 

This support is not surprising when we take into account the dramatic positive impact that investigative facial recognition technology can have for victims. Most recently, a dedicated Homeland Security Investigations task force used Clearview AI’s technology in an operation that led to the identification of 311 missing and exploited children in only three weeks.[8] HSI and other agencies were able to conduct multiple child rescues as a result. This outstanding result speaks for itself: we can use a highly accurate algorithm, with effective risk mitigation measures, to search public online images and help protect the innocent while providing justice and safety to countless victims.


HOAN TON-THAT

Founder & CEO, Clearview AI

[1]  Michael McLaughlin & Daniel Castro, “The Critics Were Wrong: NIST Data Shows the Best Facial Recognition Algorithms Are Neither Racist Nor Sexist,” Information Technology & Innovation Foundation (Jan. 27, 2020).

[2]  See Testimony of Dr. Charles Romine, House Committee on Homeland Security (Feb. 6, 2020), https://www.c-span.org/video/?469047-1/facial-recognition-biometric-technology (at approximately 33:00).

[3]  Brandon L. Garrett, Convicting the Innocent: Where Criminal Prosecutions Go Wrong 48 (2011).

[4]  Id.

[5]  Id.

[6]  See Violent Victimization by Race or Hispanic Origin, 2008–2021, Appendix Table 2, “Rate of violent victimization, by type of crime and victim race/Hispanic origin, 2017–21,” Bureau of Justice Statistics, Department of Justice (July 2023), https://bjs.ojp.gov/violent-victimization-race-or-hispanic-origin-2008-2021.

[7]  Lee Rainie et al., “AI and Human Enhancement: Americans’ Openness Is Tempered by a Range of Concerns,” Pew Research Center: Internet, Science & Tech (Mar. 17, 2022), https://www.pewresearch.org/internet/2022/03/17/ai-and-human-enhancement-americans-openness-is-tempered-by-a-range-of-concerns/; see also Kay L. Ritchie et al., “Public Attitudes Towards the Use of Automatic Facial Recognition Technology in Criminal Justice Systems Around the World,” PLoS ONE 16(10) (Oct. 13, 2021); Rebecca Kern, “Facial Recognition Tech Finds Public Support in Industry Survey,” Bloomberg Government (Oct. 7, 2020) (“The survey found that 60% of adults have a favorable view of the technology and 70% support its use by law enforcement.”).

[8]  Thomas Brewster, “Exclusive: DHS Used Clearview AI Facial Recognition In Thousands Of Child Exploitation Cold Cases,” Forbes (Aug. 7, 2023), https://www.forbes.com/sites/thomasbrewster/2023/08/07/dhs-ai-facial-recognition-solving-child-exploitation-cold-cases.
