
What Clearview AI has Implemented to Ensure That Facial Recognition Technology is Used Responsibly


At Clearview AI, every decision we make is grounded in a commitment to ensuring that our technology is used ethically to make society safer. Our mission as a company is to provide technology that can help prevent and solve crime and fraud. Our software is faster, more robust, and more accurate than any competing platform or traditional identification method. We believe that we can help governments do their work of protecting the public while respecting privacy.


With any new technology, we want to ensure that it is used for the best and highest purpose, while proactively limiting any potential downsides of the technology. We have implemented both technical solutions and human processes and procedures that significantly reduce the chance of misidentification and misuse.


I am writing this to address common questions about facial recognition, explain how our platform works in practice, and describe the structural safeguards we have in place to carry out our commitment to safe, ethical facial recognition technology.






HOW DOES OUR PLATFORM WORK?


Who may access Clearview AI’s platform?

Clearview AI’s facial recognition search engine is not a consumer application. Our only current customers are government agencies. Our platform may only be used to assist government agencies in the course of law enforcement investigations or in connection with national security matters.


Each customer is vetted to ensure that it is a legitimate government agency and has the appropriate authorization before starting a trial. We encourage each customer to have a public-facing Facial Recognition Policy outlining the use cases, situations, and types of crimes for which it will use facial recognition.


Training protocols

Even though Clearview AI offers a simple, intuitive, and easy-to-use facial recognition platform, we provide training before any individual may access our software. We also require our agency users to appoint administrators to oversee the use of Clearview AI by their employees and manage access. We provide reporting tools that enable them to generate usage reports and audits. Our terms of use require our users never to rely on the results we provide as the sole means of identifying a suspect. Each possible match must be confirmed by independent, corroborating information.


Not real-time surveillance

Clearview AI’s database is not used for any real-time surveillance. Surveillance is the live monitoring of behavior, activities, or information. A platform like Clearview AI is used to generate leads connected to an incident after an event has occurred. The process of uploading an image of a suspect, victim, or person of interest after an incident occurs is not “monitoring” or “surveillance” of an individual. Rather, it is an information gathering step in the investigative process.


After-the-crime investigations

Clearview AI’s investigative platform is only used for after-the-crime investigations. What that means is that if a crime has occurred and there is a photo of an unidentified suspect or victim, law enforcement can search that photo against Clearview AI’s database of publicly available images to help the investigative process. The use of facial recognition in this case is the beginning of an investigation, not the definitive answer to an investigation.


Strict standards for accurate results

Unlike other facial recognition technologies for law enforcement, Clearview AI does not display a percentage match score or distance score next to its results. This pushes the investigator to click the web link associated with each result and investigate further, rather than relying on an algorithm, however accurate, as the sole piece of information for making an identification.


Clearview AI’s system also cannot be changed to modify the confidence threshold that determines which results are displayed. Some other facial recognition systems for law enforcement always return a defined number of results, no matter how accurate or inaccurate they may be, and can be configured to return additional, lower-confidence results. Clearview AI’s system cannot be modified in this way. Our threshold for showing results is very strict: we would rather show no results than show a false positive.
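
To make the contrast concrete, here is a minimal sketch in Python (the names and threshold values are hypothetical choices of mine, not Clearview AI's actual code) of the difference between a fixed, strict threshold that may return nothing and a top-K design that always returns some number of candidates:

```python
# Illustrative sketch only: hypothetical names and values, not Clearview AI's actual code.
from dataclasses import dataclass

@dataclass
class Candidate:
    source_url: str     # link to the publicly available page where the image appears
    similarity: float   # internal similarity score; never shown to the investigator

def fixed_threshold_results(candidates, threshold=0.93):
    """Return only candidates above a strict, non-configurable threshold.
    If nothing clears the bar, return an empty list rather than a weak match."""
    return [c for c in candidates if c.similarity >= threshold]

def top_k_results(candidates, k=10):
    """What some other systems do: always return the k closest faces,
    however weak those matches are."""
    return sorted(candidates, key=lambda c: c.similarity, reverse=True)[:k]
```

Under the fixed-threshold approach, a probe photo with no good match in the database simply comes back empty, which is the behavior described above.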


Common uses

After initial training, law enforcement can use our platform to help identify a potential criminal suspect by comparing an image law enforcement already has with the images in our database. Oftentimes that image records a suspect in the course of committing a criminal act. This has included identifying suspects from online child sexual abuse material obtained by a law enforcement agency.

Case Documentation

Before processing an image, law enforcement must provide information about the suspected crime and intended use. This information is saved for agency audits, and helps the agency’s command staff identify any potential misuse of our platform which would be grounds for preventing future use. After filling out an intake form, law enforcement can process an image through our platform, and our proprietary AI searches our database for a facial match.


Results are returned with a link to where the image was found publicly available online. Our platform does not provide personally identifying information such as name, address, or date of birth; it returns only images and links to the online sites where those images appear. It is up to the investigator to follow those links and do further research to find the additional information needed to make an identification.
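
As a rough sketch of the workflow described in the last two paragraphs (the field and function names are hypothetical, not Clearview AI's actual API), a search request carries the case information that is retained for agency audits, and each result exposes only an image and the public page where it was found:

```python
# Illustrative sketch only: hypothetical names, not Clearview AI's actual API.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SearchRequest:
    officer_id: str
    case_number: str       # crime and intended-use details are required up front
    crime_type: str
    intended_use: str
    probe_image_path: str

@dataclass
class SearchResult:
    thumbnail_url: str     # the publicly available image that matched
    source_page_url: str   # where that image appears online
    # Deliberately no name, address, or date-of-birth fields: identification
    # is left to the investigator's independent follow-up research.

def write_audit_log(entry: dict) -> None:
    # Stub: a real deployment would persist this for administrator review.
    print("AUDIT:", entry)

def run_face_search(probe_image_path: str) -> list:
    # Stub: a real system would compare the probe against the indexed gallery.
    return []

def log_and_search(request: SearchRequest) -> list:
    audit_entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "officer_id": request.officer_id,
        "case_number": request.case_number,
        "crime_type": request.crime_type,
        "intended_use": request.intended_use,
    }
    write_audit_log(audit_entry)   # reviewed later by agency administrators
    return run_face_search(request.probe_image_path)
```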


Industry-leading accuracy

Our powerful algorithm’s performance meets the gold standard for facial recognition. Clearview AI’s algorithm can pick the correct person out of a lineup of 12 million photos with a staggering 99.85 percent accuracy. This accuracy has been verified by third-party testing at the National Institute of Standards and Technology (NIST), and the algorithm performs with substantially equal effectiveness regardless of race, age, gender, or other demographic features.


Bias-free policing

In the NIST 1:1 Face Recognition Vendor Test ("FRVT") that evaluates demographic accuracy, Clearview AI’s algorithm consistently achieved greater than 99 percent accuracy across all demographics.


According to the Innocence Project, eyewitness misidentification has contributed to roughly 70 percent of wrongful convictions later overturned by DNA evidence. Accurate facial recognition technology like Clearview AI can help create a world of bias-free policing. As a person of mixed race, I find this highly important.


Furthermore, accurate, unbiased facial recognition technology can decrease the chance of the wrong person being apprehended. It is far better for law enforcement to identify someone accurately than to search for anyone matching a general description, a situation in which wrongful detentions, apprehensions, and arrests are more likely, especially in black and brown communities.


No wrongful arrest

As I mentioned earlier, our commitment to accuracy means that we would rather return no matches than a false positive. Our search result threshold is intentionally set so that the chance of an individual being misidentified is small, and users cannot change this setting. To date, there has never been a reported wrongful arrest resulting from an agency’s use of Clearview AI.







HOW DO WE COLLECT INFORMATION FOR OUR DATABASE?


Only public information

Clearview AI’s image repository consists of public data that can be obtained through a typical Google search. The images in our database come from news media sites, mugshot websites, public social media, and other open sources. This means that if your social media post is set to private, it will not appear in Clearview AI search results.


A larger dataset prevents bias

Clearview AI searches against billions of public images, which reduces potential bias and increases accuracy in several ways. First, the chance of an incorrect search result is lower when the dataset contains an accurate search result, which is more likely when the dataset is larger rather than smaller. Second, Clearview AI’s public online dataset reduces or eliminates demographic bias caused by selection effects.
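
To put the first point in back-of-the-envelope terms (the numbers below are invented for illustration, not measured Clearview AI statistics): when the probe’s true identity is in the gallery, a correct match can out-rank any look-alike; when it is not, any face that clears the threshold is necessarily wrong, so broader coverage lowers the overall error rate.

```python
# Toy illustration with invented numbers; not measured Clearview AI statistics.
def wrong_match_rate(coverage: float,
                     err_when_present: float = 0.001,
                     err_when_absent: float = 0.02) -> float:
    """Overall chance a search surfaces the wrong person.

    coverage:          probability the probe's true identity is in the gallery
    err_when_present:  chance of a wrong match when the true identity IS indexed
    err_when_absent:   chance of a wrong match when it is NOT (any hit is wrong)
    """
    return coverage * err_when_present + (1 - coverage) * err_when_absent

# A larger gallery means higher coverage, and therefore a lower error rate.
for cov in (0.30, 0.60, 0.90):
    print(f"coverage {cov:.0%}: wrong-match rate {wrong_match_rate(cov):.3%}")
```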


Searches of the public internet enable law enforcement to go beyond images of individuals in their area. Many crimes cross state or international borders (especially human, gun and drug trafficking, as well as online sexual exploitation of minors). Because of the breadth of our database, Clearview AI is the most effective facial recognition tool for law enforcement agencies who need to identify and locate potential suspects from outside their own jurisdiction.


Every photo in the dataset is a potential clue that could save a life, provide justice to a victim, prevent a wrongful identification, or exonerate an innocent person.







SOCIAL MEDIA PRIVACY

Clearview AI’s facial recognition search engine operates similarly to other search engines. The only social media images searched by Clearview AI are images that are available to the general public. When a user modifies his or her privacy settings to prevent the general online public and other search engines from viewing a particular image, he or she also restricts Clearview AI’s access.
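
As a generic analogy for how any public web crawler behaves (this is a sketch of ordinary crawling, not Clearview AI’s actual collection code), an image behind a login or a private setting simply is not reachable by an anonymous request and so is never indexed:

```python
# Generic sketch of public-only collection; not Clearview AI's actual crawler.
import urllib.error
import urllib.request

def fetch_if_public(url: str):
    """Fetch an image exactly as an anonymous visitor (or search engine) would.

    Private or login-protected pages reject anonymous requests, so their
    images can never enter a public index."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            if resp.status == 200 and resp.headers.get_content_type().startswith("image/"):
                return resp.read()
    except urllib.error.HTTPError:
        pass   # 401/403/404: not publicly available, skip it
    except urllib.error.URLError:
        pass   # unreachable host, skip it
    return None
```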






STATE-OF-THE-ART DATA SECURITY

Clearview AI is committed to ensuring that our data is protected by the highest standards of data security. We have invested significant resources to provide our platform with world-class cybersecurity protection. Our platform is third-party certified as SOC 2 compliant for security, is subject to regular penetration tests, and features advanced data encryption, as well as intrusion detection. Look for a future blog on this topic!







THANK YOU

We are committed to ensuring that Clearview AI’s technology is used for good, and ensuring it does not fall into the wrong hands. We receive daily testimonials from our law enforcement customers sharing how Clearview AI has been able to solve crimes such as human trafficking, financial fraud and money laundering, and to protect the innocent.


We are open to regulation and we are honored to be at the center of the debate on privacy, safety and security. Thank you for taking the time to read about how our technology is used in practice, and how facial recognition can make the world a safer place.


___


HOAN TON-THAT

Co-Founder & CEO, Clearview AI

A self-taught engineer, Hoan Ton-That is of Vietnamese and Australian heritage. His father’s family is descended from the Royal Family of Vietnam. As a student, Hoan was ranked the #1 solo competitor in Australia’s Informatics Olympiad and the #2 guitarist under age 16 in Australia’s National Eisteddfod Music Competition. At the age of 19, Hoan moved from Australia to San Francisco to focus on his career in technology. He created over twenty iPhone and Facebook applications with over 10 million installations, some of which ranked in the App Store’s Top 10. Hoan moved to New York City in 2016. In 2017, he co-founded Clearview AI and focused his energy on developing the core technology, raising capital, and building the team and product.
