
CLEARVIEW AI PRINCIPLES

OUR MISSION 

AI FOR PUBLIC SAFETY

Clearview AI’s mission is to create and deliver identification technology that helps combat crime and fraud, keep communities safe, keep industry and commerce secure, protect victims, and promote justice.

We aim to help protect the public through processes that are consistent with protecting fundamental freedoms and human rights. We have developed best practices and apply them to every use and every user of our identity solutions.

WHAT DO WE DO & HOW DOES IT WORK?

 

Clearview AI acts as a search engine of publicly available images (now more than 50 billion) that supports investigative and identification processes by providing highly accurate facial recognition across all demographic groups. Like other search engines, which pull and compile publicly available data from across the Internet into an easily searchable universe, Clearview AI compiles only publicly available images from across the Internet into a proprietary image database to be used in combination with Clearview AI’s facial recognition technology. When a Clearview AI user uploads an image, Clearview AI’s proprietary technology processes the image and returns links to publicly available images that contain faces similar to the person pictured in the uploaded image.
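The following sketch is purely illustrative and is not Clearview AI’s actual implementation; the embedding model, the threshold value, and all names are hypothetical. It shows, in general terms, how an image search of this kind can work: a probe image is reduced to a numerical embedding, compared against an index built from publicly available images, and only links that clear a fixed similarity threshold are returned.

```python
import numpy as np

# Illustrative only: a minimal threshold-based search over an index of face
# embeddings derived from publicly available photos. The embedding model,
# threshold value, and all names here are hypothetical, not Clearview AI's
# actual implementation.

def search(probe_embedding: np.ndarray,
           index_embeddings: np.ndarray,  # shape (N, d): one unit-length row per indexed image
           index_urls: list,              # URL of the public page each indexed image came from
           threshold: float = 0.85):
    """Return links to indexed images whose faces are similar to the probe image."""
    sims = index_embeddings @ probe_embedding      # cosine similarity (vectors are unit length)
    hits = np.flatnonzero(sims >= threshold)       # keep only candidates above the fixed threshold
    hits = hits[np.argsort(-sims[hits])]           # most similar first
    return [index_urls[i] for i in hits]           # links only; similarity scores are not exposed

# Toy usage with random unit vectors standing in for real face embeddings.
rng = np.random.default_rng(0)
index = rng.normal(size=(5, 128))
index /= np.linalg.norm(index, axis=1, keepdims=True)
urls = [f"https://example.com/photo/{i}" for i in range(5)]
probe = index[2] + 0.05 * rng.normal(size=128)
probe /= np.linalg.norm(probe)
print(search(probe, index, urls))                  # expected to return only the link for photo 2
```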

Clearview AI currently offers its solutions to only one category of customer: government agencies and their agents. It limits use of its system to agencies engaged in lawful investigative processes directed at criminal conduct, or at preventing specific, substantial, and imminent threats to people’s lives or physical safety. In each case, Clearview AI requires its government customers to make independent assessments of whether there is a match between the images retrieved by Clearview AI and the image provided by the customer. Each decision about an identification is made by a professional working on behalf of a government agency, not by an automated process.

 

Clearview AI’s facial recognition algorithm is designed to take into account age progression, variations in poses and positions, changes in facial hair, and many other visual conditions, and to perform at 99% [1] or better accuracy across all demographic groups on key tests.
 

STANDARDS & POLICIES

 

Clearview AI has developed required standards and best practices for the use of facial recognition technology which it applies to every use of its solutions. These include:

Ensuring Accuracy

Clearview AI provides results for human review using only the same algorithm and match threshold settings that achieved 99% or better accuracy on key tests [2]. Those results are then subject to non-automated human review and verification. Any image that does not reach a high probability of being a true positive is excluded from the results and not provided to the customer for any use, except as needed to protect children from crimes by adults, as detailed in the section “Protecting Children.”

Preventing Discrimination

Clearview AI limits the results from its software solely to images retrieved using the same algorithm and match threshold settings that achieved 99% [3] or better accuracy for every demographic group on key tests, to make certain that Clearview AI’s data is provided without bias across all groups, regardless of age, gender, ethnic background, or race. While Clearview AI’s results exceed the standard of 99% [4] accuracy on key tests for some groups, it has adopted the identical accuracy requirement for all of its results to ensure that no group is put at risk through a lesser standard for identification, except as needed to protect children from crimes by adults, as detailed below in the section “Protecting Children.” In cases where the algorithm cannot identify an image that meets the same match threshold that achieved 99% [5] or better results on key tests, Clearview AI returns no results.

Testing & Validation

Clearview AI conducts regular testing and validation of its system through industry-standard testing by objective third parties. It used a benchmark developed by an independent U.S. academic institution to test and validate its system and verify that it meets its promised standard of 99% [6] or better accuracy for adults across a demographically diverse test set. That initial testing and validation was followed by Clearview AI submitting its most recent image analysis algorithm to the National Institute of Standards and Technology, which also validated Clearview AI’s technology as achieving 99% or better accuracy on its test that measures facial recognition performance across demographic groups. [7]

Protecting Security

Clearview AI has put in place measures to guard against the risk of unauthorized access or use. Its more than 50 billion facial images are housed on a secure, cloud-based platform, and all data transmission, inbound and outbound, relies on end-to-end encryption. Its systems create a record of each use of Clearview AI’s system so that legal authorities can, as needed, reconstruct that use in support of lawful investigations. Clearview AI stores all live data on servers in a secured data center with strict internal access controls. Clearview AI retains one or more independent external organizations to provide security assessments of its systems annually.

Protecting the Privacy of Data Subjects

Clearview AI limits the data it collects from the Internet to information that has been made available online to the general public. It requires each of its customers to adhere to all applicable data protection laws in their use of Clearview AI’s technology and its data. Clearview AI does not share client-uploaded probe images with any other entity, and does not share images from the public internet with anyone other than government customers engaged in lawful investigations and public protection, except as required by law, such as providing data subjects with access to data pertaining to them, when applicable.

Limiting Uses to What’s Legal, Ethical, and in the Public Interest

Clearview AI licenses its technology only for limited and lawful purposes. These include helping government agencies identify criminals after a crime has taken place, providing lead information to help track down people engaged in illegal conduct such as child exploitation or terrorist activity, and investigating other specific, substantial, and imminent threats to people’s lives or physical safety. [8]

 

Protecting Against Misidentification

Clearview AI’s system is hard-coded to limit the return of false positives. It intentionally does not include match scores or match percentages with results. To limit the risk of a customer using the technology to identify the wrong person, Clearview AI’s system will return no results if the search falls below the 99% [9] threshold for accuracy, except as needed to protect children from crimes by adults, as detailed below in the section “Protecting Children.” Clearview AI requires that any possible match generated by the Clearview AI system be reviewed and assessed by a trained law enforcement agent to verify and validate the possible match, so that no identification of any person is based solely on automated results. Clearview AI also tests its system to detect and counter the risk of algorithmic bias with respect to race, gender, and age.

Preventing Abuse

Clearview AI imposes strict conditions on the use of its technology to limit its use to purposes that are both lawful and authorized. Clearview AI’s image database is cloud-based, enabling it to halt the use of its technology by any customer who violates the obligation to use Clearview AI only for lawful and authorized purposes. When Clearview AI becomes aware of abuse, it will shut down that customer’s use of its technology, anywhere, as appropriate to investigate and stop the abuse.

Ensuring Accountability

Every use of Clearview AI enables the production of a unique, finished report that contains all of the information used by its customer for an identification, as well as applicable metadata [10]. The system also stores a search history containing the probe image, the purpose of the search, and the identity of the searching user. Clearview AI thereby ensures that this stored information can be made available by its customers to legal and judicial authorities, to those charged with oversight to provide for accountability, and/or to data subjects, as authorized by applicable law.
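As a purely illustrative sketch, and not Clearview AI’s actual schema, a per-search record of the kind described above could be structured as follows; all field names and values are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical structure for the per-search record described above; the field
# names are illustrative and are not Clearview AI's actual schema.

@dataclass
class SearchRecord:
    user_id: str            # identity of the searching user
    agency: str             # agency on whose behalf the search was run
    purpose: str            # lawful basis / category of crime under investigation
    probe_image_ref: str    # reference to the stored probe image
    result_urls: list       # online locations where results were found (the metadata in [10])
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example record of the kind that could later be produced for oversight bodies,
# courts, or data subjects, as authorized by applicable law.
record = SearchRecord(
    user_id="officer-0042",
    agency="Example Police Department",
    purpose="burglary investigation, case 12345",
    probe_image_ref="evidence/case-12345/probe.jpg",
    result_urls=["https://example.com/photo/2"],
)
print(record)
```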

Protecting Human Rights

Clearview AI has designed its system to require a lawful predicate for every law enforcement use. For example, before a search of the database is enabled, a law enforcement agency must identify the specific category of crime under investigation after a crime has taken place, or a similar specified authority, such as the identification of a missing person. These controls generally make Clearview AI’s system usable only for post-event law enforcement. Clearview AI authorizes limited additional uses of its technology to assist governments in preventing specific, substantial, and imminent threats to people’s lives or physical safety.

PROCEDURES TO PROTECT DATA SUBJECTS

 

Clearview AI requires all of its users to have in place processes and procedures to protect data subjects, so that its technology is only used for lawful and proper purposes consistent with the public good, and is not abused to threaten civil rights, civil liberties, or personal privacy. These procedures include:

Providing a Specific, Lawful Basis For Each Search Undertaken Using Clearview AI’s System

In the case of law enforcement investigations after a crime has taken place, Clearview AI requires the law enforcement agency to specify the particular crime(s) being investigated. In the case of specific, substantial, and imminent threats to the public, Clearview AI requires the government agency to specify the lawful basis of the search and the reason immediate identification is needed to assist a person in carrying out lawful duties.

 

Requiring the Preservation of Data & Metadata

To protect the rights of each person identified through a process that makes use of Clearview AI’s system, Clearview AI has developed a system to ensure that every search is documented, preserving the integrity of the search and the ability to assess that it was conducted properly and lawfully. Clearview AI preserves and reports the metadata accompanying every search, which records the date of the search, the nature of the information used to initiate it (such as an image or other information in the possession of the law enforcement agency), and other information helpful to ensuring the integrity and lawfulness of the search process.

 

Requiring Specialized Training to be Provided for All Users Authorized to Access Clearview AI’s System

Clearview AI does not make decisions that an image of a face is a particular person. It provides the results of a search of its database based on an image provided to Clearview AI by a law enforcement agency, returning images produced using the same algorithm and match threshold that have achieved 99% [11] or better accuracy on key tests. As part of the onboarding process, the law enforcement agency is required to have any personnel and agents who will be using Clearview AI’s technology and images participate in training programs before they are authorized to use the facial recognition system. In any use of Clearview AI’s system and database, a law enforcement agent must review the images and any relevant information in the possession of the government agency to determine whether there is a match, and to decide whether to undertake further investigative steps. Proposed matches must then go through a peer review process, so that a decision on whether there is an apparent match is subject to a further check by one or more persons in addition to the original agent. All of these steps are designed to protect the rights of the data subject and to reduce the risk of mistakes.

 

Prohibiting Purely Automated Matching, Requiring Investigative Process for Each Match

Clearview AI does not license its system to law enforcement for purely automated matching. It requires that there always be a person exercising judgment before a match can be declared. Facial examiner training covers facial recognition system functions, interpreting results, best practices for public safety use of facial recognition technology, assessing image quality and suitability for face recognition searches, proper and improper uses of image enhancement tools for image pre-processing, procedures and criteria for face image comparisons, candidate image comparison, annotation, background verification processes, and related processes and procedures, to promote accuracy and accountability.
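The following is a minimal, hypothetical sketch of the human-in-the-loop decision flow described above; the function names are illustrative and do not represent Clearview AI’s software. A candidate is surfaced as a possible match only when a trained agent and a peer reviewer both assess it, never by an automated step alone.

```python
from typing import Callable, List, Optional

# Hypothetical sketch of the human-in-the-loop decision flow described above;
# no candidate is ever declared a match by an automated step alone.

def decide_possible_match(candidates: List[str],
                          agent_review: Callable[[str], bool],
                          peer_review: Callable[[str], bool]) -> Optional[str]:
    """Return a candidate only if a trained agent and a peer reviewer both assess it as a possible match."""
    for candidate in candidates:
        if agent_review(candidate) and peer_review(candidate):
            # A "possible match" triggers further investigative steps; it is
            # not, by itself, an identification of any person.
            return candidate
    return None   # no candidate cleared both reviews; no match is declared
```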

TESTING & VALIDATION

 

Clearview AI undertakes regular internal testing and validation of its system to ensure that its algorithm meets or exceeds its 99% [12] accuracy requirement for all demographics when tested. Clearview AI’s technology currently meets this standard for all groups, regardless of age, gender, ethnic background, or race, for all persons 16 years or older.

In the National Institute of Standards and Technology (NIST) 1:1 Face Recognition Vendor Test ("FRVT") evaluating demographic accuracy, published on December 16, 2021, Clearview AI’s algorithm consistently achieved greater than 99 percent accuracy across all demographics.
 
In the National Institute of Standards and Technology (NIST) 1:N Face Recognition Vendor Test ("FRVT"), published on December 16, 2021, Clearview AI's algorithm matched the correct face out of a lineup of 12 million photos at an accuracy rate of 99.85 percent, which is much more accurate than the human eye.

Established by Congress in 1901, the National Institute of Standards and Technology, an agency of the U.S. Department of Commerce, provides the marketplace with accurate and reliable information about companies’ measurable industrial and technology performance capabilities.
 

 

PROTECTING FUNDAMENTAL FREEDOMS

 

Clearview AI is committed to ensuring that its facial recognition system is used for the public good. Its proper uses include helping bring criminals to justice, stopping terrorists and child abusers from ongoing criminal conduct, and protecting public safety, while minimizing risks to individual privacy, civil rights, civil liberties, and other legally protected interests.

  • Clearview AI’s general standards, and the policies and procedures it has put in place to protect data subjects, are all intended to fulfill that commitment.

  • In every license, Clearview AI requires its government customers to limit their uses of Clearview AI technology to those that are consistent with the rule of law and civil rights.

  • Other requirements Clearview AI has put in place to protect human rights and fundamental freedoms include refusing to authorize real-time use of its technology for government surveillance of any population or subgroup.

  • Clearview AI will suspend any customer’s access to its technology when it has concrete indicators of a potential abuse of its system. Clearview AI will investigate any such indicator and take appropriate action, including putting further restrictions on use in place or terminating the customer, to counter the risk of abuse.

 

PROTECTING CHILDREN

 

Children represent a special, and challenging, class for facial recognition purposes. Due to the facial changes that take place as a person matures, images of children are harder to identify with certainty as they age than are images of people who are 16 or older. Clearview AI also recognizes that privacy issues involving children are especially sensitive. Facial recognition technology should only be enabled for uses that protect children and never for any purpose that could harm any child.

Clearview AI’s technology is a tool of unprecedented power in the fight against child sexual exploitation. Protecting children also means empowering law enforcement to deliver justice to victims and stem the torrent of online sexual abuse material.

Accordingly, Clearview AI authorizes the use of its technology with images of persons under the age of 16 only for purposes of protecting a child’s safety, identifying victims when a child’s welfare is at risk, supporting investigations of violent felonies, and helping protect against the spread of CSAM, where legally authorized.
 

 

ENSURING LEGALITY

 

Clearview AI only licenses its technology for use in jurisdictions where such use is lawful. 
 
Clearview AI has designed its system to help achieve important public interests. The company adheres to applicable legal requirements in every aspect of its technology, from its acquisition and maintenance of images to its licensing of access to those images for facial identification for approved customers.

Clearview AI’s technology is designed to be used in a responsible and proportionate manner, as well as to be consistent with all applicable laws. Clearview AI’s systems have built-in controls intended to reduce the risk of abuse of its technologies and to enable the company to terminate any user who engages in an improper or otherwise unauthorized use of Clearview AI.

[1] This accuracy percentage refers to the percentage of true positive results returned on the NIST Facial Recognition Vendor Test 1:1 Verification, or the Megaface Rank 50 FaceScrub Test. Clearview AI's search results are produced using an algorithm and match threshold consistent with greater than 99% accuracy on those tests.

[2] This accuracy percentage refers to the percentage of true positive results returned on the NIST Facial Recognition Vendor Test 1:1 Verification, or the Megaface Rank 50 FaceScrub Test. Clearview AI's search results are produced using an algorithm and match threshold consistent with greater than 99% accuracy on those tests.

[3] This refers to performance in the category of Demographic Effects, Dataset Application vs Border Crossing on the NIST Facial Recognition Vendor Test 1:1 Verification report card. Clearview AI’s search results are produced using an algorithm and match threshold consistent with greater than 99% accuracy in that category of that test.

 

[4] Id.

[5] This refers to performance returned on the NIST Facial Recognition Vendor Test 1:1 Verification category or the Megaface Rank 50 FaceScrub Test. Clearview AI's search results are produced using an algorithm and match threshold consistent with greater than 99% accuracy on those tests.

[6] This accuracy percentage refers to the percentage of true positive results returned on the Megaface Rank 50 FaceScrub Test. Clearview AI's search results are produced using an algorithm and match threshold consistent with greater than 99% accuracy on that test.

[7] This refers to performance in the category of Demographic Effects, Dataset Application vs Border Crossing on the NIST Facial Recognition Vendor Test 1:1 Verification report card. Clearview AI’s search results are produced using an algorithm and match threshold consistent with greater than 99% accuracy in that category of that test.

[8] Clearview AI itself also uses the technology for administrative, training, development and similar lawful purposes. 

 

[9] This refers to the percentage of true positive results returned on the NIST Facial Recognition Vendor Test 1:1 or the Megaface Rank 50 FaceScrub Test. Clearview AI's search results are produced using an algorithm and match threshold consistent with greater than 99% accuracy on those tests.

 

[10] Consisting of the online locations where relevant search results are found.

[11] This refers to performance on the NIST Facial Recognition Vendor Test 1:1 or the Megaface Rank 50 FaceScrub Test. Clearview AI's search results are produced using a methodology consistent with greater than 99% accuracy on those tests.

 

[12] Here, 99% refers to accuracy percentages as measured by the relevant categories in the NIST FRVT 1:1 Verification or NIST FRVT 1:N Identification tests.

 

Last Updated: June 18, 2024; updated to reflect 50B database size.
