CLEARVIEW AI PRINCIPLES
AI FOR PUBLIC SAFETY
Clearview AI’s mission is to create and deliver identification technology that helps combat crime and fraud, keep communities safe and commerce secure, protect victims, and promote justice.
We aim to help protect the public through processes that are consistent with protecting fundamental freedoms and human rights. We have developed and applied best practices to all uses and every user of our identity solutions.
WHAT DO WE DO & HOW DOES IT WORK?
Clearview AI acts as a search engine of publicly available images – now more than ten billion – to support investigative and identification processes by providing highly accurate facial recognition across all demographic groups. Similar to other search engines, which pull and compile publicly available data from across the Internet into an easily searchable universe, Clearview AI compiles only publicly available images from across the Internet into a proprietary image database used in combination with Clearview AI’s facial recognition technology. When a Clearview AI user uploads an image, Clearview AI’s proprietary technology processes the image and returns links to publicly available images that match the person pictured in the uploaded image.
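The retrieval flow described above can be sketched roughly as follows. This is a minimal illustration of embedding-based image search in general, not Clearview AI’s actual implementation; every function name, field name, and URL here is a hypothetical placeholder, and a real system would use a trained face-recognition model and an approximate-nearest-neighbor index rather than a brute-force scan.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical index: each entry pairs a face embedding (as produced by some
# face-recognition model) with the public URL its source image came from.
INDEX = [
    {"embedding": [0.9, 0.1, 0.0], "url": "https://example.com/photo-a"},
    {"embedding": [0.1, 0.9, 0.1], "url": "https://example.com/photo-b"},
    {"embedding": [0.88, 0.12, 0.02], "url": "https://example.com/photo-c"},
]

def search(probe_embedding, top_k=5):
    """Return links to indexed public images ranked by similarity to the probe."""
    scored = [
        (cosine_similarity(probe_embedding, entry["embedding"]), entry["url"])
        for entry in INDEX
    ]
    scored.sort(reverse=True)  # highest similarity first
    return [url for _, url in scored[:top_k]]
```

The key design point the sketch captures is that the output is a ranked list of links back to source images, which a human investigator then evaluates, rather than an identity decision.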
Clearview AI currently offers its solutions to only one category of customer – government agencies. It limits the uses of its system to agencies engaged in lawful investigative processes directed at serious criminal conduct or at preventing specific, substantial, and imminent threats to people’s lives or physical safety. In each case, Clearview AI requires its government customers to make independent assessments of whether there is a match between the images retrieved by Clearview AI and the image provided by the customer. Each decision about a match is made by a professional employed by a government agency who has been trained in examining facial images, not by an automated process.
Clearview AI’s system is designed to take into account age progression, variations in poses and positions, changes in facial hair, and many other visual conditions, and to provide customers only with outputs that are true positives at a rate of 99% or better across all demographic groups.
STANDARDS & POLICIES
Clearview AI has developed required standards and best practices for the use of facial recognition technology which it applies to every use of its solutions. These include:
Clearview AI provides results from its facial recognition software for human review only when it can validate, at a confidence of 99% or better, a match with a comparison image already in the possession of its customer for an initial identification of an individual; that identification is then subject to non-automated human review and verification. Any image that does not reach this high probability of being a true positive is excluded from the results and not provided to the customer for any use, except as needed to protect children from crimes by adults, as detailed in the section “Protecting Children.”
Clearview AI limits its results to images meeting the 99% or better standard for every demographic group, to ensure that Clearview AI’s data is provided without bias across all groups, regardless of age, gender, ethnic background, or race. While Clearview AI’s results exceed the 99% standard for some groups, it has adopted an identical accuracy requirement for all of its results to ensure that no group is put at risk through a lesser standard for identification, except as needed to protect children from crimes by adults, as detailed below in the section “Protecting Children.” In cases where the 99% or better standard is not reached, Clearview AI returns no results.
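The uniform-threshold policy described above can be sketched as a simple filter. This is an illustrative assumption about how such a gate could work, not Clearview AI’s actual code; the 0-to-1 confidence scale and the function name are hypothetical.

```python
MATCH_THRESHOLD = 0.99  # same cutoff applied to every demographic group

def filter_results(candidates, threshold=MATCH_THRESHOLD):
    """Keep only candidates whose match confidence meets the threshold.

    `candidates` is a list of (confidence, url) pairs. Anything below the
    threshold is dropped entirely rather than shown with a lower score,
    so a search with no high-confidence candidates returns no results.
    """
    return [(score, url) for score, url in candidates if score >= threshold]
```

The design choice the sketch highlights is that below-threshold candidates are withheld outright instead of being surfaced with a caveat, so the same accuracy floor applies to every group.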
Testing & Validation
Clearview AI conducts regular testing and validation of its system through industry-standard testing by objective third parties. It has used a system developed by an independent U.S. academic institution to test and validate its system to verify that it meets its promised standard for 99% or better accuracy for adults across all demographics.
Data Security
Clearview AI has put into place measures to guard against the risk of unauthorized access or use. Its secure, cloud-based platform, which houses its more than ten billion facial images, relies on end-to-end encryption for the transmission of data, inbound and outbound, and on systems that create a record enabling legal authorities, as needed, to reconstruct each use of Clearview AI’s system to help determine a person’s identity. Clearview AI stores all live data on servers in a secured data center with strict internal access controls, and it retains independent external organizations to provide security assessments of its systems annually.
Protecting the Privacy of Data Subjects
Clearview AI limits the data it collects from the Internet to information that has been made available online to the general public. It requires each of its customers to adhere to all applicable data protection laws in their use of Clearview AI’s technology and its data. Clearview AI does not share client-uploaded images with any entity other than government customers engaged in lawful investigations and public protection, except as required by law, such as providing data subjects access to data pertaining to them, when applicable.
Limiting Uses to What’s Legal, Ethical, and in the Public Interest
Clearview AI licenses its technology only for limited and lawful purposes. These include helping government agencies identify criminals after a crime has taken place and providing lead information to help track down people engaged in child exploitation, terrorist activity, or to investigate other specific, substantial and imminent threats to people’s lives or physical safety.
Protecting Against Misidentification
Clearview AI’s system is hard-coded to limit the return of false positives. It intentionally does not include match scoring or percentage matching with results. To limit the risk of a customer using the technology to identify the wrong person, Clearview AI’s system returns no results if a search falls below the 99% threshold for accuracy, except as needed to protect children from crimes by adults, as detailed below in the section “Protecting Children.” Clearview AI requires that any possible match generated by the Clearview AI system be reviewed and assessed by a trained law enforcement agent to verify and validate the possible match, so that no identification of any person is based solely on automated results. Clearview AI also undertakes its own ongoing reviews of its system to detect and counter the risk of algorithmic bias with respect to race, gender, and age.
Clearview AI imposes strict conditions on the use of its technology to limit its use to purposes that are both lawful and authorized. Clearview AI’s image database is cloud-based, enabling it to halt the use of its technology by any customer who violates its obligation to use Clearview AI only for lawful and authorized purposes. Clearview AI will shut down the use of its technology by any customer, anywhere, as appropriate to investigate and to stop any abuse that may be identified.
Every use of Clearview AI requires the production of a unique, finished report that contains all of the information used by its customer for an identification, as well as applicable metadata. Clearview AI thereby ensures that this stored information can be made available by its customers to legal and judicial authorities, to those charged with oversight to provide for accountability, and/or to data subjects, as authorized by applicable law.
Protecting Human Rights
For every law enforcement use, Clearview AI has designed its system to require a lawful predicate. For example, a law enforcement agency must identify the specific category of crime under investigation after a crime has taken place prior to enabling a search of its database. These controls make Clearview AI’s system for law enforcement generally usable only for post-event law enforcement. Clearview AI authorizes limited additional uses of its technology for governments to enable it to assist them in preventing specific, substantial and imminent threats to people’s lives or physical safety.
PROCEDURES TO PROTECT DATA SUBJECTS
Clearview AI requires all of its users to have in place processes and procedures to protect data subjects, so that its technology is only used for lawful and proper purposes consistent with the public good, and is not abused to threaten civil rights, civil liberties, and personal privacy. These procedures include:
Providing a Specific, Lawful Basis For Each Search Undertaken Using Clearview AI’s System
In the case of law enforcement investigations after a crime has taken place, Clearview AI requires the law enforcement agency to specify the particular crime(s) being investigated. In the case of specific, substantial and imminent threats to the public, Clearview AI requires the government agency to specify the lawful basis of the search and the reason immediate identification is needed to assist personnel in carrying out their lawful duties.
Requiring the Preservation of Data & Metadata
To protect the rights of each person who is identified through a process that makes use of Clearview AI’s System, Clearview AI has developed a system to ensure that every search is documented to maintain the integrity of the search and the ability to assess that it was done properly and lawfully. Clearview AI preserves and reports the metadata accompanying every search, which provides the date of the search, the nature of the information used to initiate the search (such as an image or other information in the possession of the law enforcement agency), and other information helpful to ensuring the integrity of the search process and its lawfulness.
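The per-search record described above can be sketched as a simple audit-log entry. The field names and functions below are hypothetical illustrations of the kind of metadata the text describes (date of search, lawful basis, nature of the input), not Clearview AI’s actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class SearchRecord:
    """Immutable audit entry preserved for every search (illustrative fields)."""
    agency: str             # which customer ran the search
    lawful_basis: str       # e.g. the specific crime under investigation
    input_description: str  # nature of the probe (image reference, case number)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

AUDIT_LOG: list[SearchRecord] = []

def log_search(agency, lawful_basis, input_description):
    """Append an audit record so oversight bodies can reconstruct each use."""
    record = SearchRecord(agency, lawful_basis, input_description)
    AUDIT_LOG.append(record)
    return record
```

Making each record immutable (`frozen=True`) mirrors the integrity goal in the text: once a search is logged, the entry documenting it cannot be quietly altered.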
Requiring Specialized Training to be Provided for All Users Authorized to Access Clearview AI’s System
Clearview AI does not decide that an image of a face is a particular person. It provides the results of a search of its database, based on an image provided by a law enforcement agency, that its algorithms have determined to be true positives at a confidence of 99% or better across all demographic groups. The law enforcement agency is required to have any personnel who will be using Clearview AI’s technology and images participate in training programs before they are authorized to use the facial recognition system. In any use of Clearview AI’s system and database, a law enforcement agent must review the images and any relevant information in the possession of the government agency to determine whether there is a match, and to decide whether to undertake further investigative steps. Proposed matches must then go through a peer review process, so that a decision on whether there is an apparent match is subject to a further check by one or more persons in addition to the original agent. All of these steps are designed to protect the rights of the data subject and to reduce the risk of mistakes.
Prohibiting Purely Automated Matching, Requiring Investigative Process for Each Match
Clearview AI does not license its system to law enforcement for purely automated matching. It requires that there always be a person exercising judgment before a match can be declared. Facial examiner training must cover facial recognition system functions; interpreting results; best practices for public safety use of facial recognition technology; assessing image quality and suitability for face recognition searches; proper and improper uses of image enhancement tools for image pre-processing; procedures and criteria for face image comparison; candidate image comparison, annotation, and background verification processes; and related processes and procedures, to promote accuracy and accountability.
TESTING & VALIDATION
Clearview AI undertakes regular internal testing and validation of its system to ensure that it meets or exceeds the 99% or better requirement for a true positive for all demographics. It has found that Clearview AI’s technology currently meets this standard for all groups, regardless of age, gender, ethnic background, or race, for all persons 16 years or older.
In the National Institute of Standards and Technology (NIST) 1:1 Face Recognition Vendor Test (“FRVT”) that evaluates demographic accuracy, published on December 16, 2021, Clearview AI’s algorithm consistently achieved greater than 99 percent accuracy across all demographics.
In the National Institute of Standards and Technology (NIST) 1:N Face Recognition Vendor Test (“FRVT”), published on December 16, 2021, Clearview AI’s algorithm correctly identified the matching face out of a gallery of 12 million photos at an accuracy rate of 99.85 percent, far more accurate than the human eye.
Established by Congress in 1901, the National Institute of Standards and Technology, a division of the U.S. Department of Commerce, provides the marketplace with accurate and reliable information about companies’ measurable industrial and technology performance capabilities.
PROTECTING FUNDAMENTAL FREEDOMS
Clearview AI is committed to ensuring that its facial recognition system is used for the public good. Its proper uses include helping bring criminals to justice, stopping terrorists and child abusers from ongoing criminal conduct, and protecting public safety, while minimizing risks to individual privacy, civil rights, civil liberties, and other legally protected interests.
Clearview AI’s general standards and the policies and the procedures it has put into place to protect data subjects are all intended to fulfill that commitment.
In every license, Clearview AI requires its government customers to limit their uses of Clearview AI technology to those that are consistent with rule of law and civil rights.
Other requirements Clearview AI has put in place to protect human rights and fundamental freedoms include refusing to authorize real-time use of its technology for government surveillance of any population or subgroup.
Clearview AI will suspend any customer’s access to its technology when there is an indication of potential abuse of its system. Clearview AI will investigate any such indication and take appropriate action, including putting into place further restrictions on use or terminating the customer, to counter the risk of abuse.
PROTECTING CHILDREN
Children represent a special, and challenging, class for facial recognition purposes. Because of the facial changes that take place as a person matures, images of children are harder to identify with certainty than images of people who are 16 or older. Clearview AI also recognizes that privacy issues involving children are especially sensitive. Facial recognition technology should be enabled only for uses that protect children and never for any purpose that could harm any child.
Clearview AI’s technology is a tool of unprecedented power in the fight against child sexual exploitation. Protecting children also means empowering law enforcement to deliver justice to victims and stem the torrent of online sexual abuse material.
Accordingly, Clearview AI’s technology is authorized for use with images of persons under the age of 16 only for the purposes of protecting a child’s safety, identifying victims when a child’s welfare is at risk, investigating violent felonies, and helping to protect against the spread of child sexual abuse material (CSAM).
Clearview AI only licenses its technology for use in jurisdictions where such use is lawful.
Clearview AI has designed its system to help achieve important public interests. The company adheres to applicable legal requirements in every aspect of its technology, from its acquisition and maintenance of images to its licensing of access to those images for facial identification for approved customers.
Clearview AI’s technology is designed to be used in a responsible and proportionate manner, consistent with all applicable laws. Clearview AI’s systems have built-in controls intended to reduce the risk of abuse of its technologies, to detect any such abuse, and to enable the company to terminate any user who engages in an improper or otherwise unauthorized use of Clearview AI.
Last Updated: January 7, 2022