NOVA PBS Official
Are You Feeding a Powerful Facial Recognition Algorithm?
Facial recognition technology has great potential to help law enforcement identify suspects. But collecting and storing data from online photos has raised concern among critics.
Clearview AI's database, with more than three billion faces, is the largest known facial recognition database in the U.S. And because the company continuously gathers data from open-source internet pages—news, mugshot, and social media sites, and even "private" platforms like the money transfer app Venmo—its database is always growing.
Supporters argue that services like Clearview AI were essential to help identify (and ultimately charge) more than 400 of the January 6 Capitol rioters, many of whom were found through “digital breadcrumbs” like photos, location data, and surveillance footage. While traces of online data can be leveraged to help law enforcement investigate suspects, artificial intelligence (AI) software, particularly facial recognition technology, has a darker side.
"Artificial intelligence has the veneer of being objective," says Janai Nelson, Associate Director-Counsel at the NAACP. "We have been very concerned about the inputs into these systems that often produce racially-discriminatory results." Typically trained on a majority of white faces, AI often incorrectly identifies people, particularly those of color, and therefore shouldn’t serve as the only means of identifying a suspect, former FBI agent Doug Kouns says. Privacy breaches are also a concern, critics say.
Canada has already outlawed Clearview AI, declaring that it violates privacy rights, and has ordered the company to remove Canadian faces from its database. Now, its use is also being challenged in Illinois and California.
Are the benefits of facial recognition technology worth the general public’s loss of privacy—and possibly even control?