Facial Recognition Software: A Tool For Police Surveillance

Facial recognition software (FRS) has gained significant traction over the last decade as a viable security measure for cell phones and computers. However, the potentially dangerous implications of its use by government and police have sparked significant controversy. This conversation came to the forefront after George Floyd, an unarmed Black man, was killed by a Minneapolis police officer in May 2020. Black Lives Matter protesters around the world drew attention to the police’s use of facial recognition technology to surveil demonstrations, exposing how this technology is being used in far more intrusive ways than mere device security. Its use presents crucial policy questions regarding Americans’ personal privacy and civil liberties.

Silicon Valley companies have benefited from the commodification of this type of surveillance by contracting with law enforcement. According to a 2016 study by the Georgetown Law Center on Privacy and Technology, as many as one in four U.S. police departments have access to facial recognition, and its use is largely unregulated. The global response to police brutality and FRS regulatory concerns motivated big players such as IBM, Amazon, and Microsoft to announce that they would temporarily halt the sale of this technology to U.S. police departments until federal laws are enacted. Microsoft President Brad Smith shared in a Politico interview, “We won’t allow our technology to be used in any manner that puts people’s fundamental rights at risk.” Despite the announcements by these household names, smaller companies stated they would continue working with law enforcement agencies.

Clearview AI is one of these small but prominent companies in the FRS industry. More than 600 law enforcement agencies used its facial recognition app in 2020 alone. Clearview’s database holds over three billion images scraped from popular sites like Facebook and YouTube, and New York Times analysts believe the app has the potential not only to identify any person you pass on the street, but also to reveal their home address, occupation, and personal contacts. Steering away from these privacy considerations, Clearview has instead emphasized the ways its technology is a crime-solving tool that aids law enforcement in targeting human traffickers. CEO Hoan Ton-That released a statement saying, “Clearview AI believes in the mission of responsibly used facial recognition to protect children, victims of financial fraud and other crimes that afflict our communities.” Notably, Clearview’s technology has not been tested by the National Institute of Standards and Technology (NIST), a federal agency that reports on the performance of FRS.

While targeting criminals is useful in theory, evidence suggests that algorithmic bias is embedded within facial recognition software. NIST found that this bias disproportionately affects Black Americans, Asian Americans, and Native Americans, with Black women particularly affected. NIST researchers noted that these known flaws produce false positives at higher rates for people of color, which can result in detrimental false accusations by the police. This was the case for Robert Julian-Borchak Williams, who was arrested because of a faulty facial recognition match used by Detroit police. Consequently, this disruptive technology has led policy experts, advocates, and Big Tech to turn to Congress, emphasizing the need for federal oversight of commercial use of facial recognition.

Some states and cities are taking matters into their own hands and implementing stringent restrictions on this technology. Most recently, Minneapolis voted to ban the use of facial recognition, including Clearview’s software, by its police department. This was a historic decision for the city, and it is one of numerous reforms to its law enforcement since George Floyd’s killing. Berkeley, Oakland, San Francisco, and Boston, among other cities, have passed new legislation restricting the technology’s use. Portland has taken the most aggressive approach, banning even private companies from using FRS in public spaces, citing its algorithmic biases against Black people, women, and the elderly.

With facial recognition at the forefront of 2021’s agenda, companies and civil rights organizations have urged President Biden to provide federal oversight of the industry—and soon. With no official response from President Biden yet, it is uncertain what the future of facial recognition will look like. However, algorithmic flaws within FRS are not engineering issues exclusive to Silicon Valley; they have far-reaching effects on personal privacy and civil liberties.