Minneapolis is yet another city to take the necessary step of banning facial recognition software for police use. After a series of reports over the last five years provided overwhelming evidence that facial recognition software disproportionately harms minorities, several cities took this critical step in their long-term fight against aggressive policing. However, Minneapolis failed to enact policies prohibiting the use of this software for “non-police uses” or by “other local law enforcement who operate in the city.” How does this affect the prospective impact of a seemingly progressive policy?
What is facial recognition software?
Originally “developed in the early 1990s,” this software sifts through images of individuals from various sources, including “social media profiles and driver’s licenses,” and employs face detection algorithms to “extract features from the face . . . that can be numerically quantified” in order to determine the level of similarity between faces. These algorithms evolve continually through training, learning which are “the most reliable signals” of similarity.
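The pipeline described above, extracting numerical features from a face and scoring how alike two sets of features are, can be illustrated with a minimal sketch. Real systems use trained neural networks to produce these feature vectors (often called embeddings) from images; the toy four-number vectors and the cosine-similarity measure below are illustrative assumptions, not any vendor’s actual algorithm.

```python
import math

def cosine_similarity(a, b):
    """Score how alike two numeric feature vectors are (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical feature vectors; a deployed system would derive hundreds of
# dimensions from each face image using a trained embedding model.
probe    = [0.12, 0.87, 0.45, 0.33]  # unknown face to be identified
match    = [0.10, 0.91, 0.40, 0.30]  # same person, different photo (assumed)
nonmatch = [0.95, 0.05, 0.70, 0.88]  # a different person (assumed)

print(cosine_similarity(probe, match))     # high score: likely the same face
print(cosine_similarity(probe, nonmatch))  # lower score: likely different faces
```

In practice, a system compares the probe image against every entry in a database (driver’s license photos, social media scrapes, and so on) and flags candidates whose score exceeds a tuned threshold; the training the article describes adjusts which features the model extracts so that genuine matches score reliably higher than non-matches.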
Long before technology existed to automate this process, the same principles underlying facial recognition software were employed to confirm individual identities and “identify an unknown face.”
How do facial recognition software bans fit within a larger series of reforms against aggressive police tactics? What role does the private sector play?
The ban in Minneapolis arrives after a rather accelerated dismissal of City Council promises to fundamentally alter a corrupt policing system. These promises included defunding the police – terminology that was quickly retracted and explained away just months later. It is unclear what catalyst led to the proposal and widespread support for this ban after such an extensive retreat from previously planned reform. Some point to action taken by several cities in the fall of 2020 and encouragement from prominent organizations like the American Civil Liberties Union (ACLU) to prohibit this technology; others cite the “dystopian” nature of the tracking system and its propensity “for abuse.” At the very least, the ban is certainly a positive step toward eliminating systemic injustice.
While the ban may signal progress, it is crucial to acknowledge not only the delay in its implementation but also its limited scope: the software remains permissible for non-police uses and for other local law enforcement agencies and private companies operating in the city.
What are the legal implications of face recognition software use?
Many have found that facial recognition software encounters difficulties “identifying people of color,” which “could widen existing criminal justice disparities.” Additionally, the software draws on historical data from law enforcement sources, which have a proven history of racial bias. Finally, the software filters through all this data “without people’s consent,” raising issues of privacy violation.
Georgetown University’s “The Perpetual Line-Up” report identifies at least four major areas of legal concern: violation of Fourth Amendment rights; encroachment of First Amendment rights to free speech and assembly; disproportionate and unequal impact on racial and ethnic minorities; and invasion of privacy rights given the lack of transparency and requests for consent from the public.
Other individual cities have enacted similar bans. What is the effort at the federal level?
The National Biometric Privacy Act of 2020 in the Senate and the Stop Biometric Surveillance by Law Enforcement Act in the House were both introduced last year to curb, on a national scale, the use of racially biased and inaccurate software. While this proposed legislation illustrates eagerness for change at the federal level, tackling use of the technology by both the public and private sectors, the bills have yet to move past the initial introduction stage or gain critical bipartisan support.
What does the future of facial recognition software look like for our legal system?
Some believe “police use of face recognition is inevitable” and that it helps identify individuals who may “otherwise have gone undetected.” While there may be truth underlying these assertions, opponents argue the technology has yet to advance to a level at which the community can be confident the system will be used objectively, free of racial bias and problems of inaccuracy.
Do we as a society continue employing a system riddled with troubling defects, trusting in its constant evolution toward, ideally, flawless implementation? Or do we protect our country’s minorities and individuals’ rights to privacy by banning further use of this inherently problematic software at the federal level until bias has been eliminated and accuracy has demonstrably improved?