In a world where bias is everywhere, it is impossible to achieve true neutrality. This is true even for seemingly neutral developments in technology, which have no inherent concept of bias. Because we live in a world fraught with systemic inequality and racism, the machines humans have created are ingrained with the very biases that perpetuate racist and discriminatory ways of thinking. In this article, I discuss the bias that exists in modern Artificial Intelligence (AI) technology, the consequences of that bias, and possible solutions offered by experts in the field.
There is no doubt that AI has its benefits. Many of its creators and adopters saw the “potential to stimulate economic growth [by way of] increased productivity at lower costs, a higher GDP per capita, and job creation.” AI is also regularly used in the education, housing, employment, and credit industries. Furthermore, AI has been used to “revolutionize our approach to medicine, finance, business, media and more.”

While all of this is positive news for world advancement, it would be irresponsible to ignore the bias that already exists in these industries, which in turn creates bias in the AI systems they implement. Some experts anticipated that AI would inherit our discriminatory practices, which harm people in marginalized communities on a fundamental level. In her article “How Artificial Intelligence Can Deepen Racial and Economic Inequities,” Olga Akselrod shares that “rather than help eliminate discriminatory practices, AI has worsened them — hampering the economic security of marginalized groups that have long dealt with systemic discrimination.” Khari Johnson gives an example of this bias in a Wired article, “A Move for ‘Algorithmic Reparation’ Calls for Racial Justice in AI,” describing how “algorithms used to screen apartment renters and mortgage applicants disproportionately disadvantaged Black people [due to the] historical patterns of segregation that have poisoned the data on which many algorithms are built.” It feels impossible to create neutral technology when bias has seeped into the DNA of our culture. The Federal Trade Commission writes that a further consequence of this bias in healthcare is that “[technological advancements] that [were] meant to benefit all patients may have [also] worsened healthcare disparities for people of color,” and as a result, “the economic and racial divide in our country will only deepen.” Every industry that uses AI is affected by this bias and must do the work not only to understand how it affects communities of color, but also to take active steps to curtail its effects.
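To make that mechanism concrete, here is a minimal sketch in Python of how a model trained on historically biased approval decisions can reproduce that bias through a proxy feature, even when the protected attribute itself is never shown to the model. The data, feature names, and numbers below are entirely synthetic assumptions of my own, not drawn from any of the studies cited above:

```python
# A minimal, hypothetical illustration: a model trained on biased
# historical approval decisions reproduces that bias through a proxy
# feature (here, "neighborhood"), even though the protected attribute
# is never given to the model. All data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected group membership (0 or 1). Because of historical
# segregation, group strongly predicts neighborhood.
group = rng.integers(0, 2, n)
neighborhood = np.where(rng.random(n) < 0.9, group, 1 - group)

# Income is drawn identically for both groups: equally qualified.
income = rng.normal(50, 10, n)

# Historical decisions: same income threshold, but applicants from
# neighborhood 0 were approved far less often (the biased history).
approve_rate = np.where(neighborhood == 1, 0.8, 0.4)
historical_approval = (income > 45) & (rng.random(n) < approve_rate)

# Train only on income and neighborhood: no protected attribute.
X = np.column_stack([income, neighborhood])
model = LogisticRegression().fit(X, historical_approval)
pred = model.predict(X)

# The model's predicted approval rates still split along group lines.
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
```

Even though the two groups are equally qualified by construction, the model learns the segregated neighborhood pattern in the historical data and carries the disparity forward. This is what it means for historical patterns to “poison” the data an algorithm is built on.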
Fortunately, experts have suggestions to address this problem. Before turning to those solutions, I would be remiss not to acknowledge that the vast majority of people in the technology industry are cisgender white men. Rashida Richardson, one of the few women of color in the tech space, explained that “AI can benefit from adopting principles of transformative justice such as including victims and impacted communities in conversations about how to build and design AI models and make repairing harm part of processes.” Richardson advocates for a kind of inclusion that directly improves the algorithms themselves. I would go further: the makers and users of AI technology should actively and purposefully ask marginalized people to share their expertise and experience at every level, from participating in intentional focus groups to becoming co-creators and co-implementers of the technology.
Other experts in the field offer their own solutions. Andrew Burt argues that the first step to fixing this problem is “looking at a host of legal and statistical precedents for measuring and ensuring algorithmic fairness.” By this, he means examining U.S. law in areas like civil rights, housing, health care, and employment to understand how those sectors have tackled discrimination, and using their approaches as a model for AI. Burt acknowledges the complexity of completely eliminating discrimination, but he urges industries and businesses to 1) carefully monitor and document all their attempts to reduce algorithmic unfairness and 2) generate clear, good-faith justifications for using the models they eventually deploy. A common concern with these solutions is that they leave businesses that use AI free to choose whether or not to implement them. Fortunately, the Federal Trade Commission (FTC) is working to tackle this through laws such as Section 5 of the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act, each of which addresses the issue from a legal standpoint. The FTC effectively puts weight behind Burt’s solutions by making it clear that if AI implementers fail to hold themselves accountable, they should be prepared for the FTC to do it for them.
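One concrete example of the kind of legal and statistical precedent Burt points to is the “four-fifths rule” from U.S. employment law (the EEOC’s Uniform Guidelines), under which a selection rate for any group below 80% of the most-favored group’s rate is generally treated as evidence of adverse impact. The sketch below, my own illustration in Python with invented data and hypothetical function names, shows how a business could compute and monitor that ratio over a deployed model’s decisions:

```python
# A hypothetical monitoring check based on the "four-fifths rule"
# from U.S. employment law: flag a model whose selection rate for
# any group falls below 80% of the most-favored group's rate.
# The decision/group data here is invented for illustration.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group_label, was_selected) pairs."""
    selected, total = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        total[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / total[g] for g in total}

def four_fifths_check(decisions, threshold=0.8):
    """Return (ratio, passes) comparing lowest to highest group rate."""
    rates = selection_rates(decisions)
    ratio = min(rates.values()) / max(rates.values())
    return ratio, ratio >= threshold

# Example audit over a batch of (group, approved?) model decisions.
batch = [("A", True)] * 60 + [("A", False)] * 40 \
      + [("B", True)] * 35 + [("B", False)] * 65
ratio, passes = four_fifths_check(batch)
print(f"impact ratio = {ratio:.2f}, passes four-fifths rule: {passes}")
# -> impact ratio = 0.58, passes four-fifths rule: False
```

Running and documenting checks like this over time is one practical way to satisfy Burt’s two recommendations: it produces a record of monitoring for algorithmic unfairness and forces a good-faith justification whenever a deployed model fails the threshold.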
In the end, the best solution is Richardson’s proposal to invite victims and impacted communities into the conversations around building and designing equitable AI models. As someone affected by the biases in AI technology herself, Richardson understands the importance of inclusion firsthand. She believes this involvement is vital if experts hope to devise creative solutions that spare marginalized groups as much harm as possible.