How We Made AI As Racist As Humans

Andriy Kusyy
3 min read · Aug 7, 2021

Voice and text recognition AI systems have come under increasing criticism: a study by researchers at the University of Massachusetts found that voice assistants, chatbots, and other AI-based technologies often fail to recognize the speech characteristics of ethnic minorities. [1] At first glance the problem may seem minor, but as these systems are deployed everywhere, ignoring it deepens discrimination against the affected groups.

Credit: https://tcrn.ch/3s1O1o4

The study, reported under the headline "AI programs are trained to ignore African American speech," was conducted by University of Massachusetts professor Brendan O'Connor and his graduate student Su Lin Blodgett. To probe bias in speech and text processing tools, the researchers compiled a corpus of 59.2 million Twitter posts containing African American slang and jargon. [2] They then ran this material through popular natural language processing services. It turned out that these tools often failed to parse what African Americans had written; one service even classified the posts as Danish. The researchers also documented problems when the same systems were used to analyze the meaning and sentiment of text written by African Americans.
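
To make the setup concrete, here is a minimal sketch of that kind of audit, assuming the open-source langdetect library as the language identifier. The sample posts and group labels are invented for illustration; they are not the study's data, and the labels the detector returns will vary with the text and the library version.

```python
# Minimal sketch of a language-identification disparity audit:
# run an off-the-shelf detector over English-language posts from two
# groups and compare how often each group's text is mislabeled as
# non-English. Posts below are made up for illustration.
from collections import Counter

from langdetect import DetectorFactory, detect

DetectorFactory.seed = 0  # fix langdetect's internal sampling for repeatability

posts_by_group = {
    "standard_english": [
        "I am heading to the game tonight with my friends",
        "This weather has been really nice all week",
    ],
    "dialectal_english": [  # hypothetical dialect-heavy spellings
        "we outchea tryna get dis bread fr fr",
        "iont even kno y he trippin like dat",
    ],
}

for group, posts in posts_by_group.items():
    labels = Counter()
    for text in posts:
        try:
            labels[detect(text)] += 1
        except Exception:  # langdetect raises if it finds no usable features
            labels["unknown"] += 1
    misread = sum(n for lang, n in labels.items() if lang != "en")
    print(f"{group}: detected languages {dict(labels)}; "
          f"{misread}/{len(posts)} not recognized as English")
```

A real audit, like the one in the study, would run this over millions of posts and compare misclassification rates between dialect groups rather than eyeballing a handful of examples.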

Credit: https://bit.ly/37oB7r5

The problem of bias in artificial intelligence algorithms is not new. In May 2016, ProPublica published an investigation that revealed serious flaws in COMPAS (Northpointe Inc.), a program used in US courts to inform parole decisions. The analysis was based on risk scores for more than 10,000 defendants in Broward County, Florida. The authors' fears of discrimination against black defendants were confirmed: among those who did go on to commit a second crime within two years, the system had labeled white defendants "low risk of reoffending" almost twice as often as black ones (47.7% versus 28%). The picture was mirrored for those who did not commit a second crime within two years of serving their sentence but were still penalized by COMPAS: 44.9% of African Americans in this group had been labeled "high risk of recidivism," versus 23.5% of white defendants. [3]
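
In fairness terms, these two gaps are group-wise false negative and false positive rates. Here is a minimal sketch of how they are computed, on made-up records rather than ProPublica's actual Broward County data:

```python
# Each record: (group, labeled_high_risk, reoffended_within_two_years).
# "Missed reoffender" rate mirrors the 47.7% vs 28% figure above;
# "wrongly flagged" rate mirrors the 44.9% vs 23.5% figure.
records = [
    # (group, high_risk_label, reoffended) - illustrative values only
    ("white", False, True), ("white", True, True),
    ("white", True, False), ("white", False, False),
    ("black", True, True), ("black", True, True),
    ("black", True, False), ("black", False, False),
]

def error_rates(records, group):
    rows = [r for r in records if r[0] == group]
    reoffenders = [r for r in rows if r[2]]
    non_reoffenders = [r for r in rows if not r[2]]
    # Share of actual reoffenders the tool waved through as low risk.
    missed = sum(not r[1] for r in reoffenders) / len(reoffenders)
    # Share of people who never reoffended but were flagged high risk.
    flagged = sum(r[1] for r in non_reoffenders) / len(non_reoffenders)
    return missed, flagged

for group in ("white", "black"):
    missed, flagged = error_rates(records, group)
    print(f"{group}: labeled low risk despite reoffending {missed:.0%}, "
          f"labeled high risk without reoffending {flagged:.0%}")
```

The point of the investigation was exactly this comparison: even if overall accuracy looks similar across groups, the two kinds of error can fall very unevenly on each of them.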

Credit: https://bit.ly/3Ctu2n9

In 2017, researchers Kate Crawford (Microsoft Research) and Meredith Whittaker (Google) launched the AI Now initiative, which monitors artificial intelligence technologies for discrimination against social, gender, and racial groups. According to the pair, vendors tend to look away from the flaws in their algorithms, since fixing them can reduce model accuracy and slow down the machine learning pipeline. They nevertheless urge the industry not to ignore the problem: at stake are not only harmless chatbots, but also AI-driven decisions such as parole. [4]
