Behind the scenes of algorithms on the internet and how they are arranged

Digital Racism through AI software

Reflection of Biases

Biases are inescapable, even in artificial intelligence. AI software isn't immune to reproducing racist practices, and the internet is full of examples of racist AI output. AI-generated artwork where the women are fair-skinned, or where black women and other people of color are rendered with light skin and wavy 2c hair, are a few examples of these racist results. This is not surprising once you take into account the biases of users who enter racist search terms into search engines. Google, for example, had its own controversy over seven years ago, when a black high school student pointed out the difference in results between a search for "three black teenagers" and one for "three white teenagers." Google then pointed out that its search results reflect the biases in what people search for online.

A screenshot of the image results that come up when one searches "three black teenagers" on Google. Images of mugshots of black teenagers appear, with the few exceptions being a magazine cover and a collage of three pictures of a black teenage girl.

The anti-blackness of AI software

Joy Buolamwini, a digital activist, brings awareness to the biases that artificial intelligence perpetuates. Her own experience with AI software included facial analysis tools that failed to recognize her face unless she wore a white mask. That experience drove her pursuit of justice in the use of digital software, and her research, including the Gender Shades study, found that commercial facial analysis systems misclassify darker-skinned women at far higher rates than lighter-skinned men. The use of AI software in policing and legal contexts is scary. The surveillance of people's bodies has resulted in further harm to innocent civilians, as evident in the South Wales case, where the police department stored the faces of over 2,400 misidentified innocent people without their knowledge or consent and reported a false-positive facial identification rate of 91%. Another case is the use of AI to rate videos of interview candidates and determine who should be hired.
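
For readers wondering how a figure like that 91% is computed: in this kind of reporting, the false-positive rate means the share of all matches the system flagged that turned out to be wrong. Here is a minimal sketch of the arithmetic in Python. The counts below are illustrative placeholders chosen to land near the reported figure, not the actual South Wales numbers.

# Minimal sketch of how a false-positive rate like the one above is derived.
# The counts are hypothetical placeholders, NOT the actual South Wales figures.

def false_positive_rate(false_positives: int, total_flagged: int) -> float:
    """Share of all system-flagged matches that were misidentifications."""
    if total_flagged == 0:
        raise ValueError("no matches were flagged")
    return false_positives / total_flagged

# Example: if a system flags 2,640 faces and 2,400 of those flags are
# misidentifications, roughly 91% of its alerts are false alarms.
print(f"false-positive rate: {false_positive_rate(2400, 2640):.0%}")  # ~91%

The point of the arithmetic is that the rate is measured against what the system itself flagged: even if innocent people are a small fraction of everyone scanned, nearly every alert officers acted on was wrong.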

These cases of anti-blackness in AI present many problems that will persist if we treat them as inevitable and normal. Facial analysis technology should NEVER end up in the hands of law enforcement, and using AI to determine viable candidates for a hiring process is abysmal and lazy.

