Person lying inside an MRI machine, wearing a mask that covers their face. Lasers pass over their head and face.

Racial Recognition


Facial recognition technology is familiar to many of us; if you have an iPhone model from the past several years, your phone probably uses your face to unlock it. Perhaps you thought the technology was creepy when it first rolled out, but now it's commonplace, and your perception of it has likely changed. The tech is far from perfect, though. In fact, like many things we've been discussing, it has racial prejudice baked into it, and some of its applications are downright disturbing.

“I Don’t See [People of] Color”

You’re probably familiar with that old, highly problematic saying “I don’t see color,” right? Well, it turns out that some facial recognition technology literally doesn’t recognize people with dark skin tones. Joy Buolamwini discusses the matter in her op-ed for the New York Times, and it’s something she’s been researching ever since she personally experienced this AI discrimination as a college student. Buolamwini points to some disturbing, well-known examples, like Google’s photo app labeling Black people as gorillas and tech that works great for white men but not so great for everyone else, but the issue goes beyond that.

Prejudiced Recognition

Take the company HireVue. Their products allow employers to interview job applicants on camera and then use A.I. to rate each candidate’s video according to verbal and nonverbal cues. HireVue’s aim is to reduce bias in hiring, but due to what Buolamwini calls “the coded gaze,” these products might just be perpetuating biases. If most previously hired candidates were white men, the A.I. will be looking for verbal and visual cues common to white men, excluding everyone else in the process. While reading this, I also thought about autistic people, who may not use the same verbal and body language cues as neurotypical people. Even if such a person does well in their job interview and is otherwise qualified for the job, the A.I. would probably give them a low score, and the employer would pass them by.

Back to Surveillance

And our discussion loops back to surveillance (a frequent topic in some of our previous posts), because the faces of half the adults in the U.S. are currently in facial recognition databases. The police can search these networks without warrants. And, of course, these searches rely on tech that hasn’t been tested for accuracy on non-white groups. Innocent people can be misidentified and accused of crimes, which happened to Robert Julian-Borchak Williams in 2020. And even if facial recognition gets better, police can still use it for some pretty unsavory things, like arresting people who attend protests. If the fact that your face is stored somewhere and that A.I. can use it to accuse you of a crime isn’t disturbing, then I don’t know what is.
