Racism And AI: Here’s How It’s Been Criticized For Amplifying Bias

Published 10 months ago
By Arianna Johnson, Forbes

TOPLINE

Although AI has become popular in recent months for its ability to perform advanced tasks and make life easier, there's also increasing concern that it can be used in harmful ways, amplifying racial bias in fields like healthcare, law enforcement and big tech.

KEY FACTS

Joy Buolamwini, a researcher at MIT, discovered that some facial analysis software was unable to detect her face until she covered her dark skin with a white mask; she attributed this to the AI systems' lack of diverse training data, as they're largely trained on white faces.

A major U.S. technology company claimed its facial recognition accuracy rate was 97%, but through her research, Buolamwini found the company's training set was more than 77% male and more than 83% white.
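The kind of audit Buolamwini ran can be approximated in a few lines of code. Below is a minimal, hypothetical Python sketch (the records and field names are illustrative assumptions, not the actual benchmark data) that tallies a face dataset's demographic composition:

```python
from collections import Counter

# Hypothetical metadata for a face-recognition training set: one record
# per image. Field names and values are illustrative assumptions.
training_set = [
    {"gender": "male", "skin_tone": "lighter"},
    {"gender": "male", "skin_tone": "lighter"},
    {"gender": "female", "skin_tone": "darker"},
    # ... thousands more records in a real audit
]

def audit(records, field):
    """Print each group's share of the dataset for one attribute."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    for group, n in counts.most_common():
        print(f"{field}={group}: {n / total:.1%}")

# On a set skewed like the one Buolamwini found, this would print
# roughly male: 77% and lighter: 83%.
audit(training_set, "gender")
audit(training_set, "skin_tone")
```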


A report published in the Journal of Biometrics and Biostatistics found that Black women between the ages of 18 and 30 are the demographic with the poorest facial recognition accuracy.

A 2022 study conducted by institutions including Johns Hopkins University and the Georgia Institute of Technology programmed AI-trained robots to scan blocks bearing the faces of people of different races.

After the robots scanned the faces, they were tasked with designating which blocks depicted criminals; they consistently labeled the blocks with Black faces as criminals.

NEWS PEG

In 2015, Google Photos came under fire after its AI-based search tool pulled images of Black people when the term "gorilla" was searched. After the controversy came to light, Google stopped its software from categorizing any image as a "gorilla" and promised to fix the problem. However, a recent New York Times report, after saving pictures of gorillas and other primates to the service, found the issue hadn't been fixed: Google Photos was still programmed never to label any image as a gorilla, and searches for any primates returned no results.


CRUCIAL QUOTE

“AI is just software that learns by example,” Reid Blackman, who has advised companies and governments on digital ethics and wrote the book "Ethical Machines," told CNN. “So, if you give it examples that contain or reflect certain kinds of biases or discriminatory attitudes . . . you’re going to get outputs that resemble that.”
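Blackman's point, that a model trained on biased examples replays the bias, can be shown with a toy sketch. The Python snippet below is purely illustrative (the data and the "model" are invented for the example): it "learns" by counting historical outcomes per group, then predicts by frequency.

```python
from collections import defaultdict

# Invented historical examples: outcomes skewed by group membership.
history = [
    ("group_a", "hired"), ("group_a", "hired"), ("group_a", "rejected"),
    ("group_b", "rejected"), ("group_b", "rejected"), ("group_b", "hired"),
]

# "Training": count how often each outcome follows each group.
counts = defaultdict(lambda: defaultdict(int))
for group, outcome in history:
    counts[group][outcome] += 1

def predict(group):
    """Return the most frequent historical outcome for the group."""
    outcomes = counts[group]
    return max(outcomes, key=outcomes.get)

print(predict("group_a"))  # hired: the skew in the examples...
print(predict("group_b"))  # rejected: ...comes straight back out
```

Real models are vastly more complex, but the failure mode is the same: the output distribution mirrors the example distribution.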

CONTRA

According to a Pew Research poll, 53% of Americans who believe racial and ethnic bias in the workplace is a problem think the problem would improve if employers used AI more in the hiring process, suggesting AI could potentially help fight racial bias rather than amplify it.

DERMATOLOGY

According to a study published in Science Advances, diagnostic and decision-support tools used in dermatology are AI models trained on datasets of skin tones and skin conditions. Those datasets are limited, however, lacking images of both uncommon diseases and diverse skin tones.

A study published in the Archives of Dermatological Research found that Covid disproportionately affects non-white ethnic groups. Yet in the dataset used for dermatology training, 92% of the pictures of Covid skin lesions showed fairer skin tones, only 8% showed more olive skin tones and none showed dark skin tones. Because skin conditions present differently on different skin colors, diverse training data is essential.

For example, melanoma is rare in African Americans, according to a Journal of the American Academy of Dermatology report, but because training and education on dark skin tones in dermatology is poor, it's more likely to go undiagnosed in African Americans than in white Americans, leading to a drastically lower survival rate. A 2019 study published in the British Journal of Dermatology found 47% of dermatologists felt they weren't adequately trained to diagnose skin disease in people of color.
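One way this imbalance surfaces in practice is in per-group evaluation. Here's a minimal, hypothetical Python sketch (the predictions and labels are invented) that breaks a classifier's accuracy down by skin tone instead of reporting one headline number:

```python
from collections import defaultdict

# Invented evaluation records: (skin_tone, true_label, predicted_label).
predictions = [
    ("lighter", "lesion", "lesion"),
    ("lighter", "lesion", "lesion"),
    ("lighter", "healthy", "healthy"),
    ("darker", "lesion", "healthy"),   # a missed diagnosis
    ("darker", "lesion", "lesion"),
]

hits = defaultdict(int)
totals = defaultdict(int)
for tone, truth, pred in predictions:
    totals[tone] += 1
    hits[tone] += truth == pred

for tone in totals:
    print(f"{tone}: {hits[tone] / totals[tone]:.0%} accuracy")
# A model trained mostly on fair-skin images can post a strong overall
# score while performing far worse on the under-represented tones.
```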

BIG TECH COMPANIES

In 2022, Apple was sued over allegations that the Apple Watch's blood oxygen sensor was racially biased against people with darker skin tones. The Blood Oxygen app measures oxygen levels in the blood using sensors on the wrist, providing users with insights into their health, the app claims. The lawsuit points to the Food and Drug Administration's review of pulse oximeter technology, undertaken after the Covid pandemic increased its use in hospitals, which found Black patients were almost three times more likely than white patients to have dangerously low blood oxygen levels go undetected by pulse oximetry. In an October 2022 white paper, Apple acknowledged the potential difficulties of taking pulse oximetry measurements on darker skin, but claimed its device "automatically adjusts . . . to ensure adequate signal resolution across the range of human skin tones."

In 2020, Twitter released a photo cropping feature that automatically cropped and focused photo previews for a better viewing experience, especially when scrolling through Twitter's timeline. It was soon revealed that the tool favored white faces, centering them in the photo preview and only revealing Black faces when users clicked to view the full photo. Twitter then researched the algorithm and confirmed in 2021 that it indeed had a racial bias favoring white faces, leading the company to ditch the feature.
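Saliency-driven cropping of the kind Twitter described works, at its core, by scoring candidate crop windows and keeping the highest-scoring one. The sketch below is a simplified stand-in (the scores are dummies, not Twitter's model) showing where bias enters: the selection step inherits whatever preferences the saliency model learned from its training data.

```python
from typing import Callable

Box = tuple[int, int, int, int]  # (x, y, width, height)

def best_crop(candidates: list[Box], saliency: Callable[[Box], float]) -> Box:
    """Keep the crop window the saliency model scores highest."""
    return max(candidates, key=saliency)

# Dummy scores standing in for a learned saliency model. If lighter
# faces systematically score higher, the "best" crop centers them and
# pushes darker faces out of the preview.
scores = {
    (0, 0, 100, 100): 0.9,    # window containing the lighter face
    (100, 0, 100, 100): 0.4,  # window containing the darker face
}
print(best_crop(list(scores), lambda box: scores[box]))  # (0, 0, 100, 100)
```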


LAW ENFORCEMENT

Facial recognition software is used in a number of different settings, including on smartphones, in police investigations, in airport passenger screening and in employment decisions. However, the AI behind facial recognition has repeatedly been shown to carry racial bias. According to Harvard's Science in the News, police facial recognition software is trained on mugshot data, in which Black people are overrepresented. Racist policing strategies contribute to this: the New York Police Department, for example, maintains a database of "gang-affiliated" people that is 99% Black and Latino, even though there is no requirement to prove the people listed are gang affiliated, which incentivizes false reports.

Research published on ScienceDirect found that law enforcement agencies using AI-based facial recognition technology disproportionately arrest Black people because of their overwhelming presence in the training data. According to the Algorithmic Justice League, "Face surveillance threatens rights including privacy, freedom of expression, freedom of association and due process." Its controversial use in law enforcement has led several cities across the country to ban police from using facial recognition.
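That mugshot feedback loop is easy to simulate. The toy Python sketch below uses invented numbers: when new match-driven arrests are drawn in proportion to each group's share of the existing database, the historical skew is replayed year after year instead of correcting toward the actual population split.

```python
import random

random.seed(0)

# Assumed starting database: group_a over-represented 70/30 relative
# to, say, a 50/50 population. All numbers here are invented.
mugshots = {"group_a": 700, "group_b": 300}

for year in range(1, 6):
    # Each match-driven arrest is drawn in proportion to each group's
    # current share of the database, so existing skew shapes new entries.
    for _ in range(100):
        group = random.choices(list(mugshots), weights=list(mugshots.values()))[0]
        mugshots[group] += 1
    share = mugshots["group_a"] / sum(mugshots.values())
    print(f"year {year}: group_a share = {share:.1%}")
# The roughly 70/30 imbalance persists: the system perpetuates its own history.
```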

Israeli authorities are relying on facial recognition to track Palestinians and restrict their passage through key checkpoints, according to a report by Amnesty International. The software, known as Red Wolf, scans Palestinians' faces at checkpoints and uses a color-coded system of green, yellow or red to determine whether to let them pass, stop them for questioning or deny them entry. Amnesty argues this is just the latest example of Israel restricting Palestinians' freedom of movement, as the software heavily targets Palestinians, and the report calls the system an "automated apartheid."

China has also used AI and facial recognition to identify minorities. The country came under fire for using the technology to racially profile the Uyghurs, a predominantly Muslim minority group, the New York Times reports. The facial recognition technology is embedded in China's widespread network of surveillance cameras and looks exclusively for Uyghurs based on their appearance, keeping a record of their movements. The U.S. even sanctioned SenseTime, China's largest facial recognition firm, for its alleged role in the surveillance of the Uyghurs.
