How Can We Mitigate Bias In AI?

By Mahbuba Sumiya

Detroit, Mich.

Facial recognition software, used by millions, routinely fails to identify people of color correctly. This technology was meant to provide accurate results, but alarmingly, “nearly 40 percent of the false matches by Amazon’s tool … involved people of color,” according to Queenie Wong, a staff reporter for CNET News. Amazon’s face-ID system classified Oprah Winfrey as male and wrongly matched 28 members of Congress to a mugshot database, and facial recognition misidentified a Brown University student as a suspect in the Sri Lanka bombings.

Algorithms absorb society’s racial biases. They are trained on millions of pictures of human faces; if those training photos show only white faces, the resulting system struggles to recognize anyone else. Artificial intelligence (AI) can only be as smart as the data it is trained on. An AI trained on millions of faces of people of color would have no trouble recognizing those faces accurately.
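To see the training-data problem in miniature, consider the following illustrative Python sketch. Every feature, group label, and sample size in it is an assumption invented for the demonstration, not any company’s real system: a toy classifier trained on 9,500 faces from one group and only 500 from another ends up far more accurate on the group it saw most.

# A toy, illustrative sketch (no real vendor system): all features, group
# labels, and sample sizes below are invented assumptions for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=42)

def sample(group, n):
    # Synthetic 2-D "face features"; the two groups have different
    # feature-to-label relationships, standing in for different appearances.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0) if group == "A" else (X[:, 1] > 0)
    return X, y.astype(int)

# Skewed training set: 9,500 faces from group A, only 500 from group B.
Xa, ya = sample("A", 9500)
Xb, yb = sample("B", 500)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# A balanced test set reveals the gap the skewed training data created.
for group in ("A", "B"):
    Xt, yt = sample(group, 5000)
    print(f"group {group}: accuracy = {model.score(Xt, yt):.1%}")

The exact numbers vary from run to run, but the pattern holds: the well-represented group scores high, while the underrepresented group lands close to coin-flip accuracy.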

Joy Buolamwini, founder of the Algorithmic Justice League, researches the social implications of artificial intelligence and has uncovered bias in the AI services of companies like Microsoft, IBM, and Amazon. As a graduate student at MIT, Buolamwini worked with facial analysis software for a class project and found that it detected her light-skinned friend’s face better than her own. Only when Buolamwini put on a white mask did the system detect her face, an experience that led her to coin the term “the coded gaze” for algorithmic bias.

Racism exists in computer algorithms because it exists in the values of the people who build them. If people did not care how the person next to them looked, racism would not still be America’s biggest problem. Wrongly arresting people because of a false match is not ethical. If people are fighting for justice, they must fight for justice in everything. Racial justice must mean algorithmic justice.

Plus, even if algorithms are trained on more representative databases, accuracy continues to be an issue. The National Institute of Standards and Technology (NIST) reported in its study of demographic effects that Asian and African American faces had markedly higher false positive rates than white faces, even across a test set of 8.49 million people. Will AI ever be fair to people of color?
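Part of the answer lies in how that unfairness is measured. A false positive in face identification means the system says two photos of different people are a match, exactly the kind of error that can put the wrong person in a mugshot lineup. Here is a minimal sketch of computing that rate separately for each group; the match decisions and group labels are made-up placeholders, not NIST’s data.

# Sketch of the metric at issue: false positive rate computed per group.
# The decisions and group labels are placeholders, not NIST data.
import numpy as np

predicted_match = np.array([1, 0, 1, 1, 0, 1, 0, 1])  # 1 = system said "match"
actual_match    = np.array([1, 0, 0, 1, 0, 0, 0, 1])  # 1 = truly same person
group           = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

for g in np.unique(group):
    mask = group == g
    # False positives: pairs of *different* people the system wrongly accepted.
    fp = np.sum((predicted_match == 1) & (actual_match == 0) & mask)
    negatives = np.sum((actual_match == 0) & mask)
    print(f"group {g}: false positive rate = {fp / negatives:.2f}")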

For those of us growing up in a generation where algorithms are more and more prevalent, machine bias is hard to recognize, and if left unchecked it will keep amplifying inequality for future generations. We must train AI to be fair and neutral. But with the current state of the field, this may prove difficult. Computer science tends to attract more men than women; only about 25 percent of computer scientists in the United States are women. Minority racial groups are also underrepresented in the tech industry. Having more diverse points of view in this field can keep us from training computers with biased data. In society, a woman might be associated with teaching, childcare, or nursing, but we should not build those assumptions into an algorithm.

Luckily, some businesses are taking small steps to measure and minimize bias. One example is IBM’s AI Fairness 360, an open-source toolkit that lets developers examine, report, and mitigate bias in machine learning models, according to Macy Bayern, an associate staff writer for TechRepublic.
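Bayern describes the toolkit at a high level; in practice, AIF360’s documented workflow looks roughly like the sketch below. The German credit dataset and the ‘sex’ attribute are the toolkit’s own stock example, not anything from this article, and the data file must be downloaded separately per AIF360’s instructions.

# Rough sketch following AIF360's documented reweighing example.
# (pip install aif360; the German credit data file is downloaded separately.)
from aif360.datasets import GermanDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

dataset = GermanDataset()      # labeled credit decisions (stock demo data)
privileged = [{"sex": 1}]      # group encodings defined by the dataset
unprivileged = [{"sex": 0}]

# Step 1: examine -- measure bias before mitigation. mean_difference() is the
# gap in favorable-outcome rates between the two groups (0.0 means parity).
before = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("mean difference before:", before.mean_difference())

# Step 2: mitigate -- reweigh training examples so both groups carry equal
# effective weight, then re-measure on the transformed dataset.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_rw = rw.fit_transform(dataset)
after = BinaryLabelDatasetMetric(
    dataset_rw, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("mean difference after: ", after.mean_difference())

Reweighing adjusts how much each training example counts so that privileged and unprivileged groups carry equal weight, pushing the measured gap toward zero before a model is ever trained.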

After all, the only way we can move forward with AI fairly is by bringing diverse people into the tech industry.