People are building AI to increase efficiency and reliability and to correct for human error. But somehow, bias and discrimination are still very prevalent in AI, especially racial bias.

Racial bias is very common in AI and can cause a lot of harm to people. One of the places it can be most harmful is in court. Courts are slowly turning to AI to help determine a person's sentence, but there are many complications to it.

Black defendants' risk scores are spread fairly evenly across the scale, while white defendants' scores cluster at the low end. The graph shows how likely a person is predicted to be to commit a crime, with 1 being the least likely and 10 the most likely. The fact that white defendants are far more likely to receive a score of 1 indicates bias against Black people.
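
If this chart comes from ProPublica's public COMPAS analysis (the same investigation cited later in this post), you can reproduce the score distributions yourself with a few lines of pandas. This is only a sketch, and it assumes ProPublica's compas-scores-two-years.csv file and its decile_score and race columns, from the compas-analysis repository on GitHub.

```python
# Sketch: tabulate COMPAS risk-score distributions by race, assuming
# ProPublica's public data (github.com/propublica/compas-analysis).
import pandas as pd

df = pd.read_csv("compas-scores-two-years.csv")

for race in ["African-American", "Caucasian"]:
    scores = df.loc[df["race"] == race, "decile_score"]
    # Share of defendants at each risk score, 1 (lowest) to 10 (highest)
    print(race)
    print(scores.value_counts(normalize=True).sort_index().round(2))
```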

Reasons:

Another reason for racial bias in AI systems is humans themselves. Black people are more likely to be wrongfully convicted than white people.

The chart below shows the percentage of people who are wrongfully convicted.

That data gets put into an algorithm a person is creating. The algorithm now has data showing that more Black people are in jail, so when the AI system detects that a person's race is Black, it concludes with higher probability that they will commit a future crime. The data a human feeds the machine is biased to begin with, which makes the whole machine biased. And this is only one way racial bias shows up in AI systems and their algorithms.
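
Here is a minimal toy sketch of that loop, using entirely synthetic data (none of these numbers come from a real court system): both groups reoffend at the same true rate, but one group gets convicted more often, and a model trained on those convictions rates that group as higher risk.

```python
# Toy illustration with synthetic data: biased labels in, biased model out.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)   # 0 or 1, a stand-in for race
reoffend = rng.random(n) < 0.30      # same true 30% rate for both groups
# Group 1 is additionally "convicted" without reoffending, mimicking the
# wrongful-conviction skew in the historical record.
label = reoffend | ((group == 1) & (rng.random(n) < 0.15))

model = LogisticRegression().fit(group.reshape(-1, 1), label)
for g in (0, 1):
    risk = model.predict_proba([[g]])[0, 1]
    print(f"group {g}: predicted risk {risk:.2f}")
# Prints a noticeably higher risk for group 1, even though the true
# reoffense rate is identical: the bias was in the labels.
```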

COMPAS is the AI system used in Wisconsin courts to predict the likelihood that convicted defendants will commit new crimes. ProPublica found that its risk assessments were biased against Black prisoners: the system incorrectly flagged Black prisoners as likely to commit another crime more often than white prisoners, 45% versus 24%. This caused Black prisoners to receive longer sentences.
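
That 45%-to-24% comparison is what's called a false positive rate: among defendants who did not go on to reoffend, how often was each group still flagged as likely to? Below is a rough sketch of that check, again assuming ProPublica's public CSV (its two_year_recid column, and their convention that decile scores of 5 and up count as medium/high risk).

```python
# Sketch: false positive rate by race, assuming ProPublica's COMPAS data.
import pandas as pd

df = pd.read_csv("compas-scores-two-years.csv")
df["flagged"] = df["decile_score"] >= 5   # medium/high-risk cutoff

for race in ["African-American", "Caucasian"]:
    no_recid = df[(df["race"] == race) & (df["two_year_recid"] == 0)]
    fpr = no_recid["flagged"].mean()      # flagged despite not reoffending
    print(f"{race}: {fpr:.0%} flagged but did not reoffend")
```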

As Stephen Buranyi wrote in the Guardian, “Computers don’t become biased on their own. They need to learn that from us.”

Facial recognition is seeing more and more use in court, which opens another avenue for racial bias. According to New Scientist, AI systems from IBM and Microsoft could “correctly identify a person’s gender from a photograph 99 percent of the time”, but for dark-skinned women the error rate reached 35%. This all boils down to the data that was fed to the algorithm: there were probably far more white men represented in the training data than women and minorities, which helped the machines detect white men more accurately.
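
This is also why researchers report accuracy per subgroup rather than one overall number. A toy sketch with made-up placeholder arrays (none of these values come from the real study) shows how a single overall score can hide the gap:

```python
# Toy sketch: overall accuracy can hide large per-subgroup gaps.
# All arrays below are hypothetical placeholders, not real study data.
import numpy as np

truth    = np.array([1, 1, 0, 0, 1, 0, 1, 0])   # true gender labels
pred     = np.array([1, 1, 0, 0, 0, 1, 0, 0])   # model's guesses
subgroup = np.array(["lighter-skinned men"] * 4 + ["darker-skinned women"] * 4)

print(f"overall accuracy: {(truth == pred).mean():.0%}")
for g in np.unique(subgroup):
    mask = subgroup == g
    print(f"{g}: accuracy {(truth[mask] == pred[mask]).mean():.0%}")
```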

More Ways AI Can Be Biased

Writing in Time magazine, Joy Buolamwini found that the companies she evaluated had error rates of no more than 1% for lighter-skinned men. For darker-skinned women, on the other hand, the errors rose to as much as 35%. Amazon's system couldn't even correctly classify Oprah Winfrey, as shown in the picture below.

The machines are being fed more information about white men in particular, and as a result they make false judgments about dark-skinned women.

The machines are being fed biased information, and there are many open questions about how to fix it. Some people ask whether we should give up accuracy to prevent biased results. Others ask whether it is even possible for machines to be unbiased, since humans are biased. In my next blog I will discuss different ways bias could be prevented and the different ways people are trying to make machines unbiased.

Racial bias in AI machines is costing people their jobs and their lives. Whole groups of people are going unrecognized and being harmed by it. For now, people have to keep working to find solutions for racial bias in AI.

Sources