Bias in AI
Our world is full of bias. Everyone has their own set of opinions on every topic. In some cases that is natural, but in other cases it can be very harmful, especially in AI (Artificial Intelligence). AI systems are filled with bias, which can seriously harm people, especially people who are not favored by society.
Recently I watched a TED Talk by an English game designer, Mata Haggis-Burridge. He mentioned that he took a hazard perception test in the UK as part of acquiring a driver's license. After finishing the test, he realized that two-thirds of it had been disqualified. Apparently he had a strange pattern in his clicks: he had clicked on certain answers faster than what the algorithm considered normal, so the machine concluded he was cheating. This is a small example of how AI can be flawed. Today AI is affecting billions of lives across different social and economic cross-sections.
Computer science (CS) is often enlisted to reduce human bias in the process of recruiting employees. But a study found that the algorithms themselves can be just as biased. According to the study, a facial recognition system scanned a white male face accurately 99% of the time, yet made a mistake recognizing a dark-skinned female face 35% of the time.
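This kind of disparity is found by auditing a system's error rate separately for each demographic group rather than looking at one overall accuracy number. A minimal sketch of that idea, using entirely made-up audit records (the group names and data here are illustrative, not from the study itself):

```python
# Illustrative sketch: compute the error rate of a classifier
# separately for each demographic group. All records are hypothetical.
from collections import defaultdict

# Each record: (group, true_label, predicted_label)
predictions = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "female", "male"),    # a misclassification
    ("darker-skinned female", "female", "female"),
]

def error_rate_by_group(records):
    totals, errors = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    # Per-group error rate reveals disparities a single overall
    # accuracy figure would hide.
    return {g: errors[g] / totals[g] for g in totals}

rates = error_rate_by_group(predictions)
```

A single overall accuracy for this toy data would be 75%, but disaggregating shows a 0% error rate for one group and a 50% error rate for the other, which is exactly the shape of the gap the study reported.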
Different Ways AI Can Be Biased
Algorithms are used in courtrooms too, where they aid in risk assessments of which defendants are more likely to commit future crimes. It was found that the algorithms falsely predicted black defendants to be future criminals at twice the rate of white defendants. Racial bias is one of the big problems with AI and the algorithms these machines use. There are many different points of view on this topic, and people are still searching for a solution.
Another example of bias and discrimination in AI comes from a company we know and love, Amazon. Amazon built a resume screener to decide which applicants to interview for different positions. It had many biases built in. One of them was that it penalized applications containing the word "women's." Amazon was already used to interviewing more men than women, and that data got fed into the AI system, causing the system to be more biased towards men.
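The mechanism here is worth seeing concretely: a model trained on past hiring decisions simply learns whatever patterns those decisions contain, including unfair ones. The following toy sketch (all resumes and outcomes are invented, and this is a drastically simplified word-scoring model, not Amazon's actual system) shows how a word like "women's" can inherit a low score purely from biased historical labels:

```python
# Toy illustration: a "screener" that scores words by the average
# hiring outcome of past resumes containing them. All data is hypothetical.
from collections import defaultdict

# (resume text, 1 = historically hired, 0 = historically rejected)
historical = [
    ("captain of chess club", 1),
    ("software engineering intern", 1),
    ("women's chess club captain", 0),     # biased past decisions
    ("women's coding society lead", 0),
]

def learn_word_scores(data):
    sums, counts = defaultdict(float), defaultdict(int)
    for text, hired in data:
        for word in set(text.split()):
            sums[word] += hired
            counts[word] += 1
    # A word's score is the average outcome of resumes containing it,
    # so words common in rejected resumes get penalized automatically.
    return {w: sums[w] / counts[w] for w in sums}

scores = learn_word_scores(historical)
```

In this sketch the word "women's" ends up with the lowest possible score even though nothing about the word itself relates to job performance; the model is only echoing the bias baked into its training labels.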
We can now slowly start to grasp why it matters to understand the complex interactions between AI and bias. The key questions concern the algorithmic accountability of the humans who created these systems, and the data used to create and train the AI.
According to Kaveh Waddell, an Axios reporter, as algorithms become more complicated, AI and machine learning become less transparent. It is difficult to maintain transparency, since humans are really opinionated, but it is important to make sure algorithms are not behaving in a biased way. Without diversity, an algorithm would behave in a prejudiced way, just as biased as a person would.
My sources:
Smith, C., 2020. Dealing With Bias In Artificial Intelligence (Published 2019). [online] Nytimes.com. Available at: <https://www.nytimes.com/2019/11/19/technology/artificial-intelligence-bias.html> [Accessed 29 November 2020].
"Tackling bias in artificial intelligence (and in humans)," on the growing use of artificial intelligence in sensitive areas, including hiring, criminal justice, and healthcare.
Amazon information from the presentation at https://nikmarda.com/.