With AI playing a bigger role in everyday operations, it is important to understand the biases that can inform its decisions.

AI bias, also known as machine learning bias, occurs when human assumptions and prejudices skew training data or algorithm design, distorting a system’s outputs. If left unchecked, it can harm minorities and marginalise particular groups of people.

What is AI bias?

According to Associate Professor Niusha Shafiabady from Charles Darwin University’s Faculty of Science and Technology, “bias in AI occurs when the system votes against a specific race or group.”

“For example, if someone from a specific race or group uses an AI system, they may have a higher likelihood of being flagged as a potential criminal when compared to a different race. 

“This issue often occurs when AI is trained by uninformed engineers and, unfortunately, current higher education priorities mean universities are not doing a good job in educating real AI experts.”

AI models inherit the biases present in society. As they absorb information and data, the historical prejudices embedded within that data are absorbed too, shaping the way these systems interpret the world.
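To make this concrete, here is a minimal sketch of how a bias in the data can propagate into a model. The scenario and all numbers are invented for illustration: positive outcomes for one group are under-recorded in the training labels, and the trained model then misses genuinely at-risk members of that group more often.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)              # 0 = group A, 1 = group B
x = rng.normal(size=(n, 3))                # features, identical process for both groups
true_risk = 1 / (1 + np.exp(-(x @ np.array([1.0, -0.5, 0.8]))))
y_true = rng.random(n) < true_risk         # true outcomes

# Label bias: positive outcomes in group B are recorded only 60% of the time.
kept = rng.random(n) < 0.6
y_label = np.where((group == 1) & y_true, y_true & kept, y_true)

features = np.column_stack([x, group])
model = LogisticRegression(max_iter=1000).fit(features, y_label)
pred = model.predict(features)

for g, name in [(0, "A"), (1, "B")]:
    mask = group == g
    r = recall_score(y_true[mask], pred[mask])
    print(f"group {name}: recall against true outcomes = {r:.3f}")
```

The model was never told to discriminate; it simply learned the gap in the records as if it were a real difference in risk.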

In fact, the National Institute for Health Care Management (NIHCM) Foundation has found that AI used in medical settings draws on data that underrepresents women and people of colour. As a result, computer-aided diagnosis (CAD) systems often overlook the social determinants of health and diagnose these groups less accurately.

In a 2021 study, NIHCM determined that CAD systems in America gave Black patients the same risk scores as white patients, even though the Black patients were considerably sicker. This racial bias within the algorithm risks deepening racial disparities in healthcare and undermining patients’ ability to receive the care they need.

How can we prevent this bias?

Preventing AI bias means continuously monitoring and questioning the decisions these systems make.
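As a rough illustration of what such monitoring might look like in practice (the data and function below are hypothetical, not drawn from any real deployment), a basic audit can compare a model’s risk scores against patients’ actual condition, broken down by group, to surface the kind of disparity the NIHCM study described.

```python
import numpy as np

def audit_risk_scores(scores, severity, group, names=("A", "B")):
    """Print mean assigned risk score and mean actual severity per group.

    If two groups show similar scores but very different severity,
    the scoring system deserves scrutiny.
    """
    for g, name in enumerate(names):
        mask = group == g
        print(f"group {name}: mean score = {scores[mask].mean():.2f}, "
              f"mean severity = {severity[mask].mean():.2f}")

# Hypothetical data echoing the study's pattern: group B is sicker
# on average, yet receives the same scores as group A.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, 1_000)
severity = rng.normal(loc=np.where(group == 1, 0.7, 0.4), scale=0.1)
scores = rng.normal(loc=0.5, scale=0.05, size=1_000)   # scores ignore the gap
audit_risk_scores(scores, severity, group)
```

A simple report like this, run regularly, is one way to question a system’s decisions rather than take them at face value.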

“We should know about the strengths and weaknesses of AI in order to maintain realistic expectations about the value of AI systems,” says Associate Professor Shafiabady.

“AI isn’t a magical or mystical tool; it is based on mathematical calculations. We don’t want to become too reliant on AI’s predictions.”

When asked about the best ways to stop AI bias, Associate Professor Shafiabady emphasised the need for education and legislation.

“There are many factors leading to inaccuracies in AI’s prediction outcomes. These include improper or unrelated training data, or data which is not up-to-date. The best way to avoid these issues is having the knowledge and experience to train the AI’s decision-making brain properly.

“Similarly, we need legislation to control the use of technology. Unfortunately, since monitoring technology does not align with the profit motives of tech giants and big bankers, it will not be easy to implement this legislation. Now is the time to start thinking carefully about the future of technology and its role in our society.”

However, it’s not all bad news. Experts believe there is a place for AI in our technological future, as long as it is carefully monitored and regularly assessed.

“I personally hope that technology becomes a tool used to assist us in different aspects of our lives. By remaining vigilant and making smart decisions, we can ensure that AI benefits all communities,” Associate Professor Shafiabady concludes.