From Bias to Fairness: Navigating the Ethics of AI

What is AI bias? AI bias occurs when AI systems produce discriminatory results because of skewed data or flawed design.

AI has permeated every aspect of our lives (even our love lives) and is used in hiring, healthcare, and finance, where biases can have a heavy impact on people’s lives.

Let’s take a closer look at studies and facts:

“Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification” by Joy Buolamwini and Timnit Gebru.

  • Goal: This research investigated how commercial gender classification algorithms perform across demographic groups, particularly across different skin types and genders.
  • Background: Previous research has shown that there are AI biases in facial recognition tools and that these biases can result in unfair accusations or treatment.
  • The problem: Many facial recognition datasets inadequately represent diverse ethnic groups, such as people of African descent.
  • Conclusion: The study provides evidence that commercial gender classifiers are least accurate for darker-skinned females. There is a need for inclusive databases, detailed subgroup performance reports, and strategies to improve the data for fairer algorithms.

“Fairness and Abstraction in Sociotechnical Systems” by Selbst et al.

  • Goal: Identify the traps that purely technical fairness interventions fall into when applied to decision-making systems without considering the surrounding societal context, and how to avoid them.
  • The problem: Such systems are treated like black boxes, defined entirely by their inputs and outputs. They ignore the social settings they operate in, such as employment and criminal justice, and treat fairness as just another property to be optimized by an algorithm.
  • Conclusion: The main proposed solution is to focus on determining where and how a technical solution should be applied at all. The best approach is to concentrate on process-oriented designs that take into account both human interactions and social contexts.

“Ethics of Artificial Intelligence and Robotics” by Vincent C. Müller.

  • Goal: Explore the ethical issues of AI
  • The problem: The ethical issues raised by AI systems fall into two groups: first, issues raised by AI as a tool that humans have created, such as manipulation, bias, privacy, and opacity; and second, issues raised by AI as a subject, which is the domain of machine ethics.
  • The problem for bias: The concern is that AI will increasingly be used for prediction, as data analytics are used to forecast developments in healthcare and business. The fear is that predictive policing may be applied so extensively that it erodes human liberties. The dystopian scenario of the movie “Minority Report”, in which a system predicts the future act of a would-be criminal, may come to life! One worry is that AI systems will perpetuate an existing bias in their training data: for example, an AI sends more police patrols to an area, more crimes are then found in that area, and the cycle reinforces itself.
  • Conclusion for AI bias: Veale and Binns (2017) suggest that instead of relying only on technical fixes, institutions should implement broader policies and oversight to ensure fairness, involving collaboration between technical experts and policymakers.
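The policing feedback loop described above can be sketched as a toy simulation. All numbers here are hypothetical, and the model is deliberately minimal: two neighborhoods with the same true crime rate, where patrols are allocated in proportion to previously recorded crime.

```python
# Toy simulation of the predictive-policing feedback loop (hypothetical numbers).
# Both neighborhoods have the SAME true crime rate, but patrols are allocated
# in proportion to historically recorded crime, and crimes are only recorded
# where patrols are present, so an initial recording imbalance never corrects.

TRUE_RATE = 0.05        # identical underlying crime rate in both areas
TOTAL_PATROLS = 100

recorded = {"A": 60.0, "B": 40.0}   # historical records start skewed toward A

for year in range(10):
    total = recorded["A"] + recorded["B"]
    new_records = {}
    for hood, count in recorded.items():
        patrols = TOTAL_PATROLS * count / total
        # Crimes *found* scale with patrols, not with actual behavior.
        new_records[hood] = patrols * TRUE_RATE
    recorded = new_records

share_a = recorded["A"] / (recorded["A"] + recorded["B"])
print(f"A's share of recorded crime after 10 years: {share_a:.0%}")  # stays at 60%
```

Even though the two areas are identical, the data never "discovers" this: the initial 60/40 skew in the records is reproduced year after year, because observation follows the records rather than reality.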

Bias in Recruitment Tools

Amazon discontinued a recruiting algorithm after finding it exhibited gender bias. The issue arose because the algorithm was trained on 10 years of resumes, most of which came from men. This data led the AI to favor resumes resembling those from Amazon’s male-dominated engineering workforce. As a result, the algorithm penalized resumes that included the word “women’s” or referenced women’s colleges, reinforcing gender bias and disadvantaging female applicants.
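A minimal sketch, with entirely hypothetical data, of how this kind of penalty emerges: if each word is scored by how much more often it appears in historically hired resumes than in rejected ones, words tied to female applicants get negative scores purely because of the skewed history, not because of any judgment about merit.

```python
# Hypothetical illustration of how a resume screener trained on a
# male-dominated hiring history learns to penalize gendered words.
import math
from collections import Counter

# Toy training history: past hires were mostly men, so "women's"
# appears almost exclusively among the rejected resumes.
hired = ["java developer chess club", "c++ engineer robotics",
         "java engineer football captain"]
rejected = ["java developer women's chess club",
            "engineer women's college robotics"]

def word_freqs(docs):
    counts = Counter(w for d in docs for w in d.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

f_hired, f_rej = word_freqs(hired), word_freqs(rejected)
EPS = 1e-3  # smoothing for words unseen in one class

def score(word):
    # Positive = associated with past hires; negative = with past rejects.
    return math.log(f_hired.get(word, EPS) / f_rej.get(word, EPS))

for w in ("java", "women's"):
    print(f"score({w!r}) = {score(w):+.2f}")
```

Running this gives a positive score for "java" and a strongly negative one for "women's": the model has encoded the historical imbalance as a rule.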

In word associations

Princeton University researchers analyzed 2.2 million words and found that AI systems exhibit human-like biases. For instance, “woman” and “girl” were more often linked with the arts, while “man” and “boy” were associated with science and math.
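The measurement behind findings like this is simple: in a word-embedding model, the "association" of a word with a category is its average cosine similarity to words in that category. The 2-D vectors below are hypothetical stand-ins arranged to mirror the reported pattern; the actual study used large pretrained embeddings.

```python
# Sketch of an embedding-association measurement with hypothetical 2-D vectors.
import math

def cos(u, v):
    # Cosine similarity between two 2-D vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# Hypothetical embeddings: "woman" placed near arts words, "man" near
# science words, mimicking the bias pattern the researchers reported.
vec = {
    "woman":   (0.9, 0.2),  "man":   (0.2, 0.9),
    "poetry":  (0.8, 0.1),  "dance": (0.9, 0.3),   # arts
    "physics": (0.1, 0.8),  "math":  (0.2, 0.95),  # science
}

def association(word, category):
    # Mean cosine similarity of `word` to all words in `category`.
    return sum(cos(vec[word], vec[c]) for c in category) / len(category)

arts, science = ["poetry", "dance"], ["physics", "math"]
print("woman-arts:   ", round(association("woman", arts), 2))
print("woman-science:", round(association("woman", science), 2))
```

With these stand-in vectors, "woman" scores much closer to the arts words than to the science words, which is the shape of the bias the study measured in real embeddings.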

In criminal justice

COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is an algorithm used to assess whether defendants should be detained or released on bail. A ProPublica report found that it was biased against African Americans: Black defendants who did not go on to reoffend were flagged as high risk almost twice as often as white defendants.
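The metric at the heart of that analysis is the false positive rate per group: the share of defendants labeled high risk who did not in fact reoffend. The records below are hypothetical, invented only to show the computation, not ProPublica's actual data.

```python
# False-positive-rate-by-group computation on hypothetical records.
# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("black", True,  False), ("black", True,  False),
    ("black", True,  True),  ("black", False, False),
    ("white", True,  False), ("white", False, False),
    ("white", False, False), ("white", True,  True),
]

def false_positive_rate(group):
    # Among defendants in `group` who did NOT reoffend, what fraction
    # was nevertheless flagged as high risk?
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for g in ("black", "white"):
    print(f"{g}: FPR = {false_positive_rate(g):.0%}")
```

An algorithm can look "equally accurate" overall while its errors fall unevenly, which is exactly the disparity this per-group breakdown exposes.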

The Need for AI Ethics Policies

So far, gender- and race-based biases are the ones most often found, but unfairness stemming from socioeconomic inputs and health disparities has also been detected.

As with every human creation, AI has been made with love and aspiration, but also with unintentional biases and assumptions. Its own ingredients, namely its features and data, can by default be overemphasized and lead to prejudice.

That is why an ethical AI policy is increasingly vital for the future. Governments and organizations across the globe are setting up guidelines for the fair and ethical use of AI, such as the European Union’s AI Act.

Do you think it’s possible for ethical AI policies to effectively prevent bias?

Could AI ultimately evolve to teach humans about non-bias and fairness instead of reinforcing existing prejudices?

Leave a comment