
I was thinking a little further about bias in artificial intelligence systems. It is easy to demonstrate; I have shown it myself in image generation tools with a simple request for a doctor or a nurse in a medical setting, and the various biases in the resulting images are very clear indeed. But I think it is also important that we acknowledge the inherent bias in us as humans. We get by despite this inherent bias, yet seem unwilling to accept similar bias in AI solutions. Why is this the case? What is the difference?
I thought a good place to start was to ask ChatGPT for its thoughts on the differences in bias between AI systems and humans. It created a list of sources of bias for AI systems and a corresponding list for humans, as follows:
AI and Bias
1. Training data: AI systems learn from data, and if the training data is biased, the model can inherit and perpetuate those biases.
2. Algorithmic design: Bias can also be introduced through the design of the algorithms themselves, including the choice of features, learning objectives, and optimization processes.
3. Unintentional amplification: AI systems may unintentionally amplify and propagate existing biases present in the training data, leading to unfair or discriminatory outcomes.
4. Lack of consciousness: AI systems lack consciousness and awareness. They don’t possess intentions or beliefs, so any bias they exhibit is a result of patterns learned from data without an understanding of the social context.
5. Automated decision-making: Bias in AI can lead to automated decision-making that affects individuals in various domains, such as hiring, lending, and criminal justice.
Reading the above, items 1 and 3 feel like a repetition, while item 4 raises the question of what consciousness is and how we might evidence it in a future Artificial General Intelligence (AGI) solution. That is a question I will leave for now, or leave to brighter people than me. Item 5 is less about the source of bias and more about the impact it can have on humans.
Human Bias
In terms of humans, ChatGPT listed the following:
A. Human bias is often rooted in subjective beliefs, attitudes, and cultural influences. Humans can be consciously biased based on personal experiences, opinions, or societal conditioning.
B. Humans can exhibit bias with intent, driven by personal beliefs, prejudices, or discriminatory motivations. This may not be the case for AI, which lacks intentionality.
C. Unlike AI systems that learn from data in a structured manner, humans can adapt, unlearn biases, and consciously strive to be more objective through education and self-awareness.
D. Humans possess a deep understanding of social, cultural, and contextual nuances, allowing for more sophisticated decision-making that takes into account the complexity of real-world situations. AI systems, on the other hand, may struggle with nuanced or context-dependent scenarios.
Considering item A, with its subjective beliefs, attitudes and cultural influences: aren't these the training data we as humans are provided with, shaping our neural pathways and our actions? This is your upbringing, parenting, friends, local and national culture and values, and so on. We are exposed to this experiential training data throughout our lives, whereas an AI can be provided with similar training data in a far shorter period of time. Item B then comes from A in the same way as an AI's bias might come from its training data or algorithmic design. And I note that the design of human beings, as evolved over time, has resulted in some features which are sub-optimal in the modern world. Take for example the fight-or-flight response kicking in during a heated discussion. In the past, all the relevant hormones released by fight or flight would be used up in the resultant fight, or in running away from the teeth and claws of a predator, whereas in the boardroom these hormones have nowhere to go. Does the boardroom really merit an increase in heart rate and respiration? And that's before I dip into the availability bias, the halo effect and the many heuristic shortcuts we subconsciously use.
Items C and D, in my opinion, paint an overly positive picture of us humans and our ability to unlearn bias and show a "deep" understanding. Yes, this may be possible, but it isn't easy, as humans may be unaware of their biases, or bias might colour their perception of their own understanding. Take for example confirmation bias, where we might simply pick the facts or information which align with our view, discarding or undervaluing counter-facts or information.
It was at this point, considering AI and humans, that I found myself noting the plural: humans. Maybe this is the key. Humans work together, whereas an AI solution is a single entity, and maybe this is where the impact of bias diverges between humans and AI. If we can gather a diverse group of individuals, that diversity can actively work towards identifying and removing bias. An AI solution, as a single entity, doesn't benefit from access to others; it simply takes the prompt and kicks out a response.
But maybe we could look to multiple AI solutions working together? Maybe it is a number of AIs working together, or working alongside humans? I have frequently talked about IA, AI as an Intelligent Assistant, and maybe this is where the answer lies: an AI, with its bias, and a human, with their bias, working together and hopefully cancelling out each other's biases.
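The intuition in the last two paragraphs, that independently biased judges pooled together can partially cancel each other's errors, can be sketched as a toy simulation. Everything here is an assumption made for illustration: the "judges" stand in for either humans or AI models, each one's systematic bias is drawn at random, and the numbers are arbitrary.

```python
import random
import statistics

random.seed(0)
TRUE_VALUE = 50.0  # the "unbiased" answer the group is trying to reach

# Each judge (human or AI) carries its own systematic bias, drawn
# independently so the biases point in different directions.
judge_biases = [random.uniform(-10, 10) for _ in range(25)]

def estimate(bias: float) -> float:
    # A single judge's estimate: the truth, plus that judge's
    # systematic bias, plus a little random noise.
    return TRUE_VALUE + bias + random.gauss(0, 1)

individual = [estimate(b) for b in judge_biases]
pooled = statistics.mean(individual)

worst_solo_error = max(abs(e - TRUE_VALUE) for e in individual)
pooled_error = abs(pooled - TRUE_VALUE)

print(f"worst individual error: {worst_solo_error:.2f}")
print(f"pooled-estimate error:  {pooled_error:.2f}")
```

Because the biases are drawn independently around zero, averaging a diverse panel pulls the pooled estimate much closer to the truth than the worst individual judge gets. The caveat, of course, is the "independently" part: if every judge shares the same bias, as a homogeneous group of humans, or AIs trained on the same data, might, no amount of pooling cancels it.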
Conclusion
I think it's important that anyone seeking to use generative AI is aware of the inherent bias that may exist within such tools. That said, I think the narrative on AI bias is rather shallow and limited, focusing on pointing out the shortcomings of AI without considering the bias which exists in ourselves as humans. I think we need to get more nuanced in our discussions here and look towards how we might address bias in general, whether it be AI- or human-related.