Bias (AI and human)

I was thinking a little further about bias in artificial intelligence systems. It is easy to demonstrate, and I have shown it myself in image generation tools with a simple request for a doctor or a nurse in a medical setting; the various biases in the resultant images are very clear indeed. But I think it is also important that we acknowledge the inherent bias in us as humans. We get by despite this inherent bias, yet seem unwilling to accept similar bias in AI solutions, so why is this the case? What is the difference?

I thought a good place to start was to ask ChatGPT for its thoughts on the differences in bias between AI systems and humans. It produced a list of sources of bias for AI systems and a corresponding list for humans, as below:

AI and Bias

  1. Training data: AI systems learn from data, and if the training data is biased, the model can inherit and perpetuate those biases.
  2. Algorithmic design: Bias can also be introduced through the design of the algorithms themselves, including the choice of features, learning objectives, and optimization processes.
  3. Unintentional amplification: AI systems may unintentionally amplify and propagate existing biases present in the training data, leading to unfair or discriminatory outcomes.
  4. Lack of consciousness: AI systems lack consciousness and awareness. They don’t possess intentions or beliefs, so any bias they exhibit is a result of patterns learned from data without an understanding of the social context.
  5. Automated decision making: Bias in AI can lead to automated decision-making that affects individuals in various domains, such as hiring, lending, and criminal justice.

Reading the above, items 1 and 3 feel like a repetition, while item 4 raises the question of what consciousness is and how we might evidence it in a future Artificial General Intelligence (AGI) solution. This is a question I will leave for now, or perhaps for brighter people than me. Item 5 is less about the source of bias and more about the impact it can have on humans.

Human Bias

In terms of humans, ChatGPT listed the following:

  1. Human bias is often rooted in subjective beliefs, attitudes, and cultural influences. Humans can be consciously biased based on personal experiences, opinions, or societal conditioning.
  2. Humans can exhibit bias with intent, driven by personal beliefs, prejudices, or discriminatory motivations. This may not be the case for AI, which lacks intentionality.
  3. Unlike AI systems that learn from data in a structured manner, humans can adapt, unlearn biases, and consciously strive to be more objective through education and self-awareness.
  4. Humans possess a deep understanding of social, cultural, and contextual nuances, allowing for more sophisticated decision-making that takes into account the complexity of real-world situations. AI systems, on the other hand, may struggle with nuanced or context-dependent scenarios.

Considering item 1 and subjective beliefs, attitudes and cultural differences, aren't these the training data we as humans are provided with, shaping our neural pathways and our actions? This is your upbringing, parenting, friends, local and national culture and values, etc. We are exposed to this experiential training data throughout our lives, whereas an AI can be provided with similar training data in a far shorter period of time. Item 2 then follows from item 1 in the same way as an AI's bias might come from its training data or algorithmic design. And I note that the design of human beings, as evolved over time, has resulted in some design features which are sub-optimal in the modern world. Take for example the fight-or-flight response kicking in during a heated discussion; in the past, the hormones released by fight or flight would be used up in the resultant fight or in running away from the teeth and claws of a predator, whereas in the boardroom these hormones have nowhere to go. Does the boardroom really merit an increase in heart rate and respiration? And that's before I dip into the availability bias, the halo effect and a number of heuristic shortcuts we subconsciously use.

Items 3 and 4, in my opinion, provide an overly positive view of us humans and our ability to unlearn bias and show a "deep" understanding. Yes, this may be possible, but it isn't easy, as humans may be unaware of their biases, or bias might colour their perception of their own understanding. Take for example confirmation bias, where we might simply pick the facts or information which align with our view, discarding or undervaluing counter facts or information.

It was at this point that I considered AI and humans and found myself noting the plural, humans; maybe this is the key. Humans work together, whereas an AI solution is a single entity, and maybe this is where bias diverges in its impact between humans and AI. If we can gather a diverse group of human individuals, this diversity can actively work towards identifying and removing bias. An AI solution, as a single entity, doesn't benefit from access to others; it simply takes the prompt and kicks out a response.

But maybe we could look to multiple AI solutions working together? Maybe it is a number of AIs working together, or working alongside humans? I have frequently talked about IA, AI as an Intelligent Assistant, and maybe this is where the answer lies: an AI, with its bias, and a human, with their bias, working together and hopefully cancelling out each other's biases.
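The hope that diverse, independently biased viewpoints can partially cancel each other out can be sketched numerically. The "models", their bias offsets and the numbers below are entirely hypothetical, a minimal illustration of the averaging effect rather than any real AI system:

```python
import random

random.seed(42)

TRUE_VALUE = 100.0

# Hypothetical "models", each with its own systematic bias (offset) and some
# noise. The offsets are diverse: some over-estimate, some under-estimate.
biases = [-9.0, -4.0, 3.0, 6.0, 8.0]

def model_estimate(bias):
    """One biased estimator: true value + systematic bias + random noise."""
    return TRUE_VALUE + bias + random.gauss(0, 2.0)

# Average many runs so the systematic part of each error stands out.
runs = 2000
individual_error = [0.0] * len(biases)
ensemble_error = 0.0
for _ in range(runs):
    estimates = [model_estimate(b) for b in biases]
    for i, e in enumerate(estimates):
        individual_error[i] += (e - TRUE_VALUE) / runs
    ensemble_error += (sum(estimates) / len(estimates) - TRUE_VALUE) / runs

print("Systematic error of each model:", [round(e, 1) for e in individual_error])
print("Systematic error of the ensemble:", round(ensemble_error, 1))
```

The ensemble's systematic error ends up smaller than that of any individual model, but only because the biases point in different directions; a group of agents, human or AI, that all share the same bias gains nothing from averaging.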

Conclusion

I think it's important that anyone seeking to use generative AI is aware of the inherent bias that may exist within such tools. That said, I think the narrative on AI bias is rather shallow and limited, focusing on pointing out the shortcomings of AI in relation to bias without considering the bias which exists in ourselves as humans. I think we need to get more nuanced in our discussions here and look towards how we might address bias in general, whether AI or human related.

Social Media – A magnifier on society

Social media acts as a magnifier on society. This can be both a good thing and a bad thing. In a good way, it allows the quiet masses to have a voice and to express their opinions. Before social media these people would not stand up, write an article in a newspaper or otherwise express their views publicly. Now they can easily like or share the posts they agree with, adding their voice to the message. And if they feel strongly, they can even add their own comments and thoughts, reasonably safe in the knowledge that their voice won't stand out. We have seen this over the last few days as messages rejecting racism have been liked and shared in their thousands. Social media has enabled a larger part of the population to contribute to the collective voice online.

But there is a flip side to this.  Social media provides a platform for a minority of people to share inappropriate comments with the masses, including racist views.    Prior to social media these people might have expressed the same racist views in public, but they never had much of an audience and the message never got very far.   Now, with social media, they can share their views instantly with millions of people.   They also feel safe in the knowledge that identifying them, where they have taken precautions, is not easy and therefore their comments are likely without consequence.    Social media has enabled this minority to engage a larger part of the population with their inappropriate messaging.

For me, racism has no place in today's society and should be called out and challenged at every opportunity.

I would, however, highlight an additional concern in relation to viewing society through the magnifier of social media, and how this can result in a distorted view of society. Social media suggests to me that racism is more prevalent, based on the large number of social media posts calling out racism, and by extension the suggestion of a larger number of racist posts. I am not sure, based on my experiences, that it is more prevalent. I suspect the availability bias is playing a part here. I believe I heard racist comments more frequently when I was younger than I do now, so this might at least suggest we are heading in the right direction, albeit we can never stop until racism has been eliminated.

I also have concerns about the viral nature of social media, which can lead to massive outpourings of support or concern, but only for a short period of time, followed by people moving on to the next viral message. Racism is linked to culture, and culture is changed gradually through consistent changes in behaviours, the stories that are told, etc. Viral but short-lived messaging is likely to do little to impact culture and the prevalence of racism; only prolonged and consistent changes in behaviour and messaging will have this effect. I personally started questioning the taking of the knee at the start of football events as being a little tokenistic; however, considering it again, maybe that consistent message is exactly what we continue to need in the hope of long-term change.

Social media, for me, isn't the problem here, but it magnifies and possibly distorts it. I am concerned that in seeking to address the issue at hand, currently racism in particular, we focus on social media and the social media companies. Yes, they need to do all they can, and possibly more than they are doing, but the issue is a societal one, not a technology one. Technology is just making it more visible, but maybe distorting the situation in the process.

As such, I think the key here is greater awareness of how social media fits into situations like this: how social media doesn't just report and share news, but how its very use shapes the news and the message being shared. I hope this post contributes a little to this awareness.

AI and Bias

I recently saw an article in the Guardian regarding a call from an artificial intelligence expert to cease using AI in the UK due to concerns that such systems were "infected with biases" and couldn't be trusted (McDonald, 2019).

I too have concerns in relation to bias in AI, particularly where AIs are black-box systems and we are unable to ascertain how an AI might have arrived at a specific decision. For example, the Guardian article references immigration-related applications of AI, so an AI might decide to approve or reject an immigration application based on the data available to it. The danger here, in my view, is the potential lack of transparency in the AI's decision-making process.

Despite my concerns, however, I do not advocate banning AI use, as the alternative to using AI is to use human decision making, and human decision making is far from free of bias. In Sway (2020), P. Agarwal states that "we are all biased – to a certain degree", going on to discuss human bias, and particularly unconscious bias, in detail. Agarwal also states that "we cannot erase our biases completely" and, in relation to technology use, suggests that technology solutions, which therefore include AI, "incorporate the biases from the designers and data engineers" who design them. As such, it doesn't seem fair to hold AIs to a standard, that of being free of bias, which the human designers and users of such systems are themselves unable to achieve.

For me, the critical issue is being aware of the bias which may exist and seeking to mitigate and manage the resultant risks. We have to accept that bias is unavoidable: it is unavoidable in us humans, and also unavoidable in the systems and AIs we create. It is due to this need for awareness that my concern regarding the potential lack of transparency arises.

References:

McDonald, H. 2019. AI expert calls for end to UK use of 'racially biased' algorithms. [Online]. [Accessed 27 December 2020]. Available from: https://www.theguardian.com/technology/2019/dec/12/ai-end-uk-use-racially-biased-algorithms-noel-sharkey

Agarwal, P. 2020. Sway: Unravelling Unconscious Bias. United Kingdom: Bloomsbury Publishing.

Availability Bias and the news

Watching the BBC news this morning, I saw a perfect example of the availability bias: a news anchor pinning down a government representative over Covid testing, stating that people had contacted the programme following issues they had booking a Covid test. The news anchor used these individual reports as proof of the problems related to getting a Covid test, even citing the specific details of one or two people.

Now, I am not pretending that Covid testing is perfect or not in need of improvement, but to use the available reports as proof of the failings of Covid testing seems a perfect example of availability bias. The raised issues, being readily available and readily coming to mind, become the proof, without considering evidence which isn't as readily available. Take for example those people who quickly and easily got a test; these people are unlikely to contact a news programme to report their satisfaction. Or consider the number of people dissatisfied as a percentage of the number of tests, the increasing volume of tests, or the testing regimes in Covid hotspot areas versus those in areas not so badly impacted. This data may be possible to gather, but it isn't as readily available as a number of reported complaints.

This all reminds me of a story from WWII, often described as survivorship bias. Originally, when looking at bomber planes, analysts would reinforce the areas of planes which regularly showed damage from anti-aircraft (AA) fire, as these seemed to be the areas suffering regular hits. The idea was that by reinforcing these areas the chances of bombers returning would increase; however, this didn't happen. It was only when someone suggested they look at the areas on which returning bombers never showed damage that they made progress. The logic here was that when these areas were hit by AA fire, the bombers simply never returned; the damage was critical. These were the areas to focus on reinforcing. In this case, the easily available data, damage to returning aircraft, wasn't as helpful as it at first appeared.
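The bomber story can be sketched as a small simulation. The section names, hit rates and loss probabilities below are entirely hypothetical, chosen purely to illustrate the effect:

```python
import random

random.seed(0)

# Hypothetical model: each sortie, a plane takes one hit in a random section.
# Hits to some sections (engine, cockpit) are usually fatal; hits to others
# (fuselage, wings) are usually survivable.
SECTIONS = ["engine", "cockpit", "fuselage", "wings"]
LOSS_PROBABILITY = {"engine": 0.8, "cockpit": 0.7, "fuselage": 0.1, "wings": 0.15}

hits_taken = {s: 0 for s in SECTIONS}          # all hits, survivors and losses
hits_on_returners = {s: 0 for s in SECTIONS}   # the only data the analysts saw

for _ in range(10_000):
    section = random.choice(SECTIONS)
    hits_taken[section] += 1
    if random.random() > LOSS_PROBABILITY[section]:
        hits_on_returners[section] += 1  # plane made it home, damage visible

print("Hits actually taken:     ", hits_taken)
print("Damage seen on returners:", hits_on_returners)
```

Every section is hit roughly equally often, yet the returning planes show far less damage on the engine and cockpit, precisely because hits there tend to be fatal. Looking only at the readily available data, the damage on returners, points reinforcement at exactly the wrong sections.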

The issue for me with the BBC falling into the availability bias trap is that the BBC is meant to be a bastion of truth, and currently I believe more people than ever are regularly watching the morning or evening news. That they would report in such a biased way, and therefore potentially propagate a biased viewpoint, is concerning.

While we often focus on social media bias and what Facebook, etc. are doing, we maybe need to be careful not to take our eyes off what the old conventional news outlets are doing.