Should AI be held to higher standards than humans?

Darren White posted an interesting question on Twitter the other day in relation to the standards we hold AI to: should AI be held to higher standards than humans? This is something I have given some thought to, given my interest in both human heuristics and bias and artificial intelligence.

Discussions on AI

There is already a lot of discussion regarding issues and challenges related to AI, including bias and inaccuracy or “hallucinations”. I have been able to recreate both of these issues reasonably easily within generative AI solutions. First, I asked an image generation solution to create a picture of a nurse in a hospital setting and then a doctor in a hospital setting; the images were all of white individuals, with the nurses all female and the doctors all male. The evidence of bias was clear to see. In a separate experiment with a tool to help with report writing, the developer forgot to provide any data in relation to the fictitious student for whom a report was being created, yet the tool simply made the report content up. These issues are therefore easy to demonstrate, and it is easy to jump to a standpoint where bias needs to be removed and inaccuracies or hallucinations stopped.

A human view

One of the issues here is that I believe we need to take a cold hard look at ourselves, at human beings, and at how we might respond to prompts if such prompts were directed at us rather than at an AI. Would we fare so much better than an AI? I have a lovely poster in my office on the cognitive biases which impact human decision making, and there has been plenty written about this and about heuristics, with Daniel Kahneman’s book, Thinking, Fast and Slow, being one of my favourites. A key issue is that we are often not aware of the internal or “fast” biases which affect us, and we may therefore assess our biased decisions as being free of bias. In terms of hallucinations, we humans suffer the same issue, often stating facts based on memory and holding to these facts even when presented with contradictory evidence; the availability and confirmation biases may be at play here. Another challenge when comparing ourselves with AI is that our own biases and hallucinations are not clear for us to see, albeit they may be clear to others, yet with AI bias and hallucinations, at least in the form of the examples raised above, they are clear for all to see.

End point?

I would suggest that in both AI and in human intelligence our ideal would be to remove bias and inaccuracy. I would also suggest that although this is a laudable aim, it is also impossible. As such, rather than focussing on the end we need to focus on the journey and on how we might reduce bias and inaccuracy both in humans and in AI. It may be that reducing bias in humans benefits AI; it may also be that things work the other way, and that discoveries which help reduce bias in AI help with bias in humans. I note that a lot of human thinking, especially our fast thinking, can be reduced to heuristics, “generalisations” or “rules of thumb”; how is this much different from the quick processing of a generative AI solution? Does generative AI’s probabilistic nature not tend towards the quick creation of generalisations, albeit ones based on huge data sets?

The future

So far, I have avoided getting pulled into the future and artificial general intelligence, and I mention it for completeness only. It will likely arrive at some point, and most who claim to be AI experts seem to agree with this, however there is much disagreement as to when. As such, our immediate challenge is that of the generative AI we have now and its advancement, rather than the creation of an AI solution capable of more generally out-thinking us across different domains. That said, I would suggest that in a number of ways generative AI can already outperform us in many domains.

Conclusion

So, back to the question at hand: should we seek to hold AI to higher standards? We should seek to avoid outcomes which have a negative impact on humankind, so bias and inaccuracy, along with the other challenges related to intelligence, such as equality of access to education, are all things we should seek to reduce. This, I think, is a common aim and can be applied to both humans and AI. In terms of the accepted standard, I think it is currently difficult to hold AI to a higher standard than we hold humanity, given the solutions are created by humans, trained on human-supplied data and used by humans. It may be that in AI solutions you get a glimpse of how entrenched some of our human biases actually are. That said, I also think it might be easier to remove bias and inaccuracy from an AI solution than from a human; I doubt the AI will seek to hold onto its position or to counter-argue a viewpoint, at least not yet.

Author: Gary Henderson

Gary Henderson is currently the Director of IT in an independent school in the UK. Prior to this he worked as the Head of Learning Technologies with public and private schools across the Middle East. This included leading the planning and development of IT within a number of new schools opening in the UAE. As a trained teacher with over 15 years working in education, his experience includes UK state secondary schools, further education and higher education, as well as various international schools teaching various curricula. This has led him to present at a number of educational conferences in the UK and Middle East.
