
Meta released a chatbot in the US whose responses are based on internet-sourced data. It wasn't long before the chatbot was being less than positive about Meta's CEO, Mark Zuckerberg. Overall a bit of a novelty, but it might also give us a little insight into the Artificial Intelligence and Machine Learning algorithms which underpin an increasing number of the services we use online.
It is highly unlikely that Meta specifically programmed its chatbot to suggest that the CEO did "a terrible job in testifying before Congress", yet this is the feedback it provided when asked "what did you think of Mark Zuckerberg". The response is likely the result of the chatbot analysing data sources on the internet and identifying this answer as the one most likely to be true, or at least true in the perceptions of those sharing their thoughts online. So here we see a few problems:
- As users, and even as developers, we will not necessarily be able to identify how the response was arrived at. It's a black box system: we can see the inputs and the outputs but not the process. This should make us a little nervous because, especially for important decisions, it would be nice to understand how the answer an algorithm provides was arrived at. Imagine an AI being used to assess mortgage applications: how would you feel if no one could explain why your application was refused? From a user's point of view, a black box system also carries the danger that the service provider controls the algorithm and can therefore directly influence its output to suit their own needs. In this case the black box provides a smokescreen for potentially unethical practices.
- The chatbot repeats what it sees to be true, or the commonly held belief, based on the data sources it accesses. Bias could easily be introduced here through the internet sources the chatbot is given access to, or through the queries it uses to identify pertinent information. We should naturally question a solution which may be inherently biased. One example is the set of problems surrounding facial recognition, where AIs were trained largely on white faces, reflecting the predominant skin colour among those developing the solutions. As a result we ended up with AIs which did a poorer job of facial recognition when presented with non-white faces.
- Again relating to the repetition of commonly held belief, the chatbot may simply act as an echo chamber, disregarding minority views. And if a number of chatbots were used together, they could powerfully shape the perceived truth on social media channels through repeated posting.
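The mechanism behind the second and third problems can be sketched in a few lines: a bot that simply echoes whichever view dominates its sources will faithfully reproduce any bias in, and discard any minority view from, the corpus it was fed. This is a toy illustration only, not Meta's actual system; the `pick_response` helper and the sample data are entirely hypothetical.

```python
from collections import Counter

def pick_response(scraped_opinions):
    """Return the most frequently seen opinion.

    A toy stand-in for a far more complex pipeline: the bot
    echoes whatever view dominates its sources, so a biased
    corpus yields a biased answer and minority views vanish.
    """
    counts = Counter(scraped_opinions)
    # most_common(1) returns a list like [(opinion, count)]
    return counts.most_common(1)[0][0]

# Hypothetical scraped data: the majority view wins outright.
opinions = [
    "did a terrible job testifying",
    "did a terrible job testifying",
    "handled the hearing well",
]
print(pick_response(opinions))  # -> "did a terrible job testifying"
```

Note that nothing in this selection step checks whether the majority view is *true*; it only measures how often it appears, which is exactly why a coordinated group of bots posting the same claim could shift what such a system reports.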
Some of the above is concerning, but then I start to think about the alternative: a human- rather than AI-based system. Humans are not transparent in their thinking processes either; although they might seek to explain how they arrived at a decision, we rely on sub-conscious influences and decision-making processes to which we have no access. Humans, just like an AI-based system, may be biased or may seek to serve their own needs or the needs of their employer. And humans also tend towards the like-minded, which creates the very echo chambers mentioned above. So maybe AI is no more problematic than a human-based solution.
Is the challenge, therefore, that AI is technology rather than a human being like us? Is it that this difference influences our feeling of unease with the risks mentioned above, while we simply accept similar issues in human-based processes because, after all, we are "only human"?