Who poisoned the AI?

One of the challenges with Artificial Intelligence solutions is cyber risk, such as that presented by AI poisoning.  When I seek to explain poisoning, the example I often use is of an artist who wanted to keep traffic away from a particular street.  To do this he simply purchased a number of cheap smartphones, put them in a little trolley and walked the trolley slowly down the chosen street.  To Google Maps, a cluster of smartphones progressing very slowly down a street looked like a traffic jam or accident, so Google Maps redirected drivers away from the street.  In effect, the individual had poisoned the AI data model to bring about a generally unwanted outcome, at least from the point of view of Google Maps.

Poisoning might take a number of forms: through the input data received by the AI, such as the position information from the phones; through the prompts made to a generative AI solution; or through the training data provided, including where that training data incorporates prompts.  The key is that the AI solution is being manipulated towards an output that wouldn't normally be anticipated or wanted.  There are also cyber security concerns around poisoning being used to get AI solutions to disclose data.

That said, I previously read an article on AI poisoning in which the poisoning was presented as a solution to a problem rather than a risk.  In this case the problem is ownership and copyright of image content, where an AI vendor might scrape such content from the internet, often without permission or payment to the creator, and use it to train an AI.  The concern from copyright owners and artists is that they are creating works of art, images, etc., but as generative AI solutions are fed this data, the AI either copies elements of their works or can even be asked to create new works in their style.  Given that creators receive no remuneration for the use of their works in training an AI, and that the AI might lead them to receive less business, they are understandably concerned.  Enter Nightshade, a solution for poisoning an image.  What the tool does is change individual pixels within an image in a way that isn't perceptible to the human eye but will influence an AI solution.  The poisoned images therefore degrade the functionality of AI solutions which ingest them into their training data, while still looking perfectly acceptable from a human's point of view.
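The core idea of a near-invisible pixel change can be sketched in a few lines of Python.  To be clear, this is a toy illustration of a bounded per-pixel perturbation only; Nightshade's actual method crafts targeted perturbations designed to mislead specific model behaviours, which is far more sophisticated than the random noise shown here:

```python
import numpy as np

def poison_image(image, epsilon=2, seed=0):
    """Add a small, bounded per-pixel perturbation to an 8-bit image.

    epsilon is the maximum change per channel value; a shift of 1-2
    out of 255 is effectively invisible to the human eye, yet
    patterned perturbations at this scale can skew what a model
    learns.  NOTE: this toy uses random noise and is NOT Nightshade's
    real algorithm, which computes targeted perturbations.
    """
    rng = np.random.default_rng(seed)
    # Random integer noise in the range [-epsilon, +epsilon].
    noise = rng.integers(-epsilon, epsilon + 1, size=image.shape)
    # Apply the noise, keeping values in the valid 0-255 range.
    poisoned = np.clip(image.astype(int) + noise, 0, 255)
    return poisoned.astype(np.uint8)

# A stand-in 64x64 RGB "image" of mid-grey pixels.
image = np.full((64, 64, 3), 128, dtype=np.uint8)
poisoned = poison_image(image)

# Per-pixel change is bounded by epsilon, so the image looks unchanged.
max_change = int(np.abs(poisoned.astype(int) - image.astype(int)).max())
print(max_change)
```

The point the sketch makes is the asymmetry: a human comparing `image` and `poisoned` side by side would see two identical grey squares, while a training pipeline ingesting millions of such images receives systematically altered data.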

The above highlights technology and AI as a tool: poisoning can be used for malicious purposes, but in this case it can be used positively to protect the copyright of image creators.  The challenge, however, is that this technology for poisoning images will likely lead to AI solutions that can identify and discard poisoned images, or AI solutions that are tolerant of them.  It will end up as a cat-and-mouse game of AI vendors versus copyright holders, much like the cat and mouse between tech vendors seeking to create generative AI solutions which produce near human-like content and the detection tools seeking to detect where AI has been used.  Another challenge might be the malicious use of poisoned images to disrupt AI solutions, such as feeding poisoned images into a facial recognition or image recognition system in order to disrupt its operation.

I also think it is worth stepping back and looking at us as humans, and at how poisoning might work on human intelligence rather than artificial intelligence.  One look at social media, at propaganda and at the Cambridge Analytica scandal shows us that poisoning of intelligences, including human intelligence, isn't something new; I would suggest fake news is a type of intelligence poisoning, albeit possibly at a societal level.  Poisoning has been around for a while and I am not sure we have a solution.  So maybe, rather than looking at how we deal with or positively use the poisoning of artificial intelligence, we need to go broader and consider the poisoning of intelligence in general, human and artificial alike?

References

This new data poisoning tool lets artists fight back against generative AI, Melissa Heikkilä (2023), Technology Review, Downloaded 07/11/2023

Berlin artist uses 99 phones to trick Google into traffic jam alert, Alex Hern (2020), The Guardian, Downloaded 07/11/2023

Author: Gary Henderson

Gary Henderson is currently the Director of IT in an Independent school in the UK. Prior to this he worked as the Head of Learning Technologies working with public and private schools across the Middle East. This includes leading the planning and development of IT within a number of new schools opening in the UAE. As a trained teacher with over 15 years working in education, his experience includes UK state secondary schools, further education and higher education, as well as experience of various international schools teaching various curricula. This has led him to present at a number of educational conferences in the UK and Middle East.
