What does the future for schools and AI look like? (The risks and challenges)

My last post looked at the future of schools now that generative AI is in widespread use, taking a generally positive viewpoint. This post reverses that, looking at the challenges and risks and taking a more pessimistic stance on AI in education.

Personalised Learning, for those with access

AI does have great potential in schools and in education; however, I feel it will highlight the digital divide which already exists. Not all students have access to a device with which to use AI tools. Not all schools allow generative AI such as ChatGPT to be accessed, and schools have varying degrees of IT infrastructure and support. Additionally, some schools will be more forward-looking and will already be talking to staff and students about AI and generative AI, while others have yet to broach or even consider the subject. As such, the ability to access, understand and use AI positively will be varied rather than uniform across individuals. AI might therefore serve to widen the digital technology gap which already exists, with those who have access to technology, infrastructure and support benefitting from the personalised learning AI can offer, while those without fall further and further behind.

Lacking diversity

We also need to consider how AI works and the training data it has been provided with. Much of the development of AI has happened in the western world, where technology staff are still more often English-speaking, male and white. This creates a bias in the resulting AI solutions, and this bias has been widely reported. Fundamentally, our current generative AI uses its training data to generate its output, with this training data largely coming from the internet itself and with the process based on statistical modelling. This results in AI outputs which tend towards an average or statistically probable response based on the available training inputs. How does this then impact those who stray from this statistically average person or response? A perfect example of this lies in an email I saw shared on social media (see on twitter here), where an individual responded indicating they would prefer a human rather than an AI-generated response. It turns out the email was human-generated, and the original sender replied simply, “I’m just Autistic”.

What might the broader impact of AI, trained to output an average person or response, be on those who are neurodiverse and therefore differ from the average or norm? I will admit this issue isn’t new and can already be seen in the Google search engine’s AI-based responses. It tends towards the common answers, those which in probability are most likely to be favoured by users in general. The less common or more divergent answers, opinions and views are buried further down the search results, likely on later pages where few people ever go. But remember that your search engine provides a list of links to look at, whereas generative AI tends to provide a single response to consider in the first instance, so the issue may be more evident where generative AI is in use.

A collapsing model / Homogeneity

Another challenge in relation to AI is its need for more and more training data in order to improve. The issue is that generative AI will become increasingly responsible for the content being published online, which in turn is commonly used for the training of AI solutions. We will have a situation where AI solutions generate content, which is then ingested as training material by AI, leading to yet more content. As AI starts to learn from itself, and given generative AI’s tendency to move towards an average response, the AI models may weaken and fail. It’s a bit like genetics: a limited gene pool leads to limited diversity and a lack of ability to adapt to environmental change. This in turn could only deepen the issue of AI solutions lacking the diversity needed to support a diverse user base.
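To make the gene pool analogy a little more concrete, here is a minimal toy sketch in Python (purely my own illustration, not a description of how any real AI model is trained): each new “generation” learns only from the most probable outputs of the previous one, and the variety in the data steadily collapses.

```python
# Toy illustration of model collapse (an assumption-laden sketch, not a real AI pipeline):
# each generation is fitted to samples drawn from the previous generation's model,
# keeping only the most "probable" middle values, so diversity shrinks over time.
import random
import statistics

random.seed(42)

# Generation 0: "human" data with plenty of variety.
data = [random.gauss(0, 10) for _ in range(10_000)]

for generation in range(6):
    mean = statistics.fmean(data)
    spread = statistics.stdev(data)
    print(f"generation {generation}: mean={mean:6.2f}, spread={spread:5.2f}")

    # The next generation learns only from the previous model's output,
    # and the rarer, more divergent values at the edges are under-represented.
    samples = sorted(random.gauss(mean, spread) for _ in range(10_000))
    data = samples[1_000:9_000]  # keep only the middle 80% of outputs
```

Run it and the “spread” figure shrinks generation after generation; the divergent values at the edges are the first to disappear, which is exactly the concern for anyone who differs from the statistical average.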

Black box AI

The black box nature of AI solutions is yet another issue which could be considered a risk. AI solutions are generally black boxes, so we cannot see how they arrive at their final output. This means we may be blind to bias, inaccuracies and other issues which exist within an AI solution as a result of the training data it has had access to. AI platforms may be constantly reviewing student activity, and from this may categorise students in ways we don’t understand, then respond with learning content influenced by bias intrinsic to the training data. From the point of view of schools, where a student’s learning, future learning and future life are at stake, it represents a concern and a risk if we are unable to understand why a particular student was offered a certain learning path over other options. What if the AI, through bias, identifies a student as lower ability and therefore proceeds to offer only low-challenge content?

Convenience and laziness

As a benefit, AI solutions like ChatGPT can make things easier, as we can easily get a generative AI solution to draft an email or a document; it is simply more convenient, faster and requires less effort on our part. The risk is that we become lazy as a result. There is already a degree of panic about students simply using generative AI to create their coursework and homework for them. We may also become overly reliant on these solutions for our answers and less able to think for ourselves, and we may become less likely to stop and question the responses we receive. And guess what, this isn’t new either. We already see it on social media, where I recently saw a post based on an article which referenced a piece of research. Some individuals jumped on the content of the article and what it said about the findings of the research, but upon further inspection the research made no such findings. Convenience in accepting the article’s summary of the findings had overtaken proper checking of the source material to confirm the summary was correct. And with AI solutions becoming more common, and even supporting the creation of video, we likely need to be more questioning now than we have ever been in the past. But maybe there is an opportunity here if the convenience frees up time which is then used to allow us to be more questioning and critical; I suspect this is me being hopeful.

Data Protection

The DfE guidance states that schools should not be providing personal data to AI solutions. This is due to the risk in relation to AI and data protection. If the AI learns from its inputs, then it might be possible to provide prompts which surface those inputs; so if we entered student data such as exam results, it might be possible for someone else to write a prompt which would result in the AI providing them with this data, even if they have no legal right to access it. There is also a similar risk, if the prompts and data we provide form part of the overall training data, that a data breach at the AI vendor could result in the data being leaked to the dark web or otherwise used by criminals.

We also need to consider the long-term safety of our students. If an AI vendor has large amounts of data on students, is there potential for the vendor to share or use that data in a way that is not in line with our expectations as educational establishments? What if the data is sold to an advertising company to help better target students with marketing campaigns, even providing individualised targeted adverts based on data gained as they worked through AI learning content? What if the data is used by governments to target members of society who don’t fit their blueprint for the ideal citizen? I am thinking of Orwell’s 1984 here, which may be a bit of a stretch, but if we are providing student data to AI solutions, or putting students in front of AI solutions we expect them to use, how are we making sure their data is protected?

Conclusion

I have tried to avoid the “beware, AI will take over the world” and/or “AI will kill us all” message, and instead to focus on education and some of the things we need to consider in our schools. The reality is that AI is here today and will only get better; it has many potential advantages, but there are also risks and concerns we need to be conscious of. We cannot, however, be so worried that we simply sit, discuss and legislate, as by the time we have done so, AI solutions will have already moved on.

For me, we need to engage with AI solutions in schools and seek to shape their positive use, while remaining aware and conscious of the risks and challenges that exist.

Author: Gary Henderson

Gary Henderson is currently the Director of IT in an independent school in the UK. Prior to this he worked as the Head of Learning Technologies, working with public and private schools across the Middle East, including leading the planning and development of IT within a number of new schools opening in the UAE. As a trained teacher with over 15 years working in education, his experience includes UK state secondary schools, further education and higher education, as well as various international schools teaching various curricula. This has led him to present at a number of educational conferences in the UK and the Middle East.
