What does the future for schools and AI look like? (The risks and challenges)

My last post looked at the future of schools now that we have widespread use of generative AI, taking a generally positive viewpoint. This post reverses that, looking at the challenges and risks and taking a more pessimistic stance on AI in education.

Personalised Learning, for those with access

AI does have great potential in schools and in education, however I feel it will highlight the digital divide which already exists. Not all students have access to a device with which to use AI tools. Not all schools allow generative AI such as ChatGPT to be accessed, and schools have varying degrees of IT infrastructure and support. Additionally, some schools will be more forward looking and will already be talking to staff and students about AI and generative AI, while others have yet to broach or even consider the subject. As such, the ability to access, understand and use AI positively will be varied rather than uniform. AI might therefore serve to widen the digital technology gap which already exists, with those who have access to technology, infrastructure and support benefitting from AI's personalised learning, while those without languish further and further behind.

Lacking diversity

We also need to consider how AI works and the training data it has been provided with. Much of the AI development will have happened in the western world, where technology staff are still more often English speaking, male and white. This creates a bias in the resulting AI solutions, and this bias has been widely reported. Fundamentally, our current generative AI uses its training data to generate the output it creates, with this training data largely coming from the internet itself and with the process based on statistical modelling. This results in AI outputs which tend towards an average or statistically probable response based on the available training inputs. How does this then impact on those who stray from this statistically average person or response? A perfect example of this lies in an email I saw shared on social media (see on twitter here) where an individual responded indicating they would prefer a human rather than an AI generated response. It turns out the email was human written, and the original sender proceeded to explain, “I’m just Autistic”.

What might the broader impact of AI, trained to output based on an average person or response, be on those who are neuro-diverse and therefore differ from the average or norm? I will admit this issue isn't new and can be seen in the Google search engine's AI-based ranking of results. It tends towards the common answers, those which are in all probability most likely to be favoured by users in general. The less common or more divergent answers, opinions and views are buried further down the search results, likely on later pages where few people ever go. But remember, your search engine provides a list of links to look at, whereas generative AI tends to provide but a single response to consider in the first instance. So the issue may be more evident where generative AI is in use.
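To make the point concrete, here is a toy sketch in Python, with entirely invented probabilities; it is not how any real search engine or language model works internally. It simply illustrates that returning only the single most statistically probable answer hides the long tail in a way a ranked list of links does not.

```python
import random

# Toy next-"answer" distribution, standing in for a model trained on web text.
# The answers and probabilities below are invented purely for illustration.
answer_probs = {
    "the common answer": 0.55,
    "a less common answer": 0.30,
    "a divergent answer": 0.15,
}

def search_engine_style(probs, k=3):
    """Return a ranked list of k answers - the user still sees the long tail."""
    return sorted(probs, key=probs.get, reverse=True)[:k]

def generative_ai_style(probs):
    """Return only the single most probable answer - the long tail is hidden."""
    return max(probs, key=probs.get)

def generative_ai_sampled(probs):
    """Even with sampling, divergent answers appear only as often as their probability."""
    return random.choices(list(probs), weights=probs.values(), k=1)[0]

print(search_engine_style(answer_probs))   # all three answers, ranked
print(generative_ai_style(answer_probs))   # only "the common answer"
print(generative_ai_sampled(answer_probs)) # usually "the common answer"
```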

A collapsing model / Homogeneity

Another challenge in relation to AI is its need for more and more training data in order to improve. The issue is that generative AI will become increasingly responsible for the content being published online, which in turn is commonly used for the training of AI solutions. We will have a situation where AI solutions generate content, which is then ingested as training material by AI, leading to yet more content. And as AI starts to learn from itself, and given generative AI's tendency to move towards an average response, the AI models may weaken and fail. It's a bit like genetics, where a limited gene pool leads to limited diversity and a lack of ability to adapt to environmental change. This in turn could only deepen the issue of AI solutions lacking the diversity needed to support a diverse user base.
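A very rough simulation of this feedback loop, under invented assumptions (content "diversity" reduced to a single number, and published output skewed towards the most average pieces), might look like the sketch below. The numbers mean nothing in themselves; the point is simply that diversity shrinks generation after generation once models learn mainly from their own output.

```python
import random
import statistics

# Toy illustration of "model collapse": each generation is trained only on
# content produced by the previous generation, and generators favour the
# most typical (closest-to-average) outputs. All figures are invented.

random.seed(42)

mean, stdev = 0.0, 1.0            # generation 0: diverse, human-made content
for generation in range(1, 11):
    # The "model" generates content based on what it has learned...
    content = [random.gauss(mean, stdev) for _ in range(1000)]
    # ...but published output skews toward the most probable, average-looking pieces.
    content.sort(key=lambda x: abs(x - mean))
    published = content[:800]      # the long tail is under-represented online
    # The next model is then trained on that published content.
    mean = statistics.fmean(published)
    stdev = statistics.stdev(published)
    print(f"generation {generation}: diversity (std dev) = {stdev:.3f}")
```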

Black box AI

The black box nature of AI solutions is yet another issue which could be considered a risk. AI solutions are generally black boxes, so we cannot see how they arrive at their final output. This means that we may be blind to bias, inaccuracies and other issues which exist within an AI solution as a result of the training data it has had access to. AI platforms may be constantly reviewing student activity, and from this may categorise students in ways we don't understand, then respond with learning content influenced by bias intrinsic to the training data. From the point of view of schools, where a student's learning, future learning and future life are at stake, it represents a concern and a risk if we are unable to understand why a particular student was provided a certain learning path over other options. What if the AI, through bias, identifies a student as lower ability and therefore proceeds to offer low challenge content?

Convenience and laziness

As a benefit, AI solutions like ChatGPT can make things easier, as we can easily get a generative AI solution to draft an email or a document; it is simply more convenient, faster and requires less effort on our part, but the risk is that we become lazy as a result. There is already a bit of a panic about students simply using generative AI to create their coursework and homework for them. We may also become overly reliant on these solutions for our answers and less able to think for ourselves, and we may become less likely to stop and question the responses we receive. And guess what, this isn't new either. We already see this in social media, where I recently saw a post based on an article which referenced a piece of research. On social media some individuals jumped on the content of the article and what it said about the findings of the research, but upon further inspection the research made no such findings. Convenience in accepting the article's summary of the findings had overtaken proper checking of the source material to confirm the summary was correct. And with AI solutions becoming more common, and even supporting the creation of video, we likely need to be more questioning now than we have ever been in the past. But maybe there is an opportunity here if the convenience frees up time, which is then used to allow us to be more questioning and critical; I suspect this is me being hopeful.

Data Protection

The DfE guidance states that schools should not be providing personal data to AI solutions. This is due to the risk in relation to AI and data protection. If the AI learns from its inputs, then it might be possible to provide prompts which surface these inputs; so if we entered student data such as exam results, it might be possible for someone else to write a prompt which would result in the AI providing them with this data, even if they have no legal right to access it. There is also a similar risk, if the prompts and data we provide form part of the overall training data, that a data breach at the AI vendor would result in the data being leaked to the dark web or otherwise used by criminals.

We also need to consider the long term safety of our students. If an AI vendor has large amounts of data on students, is there potential for the vendor to share or use the data in a way that is not in line with our expectations as educational establishments? What if the data is sold to an advertising company to help better target students with marketing campaigns, even providing individualised targeted adverts based on data gained as they worked through AI learning content? What if the data is used by governments to target members of society that don't fit their blueprint for the ideal citizen? I am thinking about Orwell's 1984 here, which may be a bit of a stretch, but if we are providing student data to AI solutions, or putting students in front of AI solutions we expect them to use, how are we making sure their data is protected?

Conclusion

I have tried to avoid the “beware, AI will take over the world” and “AI will kill us all” messages, and to focus instead on education and some of the things we need to consider in our schools. The reality is that AI is here today and will only get better; it has many potential advantages, however there are also risks and concerns we need to be conscious of. We cannot, however, be so worried that we simply sit, discuss and legislate, as by the time we have done this, AI solutions will have already moved on.

For me, we need to engage with AI solutions in schools and seek to shape their positive use, while being aware and conscious of the risks and challenges that exist.

What does the future for schools and AI look like?

I have previously written about the future and cyber security for schools, so I thought it might be equally useful to consider Artificial Intelligence (AI) and schools and what the future might look like given we now have all of these generative AI tools available at our fingertips and the fingertips of our students.

Personalised Learning (for students and staff)

This for me is the key advantage: an AI solution, gathering data on a student's every interaction with online learning content, can then provide individualised feedback to that student. The current classroom model of one teacher and a number of students means that each student only gets a fraction of the available teacher time, no matter what strategies are employed. But with an AI solution, each student would get the full attention of their own online AI based tutor. Khan Academy's Khanmigo gives a taste of what this might look like. The likelihood is this will first impact the core subjects such as Maths, English and Science, the subjects for which there are already a large number of learning platforms with inbuilt content available, albeit without the AI personal tutor element. After this I suspect we will see its growth into other subject areas, although at a slower rate.

And why should this personalised experience be limited to students? Couldn't it also provide personalised professional development content, curate research materials on pedagogy based on your interests, link you up online with colleagues in other schools for support and ideas, and provide regularly updated recommendations to help your professional development journey?

A personalised learning experience may also free up some time within the curriculum, and free up teachers, to focus on the things which AI is not able to address. This might allow for more discussion of the impact of technology on our lives, looking at digital citizenship. It might provide time to consider and discuss human characteristics and issues such as wellbeing, mental health, equality and diversity, and resilience, to name but a few.

Personalised AI based learning will also enable real time feedback to parents, giving a much more detailed and regularly updated report on a student's progress, their strengths and their areas for development. In turn this will help with teacher workload, as it will reduce the need for the regular writing of reports to be sent home, thereby freeing teachers up to focus on teaching and learning.

Personalised Learning for students with special educational needs

Linked to the above is the potential for students with special educational needs to receive additional AI based support. By looking at the data associated with students' interactions with a learning platform, an AI solution might be able to highlight possible learning needs at an earlier stage than a teacher may be able to; this is simply due to the AI's ability to focus on each individual student plus the wide variety of data it would have access to. Upon identification, an AI platform might then be able to provide appropriate advice, guidance and additional support in line with the student's needs. And this would be available to every student. This is one significant advantage of AI within education: the ability to scale up personalised 1:1 learning content and support.

Creativity

We have long talked about creativity; I remember it being one of the 4 C's, although there are now more than 4 C's. The issue has, in my experience, been the difficulty in convincing students of their creativity. They might have ideas but find it difficult to translate these into the reality of a piece of written text, an image or graphic, an animation, video or some other output. Through AI tools the power of creativity is now easily in every student's hands. Not sure what to include in a script? ChatGPT can help. Need an image but not that good with artwork? Midjourney can help. And the same goes for video content, audio, music, programming code and many other areas. Through the use of AI tools every student can exercise the creativity of their imagination. As I heard Dan Fitzpatrick describe it, AI will “democratise creativity”.

Questioning

I think one area which AI will help us build, in relation to education, is the art of questioning. Generative AI by its very nature requires questioning or prompting. AI outputs also allow for the creation of realistic images, audio and video, which therefore requires us to more often question what we read, see or hear, especially when accessed via social media and the internet. I note that our conventional media is equally guilty of simplistic reporting and presenting a biased picture; it is just that social media does it with a greater volume of content, 24/7, unlike the scheduled national news broadcasts and daily newspaper print runs. Questioning, being inquisitive and constructively critical, debating, and navigating complex and confusing problems will become increasingly important, therefore schools will need to spend more time working on this with their students.

What I haven't mentioned

There are a couple of AI benefits which I haven't mentioned above, largely because they are already here and have been for some time, however in the interest of completeness I will mention them briefly. Firstly, tools to help students where English is not their primary language. There are already tools to help with translation of text, such as Google Translate, and also with translation of spoken content, such as displaying subtitles in a student's native language as a teacher works through a PowerPoint slide deck.

Another area is grading and marking. We have long had tools to allow the automatic marking of multiple-choice tests, however increasingly we have seen the ability to mark written responses against marking criteria or a rubric. This will only continue, with further opportunities for AI based automation being identified on an ongoing basis to help teachers and students.
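As a very crude illustration of the rubric idea, the sketch below checks a written answer against a set of criteria using simple keyword matching. The rubric, keywords and sample answer are all invented, and real AI marking tools are of course far more sophisticated than this.

```python
# A minimal, hypothetical sketch of rubric-style marking using keyword checks.
# The rubric and sample answer below are invented for illustration only.

rubric = {
    "mentions photosynthesis": ["photosynthesis"],
    "identifies light as the energy source": ["light", "sunlight"],
    "names a product of the process": ["glucose", "oxygen"],
}

def mark_response(response, rubric):
    """Award one mark per criterion where any of its keywords appear in the answer."""
    text = response.lower()
    detail = {
        criterion: any(keyword in text for keyword in keywords)
        for criterion, keywords in rubric.items()
    }
    return {"marks": sum(detail.values()), "out_of": len(rubric), "detail": detail}

student_answer = "Plants use sunlight during photosynthesis to produce glucose."
print(mark_response(student_answer, rubric))
# e.g. {'marks': 3, 'out_of': 3, 'detail': {...}}
```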

Conclusion

There are likely many more ways AI will impact on education, especially if you start to look beyond the 5 to 10 year mark and consider more general AI as opposed to the narrower AI and generative AI we have now. I don't feel confident enough to even propose what education might look like at that point, as it may bear little resemblance to what we see as education today. That said, education also tends to be slow to change, and any significant change would require everyone to get onboard: students, parents, teachers, schools, government, inspection regimes, exam boards, employers and many more. As such I suspect there may be an amount of “kicking and screaming” as educational change comes to pass. But with two such significant catalysts for change in the pandemic and now the sudden ease of access to generative AI, I feel that change is all but inevitable.

There are some pitfalls and challenges in relation to AI and education, however I will pick these up in my next post. For now, though, let's conclude on the fact that AI is here now and will only get better. We, those in schools, need to shape these solutions and their use while campaigning for appropriate guidance, frameworks and regulation of what, how and when AI should be used in schools. We cannot, however, wait for the regulation to arrive, as by the time it does the technology will already have moved on.

Ethical AI

I have seen lots of posts talking about the “ethical” use of AI, and I have shared a few short posts on it myself, however I feel it is about time for a slightly longer post, having now had a bit longer to think about the subject.

Ethics

Firstly, let's take a definition of ethical:

“Ethical comes from the Greek ethos “moral character” and describes a person or behavior as right in the moral sense – truthful, fair, and honest. Sometimes the word is used for people who follow the moral standards of their profession”

Vocabulary.com (as at 14/07/2023)

So for something to be ethical it needs to be truthful, fair and honest. At first glance this all seems pretty simple: who would want things to be false, unfair or dishonest? Well, the first thing to accept is that we will have some bad actors using AI who are not bothered about ethics, simply looking for money, to cause disruption and so on; cyber criminals using AI tools, for example. But we will get to these people in a later post, just to keep things simpler for now. Let's assume for a minute that everyone wants to be ethical and to behave appropriately.

Ethics according to whom

This is where we hit our first big problem, in my eyes. Let's take truthfulness in the first instance. In some domains this might be simple to achieve, identifying what is demonstrably true (e.g. basic maths), however in our ever more complex and nuanced world it is becoming increasingly difficult to show what is true. I knew the truths that there were 9 planets in the solar system and that there was research underpinning learning styles (VAK) theory, however in both cases these truths have since proven to be incorrect. The truth changed. And that's before we get to politics, religion and football, where everyone's own truth may be slightly different, with everyone able to provide the evidence which, for them, supports their own truth. So what is the truth, and who will be responsible for defining it? And how will an AI scraping the internet for training data be able to tell the difference between yesterday's truth and today's new truth, or my truth and your conflicting truth? Fairness and honesty suffer similar issues.

But this isn't new

The above isn't, however, limited to technology and AI; it is a societal issue. When having a discussion, when presenting or sharing ideas, when purchasing rather than stealing, we need to make decisions based on what is ethical, right, truthful, fair and honest, even where there might be differing views on what is right, truthful and fair. How do we deal with this in our day to day lives? Surely we must deal with it, as otherwise life would grind to a halt.

The answer, in my opinion, is that we have laws that govern us, plus we also have our own moral compass. We have defamation, equality and discrimination law, for example. But we also have our own moral compass built from our experience and upbringing; my mother often told me to “do unto others as you would expect others to do unto you”, for example. For most, this seems to work.

A world of AI and ethics

In a world of generative AI we still have the same laws regarding defamation and the like, so these seem to provide the same guidance in relation to ethics as they provide to our wider existence in society, and we also have the same old inner compass. I, as a user of AI, should take care that content I share is truthful, fair and honest. The vendor providing the AI tool should do what they can to ensure that their tool is as truthful, fair and honest as possible, though we need to accept this is difficult given changing truths and differing views on fairness and honesty; so in the face of these difficulties we fall back on the law and our inner ethical compass.

I suspect the key issue here relates to the vendors of AI solutions and how they make sure their platforms are fair, truthful and honest in the face of use across national borders, with differing beliefs on fairness and differing laws, and in the face of content being posted online which may be inaccurate by accident or through purposeful malicious intent. But this challenge is no different from the very same challenge already faced with social media.

The second challenge is the interface between AI and the user.    If a user uses an AI solution to create an image and submits it to a competition, where the image is identified to have clear commonalities with a piece of copyrighted work, who is responsible for the copyright infringement?   Is it the AI vendor for including the content in their training data or is it the user for presenting the content as original?    I suspect I will post a bit further on this discussion shortly, however I previously talked briefly about it in my article in EdExec magazine here.

Conclusion

The above two issues are the main ones I see in relation to AI and ethics; other than this I am not sure what it is we are discussing. And even here, is it about ethics at all, or simply about establishing who is responsible for content coming out of generative AI solutions? How responsible are the owners of the AI solution, and what responsibility does an AI user have for the content they create and use?

Ethics sounds like a good discussion point: let's talk ethics and AI. But does it mean much? It's too broad, much as the discussion of AI itself is a little on the broad side. If discussing ethics and AI, why not discuss technology and ethics, or even just ethics itself?

I think we need to stop talking about ethics in general, in relation to AI, and get to the specifics of our concerns. Is it concern about bias, error, misuse, etc? And in what way is this concerning? Is it that bias in training data sets will lead to homogenised responses which would therefore be unfair and discriminatory towards minority groups? And if this is the issue, how is it addressed through the framework of current discrimination law, and what shortfalls exist in a world of generative AI?

Let's drop the “ethics” soundbite in our discussion of AI. Of course we want AI use to be fair, truthful and honest, much as we want everything else in life to be fair, truthful and honest, but what specifically does this mean, and how can we adjust the current frameworks in relation to fairness, truthfulness and honesty to encompass AI?

A [broad] review of 2022/23

And so, as the 2022/23 academic year draws to a close, I thought I would share some brief thoughts and reflections on the completion of yet another year. This is the first of two posts I will share in terms of reflection, this one focussing on broad reflections, while the next post will focus much more on some of my more personal reflections.

So what are my main takeaways from the last academic year?

Technology

This year I managed to get around and visit a couple of different subjects and see how others were using technology in their lessons. It was great to see how technology was embedded and almost transparent in its use, but also heartening to see teachers deciding to use non-tech solutions in their lessons where these better met the needs of the students and the lesson content. Technology is a brilliant tool, but equally we need to reach a balance in its use and be willing to use or not use technology as appropriate. My sense post pandemic was that there was a real danger of a rubber band effect, with technology usage quickly regressing to pre-pandemic levels, however this does not quite seem to have been the case. That being said, looking outside of schools to exam boards and other centralised education functions, they have moved very little, and I suspect this will be the most significant challenge for education; schools themselves are able to move forward and progress in their use of technology, however for education to move as a whole will require bodies such as exam boards, government education departments, inspection regimes and others to move forward with their adoption of technology. It will also require them to ensure that their staff, including those who visit schools to carry out moderation or inspection, or to provide support or consultancy, all have a reasonable level of technology skills. My experience to date suggests both the technology adoption and the technology skills are currently lacking.

Cyber

Linked to the above, I have seen exam bodies providing software for use in educational establishments where the software required local admin privileges. I have also seen hardware sold where the operating software only supported an outdated version of a networking protocol rather than the newer, more secure version. No update was available, with the only solution the vendor could suggest being to purchase their newer, more advanced and, unsurprisingly, more expensive hardware option. In order for schools to be better protected against the increasing cyber threats of the world we live in, we need to ensure we do the basics, which includes limiting the permissions provided and using “least privilege” as standard, as well as ensuring updates are available and that the newest protocols and standards are used. Those organisations and companies providing software and hardware to schools need to ensure that cyber security is baked into their solutions by design, and where it is not, these solutions should simply be prevented from use in our schools and colleges. Until we address the issue of EdTech software and hardware being designed with security in mind, both in terms of current issues and in terms of future risks and issues across the anticipated lifespan of the solution, education will continue to be an easy target.

Generative AI

Generative AI has really hit the news, particularly in the last 6 months. I have already written a fair amount on generative AI, however my main takeaway from the year is that generative AI is here and will only get better. As has been said a few times, the current AI solutions available are the worst they will ever be. We therefore need to shape their use by experimenting and identifying how they can help teachers, students and the wider school community. Equally, we must ensure that those using AI understand the risks and implications of its use. This can be done in a pragmatic way focussed on the present, however we must also look to the future and how AI might significantly change the world of education. Will teaching and learning look the same as they do now in 5 or 10 years' time? What about assessment and exams? Will changes finally allow greater time to work with students on digital citizenship along with health and wellbeing in a digital world? There are lots of questions we can now seek to ask as we explore the art of the possible in a world where generative AI is available. These are interesting times.

The negative world

Reflecting back on the academic year and on the wider world, there has been a lot of press in relation to the things which aren't working as we wish they would, some of which have had an incalculable impact on those involved, including loss of life. The cost of living crisis, the war in Ukraine, the fuel cost crisis and many other negative events have flooded our TV news and our social media. My concern here is that these negative events might blot out any of the good that may have been achieved. The availability bias might come into play, as all that comes easily to mind is negative, leading to increasing issues with mental health. I worry that the news, including TV, social media, etc, not only reports events but also shapes future events, and if this is the case, and the current news is more often negative than positive, what might the net result be for the future? How do we achieve balance in the world, in our countries, our towns, our families and in our own lives where the prevailing message is that things are getting worse? It was, however, heartening to hear Ty Goddard, speaking at the LGfL conference, talk of the need for hope, faith and leadership. How do we lead people, including staff and students, through this period where things feel so bleak, and how do we seek to cultivate the hope and faith that may be so important going forward?

Conclusion

The summer period is an opportunity for many to reset and recharge.  For myself and my team, it is an opportunity to get lots of the technology related upgrades, updates and other development work done while things are a little quieter;  I note we have summer holiday courses here throughout summer so things are never fully quiet.    It is also a key milestone and opportunity to reflect and also to look forwards and plan for the next academic year.     As I think about the implications of AI, I think the end of the academic year also represents an opportunity for us to look inwards;   Looking back and looking forwards in time is useful, but sometimes we need to be introspective and look at ourselves as humans, as individuals, considering health, wellbeing, resilience, etc.    We are more than a list of achievements, struggles or a list of plans and targets.

For me the year has had its ongoing challenges, such as cyber risk and the generally negative context of the world, with these being an ongoing grind. It has also had its positives in seeing the work of my team, in supporting technology use, being realised in classrooms, with teachers confidently using technology and with the technology being almost transparent in its use. And the year has also seen AI gain prominence, providing such potential and opportunity for the future. It has been another busy, challenging but also rewarding year.

So 2022/23 has ended.  Now we wait and prepare for 2023/24!

If… The ISA/ISC Conference

I was involved in the ISA/ISC Digital Conference a couple of weeks back and thought I would share some of my headline thoughts following the event. I note it was a busy day, which included leading a couple of the sessions myself, so the below represent the standout points based on the sessions I managed to see.

There were three main themes which stood out for me:

  1. Challenge the status quo and overly simplistic language and imagery
  2. Awareness of reductive reasoning and binary arguments
  3. A focus on humanity and wellbeing in a world of increasing technology use.

Challenge the status quo

There is a clear need to challenge the status quo, and particularly some of the language and visuals used when discussing technology. Laura Knight did a good job of drawing attention to this, and I will admit I have been as guilty as many others of using some of the loose, simplistic language and visuals to which she referred: the picture of the smiling student wearing a VR headset, the use of The Matrix style graphics in relation to AI (note: definitely guilty of this!) and the discussion of the printing press as an analogy for the change now presented by AI, among others. All simplistic and not really representative of the situation we now find ourselves in due to AI. These simplistic images, and equally simplistic language, only go to strengthen false perceptions about technology. One of my favourites is the use of the dark hooded, and normally male, figure when discussing cyber crime; not exactly representative of the organised cyber crime we see today. We need to do better.

Reductive reasoning and binary arguments       

I have long argued against binary arguments and how the world is seldom simple enough to be modelled with a black and white, good and bad, right and wrong argument. The world is inherently messy and operates in many shades of grey, existing between the black and white of any two extreme positions. The issue here is that extreme positions are suited to the world of social media, where the content has to be short and anything which stimulates a response is a good thing. The message and the medium are entangled. This is something we need to be aware of, especially as it continues to encourage echo chambers and division rather than the critical discussion and reasoning we really need.

As to reductive reasoning, I get that we often want to simplify things for people. This might be through presenting a simple model, or a “50 ways” or “5 ways” list, such as 50 prompts for use of AI or 5 cyber security basics. And again, I am guilty of this, having created a framework of basics in relation to cyber security. But again the world is seldom that simple, and although the model or list makes things easier for the reader or audience, it all too often oversimplifies the issue being discussed. Cyber security is more nuanced than 5 basic mitigation measures, and AI prompt craft is way more nuanced than 50 prompts. The challenge here is the balance between convenience and ease of use on the one hand, and mirroring the complexity of the world we live in on the other. Too basic makes things seem too easy and therefore not representative of our world, while trying to fully model the real world will likely result in a model too complex for people to understand or to be useful. The balance lies somewhere in between these points.

Wellbeing and Humanity

As technology plays an increasing part in our lives, it is likely that wellbeing will become more important. Being human, and the traits of being human, will become more important. We will need to consider how we support wellbeing in the face of the tsunami of digital content, both positive and negative; how we will manage the uncertainty of cyber risk; and how we will best use AI solutions which could provide support and counselling but might also be designed to influence, manipulate and deceive. The question of what it means to be human, and how human intelligence differs from synthetic intelligence, will become all the more important. We need to make use of AI to do the tasks which it is good at, while identifying what human intelligence is better at and suited for. Ethical use of AI also plays a part here, as we will want to know when we are dealing with an AI rather than a human, and whether we are happy for AI to play a part in key life decisions such as those related to health and finance. In the face of upsetting online content, are we happy with AI making the decisions to filter content on the grounds of wellbeing, where we know this might lead to bias in the content we are provided, or where such a solution might be controlled by an outside agency to their own ends?

Conclusion

The conference's title was “If”, and this is rather apt, as I feel we find ourselves at a junction. We had the catalyst of the pandemic, which propelled us forward in relation to technology use in schools, and we are now presented with AI, potentially an even more significant catalyst. What might the future hold? The key, in my view, is that we need to consider this; we need to consider how the next 5 or more years might look, and how we might shape this future to be the positive outcome we would like, or even need, it to be. How things look in future will be directly influenced by the decisions we take now. If we do X then this may lead to Y. We need to grab hold of technology and seek to shape its use, and through doing so shape our futures.