Dangers of AI in education

This is now the third post in my series responding to the Times article, “AI is clear and present danger to education”.  In post one I provided some general thoughts (see here), while in post two I focused on some of the potential positives associated with AI (see here); now I would like to give some thought to the potential negatives.  I may not cover all the issues identified in the article, however I hope to address the key issues as I see them.

The need for guardrails around AI

One of the challenges with technological innovation is the speed with which it progresses.  This speed, driven by companies’ desire to innovate, is such that the potential implications are often not fully explored and considered.  Did we know, for example, about the potential for social media to be used to promote fake news or to influence political viewpoints?  From a technology company’s point of view the resultant consequences may be seen as collateral damage in the bid to innovate and progress, whereas others may see this as more a case of companies seeking profit at any cost.  One look at the current situation with social media shows how we can end up with negative consequences which we may wish we could reverse.  Sadly, once the genie is out of the bottle it is difficult or near impossible to put back, and it does seem clear from social media that companies’ ability and will to police their own actions is limited.  We do, however, need to stop and remember the positives of social media, such as the ability to share information and news at a local level in real time, connectedness to friends and family irrespective of geographic limitations, leisure and entertainment value, and a number of other benefits.

So, with a negative focus, the concern here in relation to the need for AI “guardrails” sounds reasonably well founded.  However, who will provide these guardrails?  If it is government, for example, won’t this simply result in tech companies moving to those countries with fewer guardrails in place?  Companies are unlikely to want to slow down to adhere to government guardrails where this may result in them ceding advantage to their competitors.  And in a connected world it is all the more difficult to apply local restrictions, especially as it is often so easy for end users simply to bypass them.  Also, if it is government, are governments necessarily up to date, skilled and impartial enough to make the right decisions?  There is also the issue of the speed with which legislation and “guardrails” can be created; the related political processes are slow, especially when compared with the advancement of technology, so by the time any laws are close to being passed the issues they seek to address may already have evolved into something new.  To be honest, the discussion of guardrails goes beyond education and applies to all sectors which AI will impact, which is likely to be most if not all sectors of business, public services, charities and more.

Cheating

There has been much discussion of how students might make use of AI solutions to cheat, with risks to the validity of coursework being particularly notable.  There is clearly a threat here if we continue to rely on students submitting coursework which they have developed on their own over a period of time.  How do we know it is truly the student’s own work?  The only answer I can see is teacher professional judgement and questioning of students, but this approach isn’t scalable.  How can we ensure that teachers across different schools and countries question students in the same way, and make the same efforts to confirm the origin of student work?  The moderation and standardisation processes used by exam boards to check that teacher marking is consistent across schools won’t work here.  We will also need to wrestle with the question of what it means for submitted work to be the student’s “own” and “original” work.  Every year students submit assessments, more and more gets written online, and now AI adds to the mix; with this growing wealth of text, images and other content, the risk of copying, whether purposeful or accidental, continues to increase.  The recent court cases involving Ed Sheeran are, for me, an indication of this.  When writing and creating was limited to the few, plagiarism was easy to deal with, but in a world where creativity is “democratised”, as Dan Fitzpatrick has suggested will occur through the use of AI, things are not so simple.

Conclusion

The motives of tech companies for producing AI solutions may not always be in the best interests of users.  They are, after all, seeking to make money, and in the iterate-and-improve model there will be unintended consequences.  Yet the involvement of government to moderate and manage this innovation isn’t without its own consequences, including where some governments’ own motives may be questionable.

In looking at education, the scalable coursework assessment model has worked for a long period of time, but AI now casts it into question.  Was its adoption about being the right way to measure student learning and understanding, or simply the easiest method of doing so reliably at scale?

Maybe the key reason for AI being a threat is the fact that, if we accept it is unavoidable, it requires us to question and critique the approaches we have relied on for years, for decades and even for centuries.

Author: Gary Henderson

Gary Henderson is currently the Director of IT in an independent school in the UK. Prior to this he worked as the Head of Learning Technologies with public and private schools across the Middle East, including leading the planning and development of IT within a number of new schools opening in the UAE. As a trained teacher with over 15 years working in education, his experience includes UK state secondary schools, further education and higher education, as well as various international schools teaching a range of curricula. This has led him to present at a number of educational conferences in the UK and Middle East.
