
This is my second post following on from my session on AI in education at the Embracing AI event arranged by Elementary Technology and the ANME in Leeds last week. Continuing from my previous post, I once again look at the risks and challenges of AI in education rather than the benefits, although I remain very positive about the potential of AI in schools and colleges, and about the need for all schools to begin exploring and experimenting.
Homogeneity
The discussion of AI is a broad one; however, at the moment the available generative AI solutions are still rather narrow in their abilities. The arrival of multi-modal generative AI solutions is a step forward, but the solutions remain narrow, largely focussed on a statistical analysis of the training data to arrive at the most probable response, with a little randomness thrown in for good measure. As such, although the responses to a repeated prompt may differ, taken holistically they tend towards an average response, and herein lies a challenge. If the responses from generative AI tend towards an average, and we continue to make more and more use of generative AI, won't this result in content, as produced by humans using AI, regressing to the mean? And what might this mean for human diversity and creativity? To cite an example, I remember seeing on social media an email chain where an individual replied asking the sender not to use AI in future, to which the sender replied, "I didn't use AI; I'm neuro-diverse." What might increasing AI use mean for those who diverge from the average, and what does it even mean to be "average"?
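For readers curious about why outputs cluster around an "average" response, the pull towards the most probable option can be sketched in a few lines of Python. This is a toy illustration, not any real model: the token probabilities are invented, and the temperature-style sampling shown here is a simplified stand-in for what generative AI systems actually do.

```python
import random
from collections import Counter

# Hypothetical next-word probabilities from an imaginary language model.
# These numbers are illustrative only.
probs = {"average": 0.6, "typical": 0.25, "unusual": 0.1, "novel": 0.05}

def sample(probs, temperature=1.0):
    """Sample one word; a temperature below 1.0 sharpens the
    distribution, making the most probable option even more likely."""
    scaled = {word: p ** (1.0 / temperature) for word, p in probs.items()}
    total = sum(scaled.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for word, weight in scaled.items():
        cumulative += weight
        if r <= cumulative:
            return word
    return word  # guard against floating-point rounding

random.seed(0)
counts = Counter(sample(probs, temperature=0.7) for _ in range(1000))
print(counts)  # the modal word dominates the 1000 samples
```

Each individual response can differ (the "little randomness"), yet across many samples the distribution piles up on the single most probable word, which is the regression-to-the-mean effect described above.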
Originality
The issue of originality is a big one for education. The JCQ guidelines in relation to A-Levels state that "All coursework submitted for assessment must be the candidate's own work", but what does this mean in a world of generative AI? If a student has difficulty working out how to get started and therefore uses a generative AI solution to get them going, is the resultant work still their own? What about a student who develops a piece of work but then, conscious of their SEN and difficulties with language processing, asks a generative AI solution to read over the content and correct any errors, or even improve its readability; is this still the student's own work? Education in general will need to address this challenge. The fact is that we have used coursework as a proxy for evidence of learning for some time; however, we may now need to rethink this given the many generative AI solutions which are now so easily accessible. And before I move on I need to briefly mention AI and plagiarism detection tools: they simply don't work with any reliability, so in my view they shouldn't be used. I don't think there is much more that needs to be said about such tools.
Over-reliance
We humans love convenience; however, as in most, if not all, things, there is a balance to be struck, and for every advantage there is a risk or challenge. As we come to use AI more and more for its benefits, we may become over-reliant on it and therefore fail to consider the drawbacks. Consider conventional library-based research: when I was studying, pre-Google, you had to visit a library for resources, and in doing so you quite often found new sources you hadn't considered, through accidentally picking out a book or through following the reference list in one book to another, and onwards. The world of Google removed some of this, as we could now conveniently get the right resources from our search terms. Google would return lists of sources, but how many of us went beyond the first page of results? Now step in generative AI, which will not only provide references but can actually provide the answer to an assignment question. The drawback is that Google (remember, Google search uses AI) and now generative AI may result in a reduction in broader reading and an increasing reliance on the Google search or generative AI response. Possibly, over time, we might become less able, through over-use, even to identify when AI provides incorrect or incomplete information. There is a key need to find an appropriate balance in our use of AI, weighing its convenience against our reliance.
Transparency and ethics
Another issue which will likely grow in relation to AI is that of transparency and ethics. In terms of transparency, do people need to know where an AI is in use and to what extent it is used? Consider the earlier discussion of student coursework: it is clear that students should state where generative AI has been used, but what about a voice-based AI solution answering a helpline or school reception desk; does the caller need to know they are dealing with an AI rather than a human? What about the AI in a learning management platform; how can we explain the decisions made by the AI in relation to the learning path it provides a student? And if we are unable to explain how the platform directs students, and therefore unable to evidence whether it may be positively or negatively impacting them, is it ethical to use the platform at all? The ethical question itself may become a significant one, focusing not on how we can use AI but on whether we should be using it for a given purpose. The ethics of AI are likely to be a difficult issue to unpick given the general black-box nature of such solutions, although some solution providers are looking at ways to surface the inner workings of their AI to provide more transparency and help answer the ethical question. I suspect, however, that most vendors will be focussed on the how of using AI, as this drives their financial bottom line. The question of whether they should provide certain solutions, or configure AI in certain ways, will likely be confined to the future and the post-mortem resulting from where things go wrong.
Conclusion
As I said at the outset, I am very positive about the potential of AI in education and beyond, but I also believe we need to be aware of and consider the possible risks so we can innovate and explore safely and responsibly.