I recently presented at a conference in relation to AI and assessment. I think this was reasonably good timing given JCQ had just released further guidance in relation to student coursework and AI plus AQA had announced they were going to use online testing as part of their exam suite in the Italian and Polish GCSEs starting from 2016. I think this is a positive step forward in both cases however I think it is important that we see this journey as more than simply replacing pencil and paper exams with a hall full of students completing the same exams but as an online/digital exam. There is significant potential here to ask ourselves what are we seeking to assess, why are we seeking to assess it and how are we best to assess?
The SAMR model
The SAMR model is useful when looking at technology change programmes. The first element of SAMR is that of simple substitution, similar to the example I gave above in the introduction. The concern for me is that this might be the goal being aimed at where technology and AI present such significant potential beyond mere substitution, and where the world has moved at a fast technologically drive pace, yet our education system has changed little, and our key assessment methodologies, of terminal coursework and exams have barely changed at all.
In looking to progress beyond substitution it might be useful to unpick some of the limitations of the current system. For this purpose I am going to focus purely on terminal exams given they are such a significant part of the current formal education system in the UK. So what are the limitations of the currently accepted system?
Logistics
One of the key drawbacks in the current system, as I see it, is the massive logistical challenge it presents. Students have to be filed into exams halls across the country and the world all at the same time, to complete exam papers which have been securely delivered to exam centres. Its quite an undertaking and even more so when you consider trying to keep the papers and questions secure. In a world of technology where content can quickly and easily be shared it doesn’t take much before questions are out in the open ahead of the exam, advantaging those who have seen the information when compared with those who have missed it. Then you have the issue of gathering all the completed papers up, sharing them with assessors to mark, quality assurance of marking and then eventual release of results to students some months later. This is a world where technology supports the sharing of information, written, audio, video and more instantly. Why cant the exams process be quicker and more streamlined, making use of technology to achieve this?
Diversity
Another key drawback has to be that of diversity. We, more than ever, identify the individual differences which exist in us all. Discussion of neurodiversity is common at the moment but despite this we still file all students into a hall to complete the same exam paper. Now there are exam concessions which can be provided to students but this barely scratches the surface in my opinion. Where is the valuing of diversity in all of this?
Methodology
We also need to acknowledge that the current exams system very much values those students who are able to memorise facts, processes, etc. Memorisation is so key to exams success however out in the real world we have access to ChatGPT and Google to find the information we need when we need it, with the key then being how we then interpret, validate and apply this information to the challenges or work in front of us. Shouldn’t the assessment methodology align with the requirements of the world we live in? Now I will acknowledge the important of key foundational knowledge so I not suggesting we stop teaching any basic knowledge, but knowledge and memorisation should be less of a focus than it is now.
Conclusion
I believe technology could address a lot of the drawbacks listed above. Now I note the use of technology will present its own challenges but how often do we find the “perfect” solution? Wouldn’t a solution which is easier for schools to administer, is quicker and more efficient, is more student centred and more in line with the world we now live in be a good thing?
Following on from my last post looking at AI and assessment (see here) where I focussed very much on the high stakes world of terminal exams and coursework, I would now like to look towards formative assessment and the learning process. As with my last post, this post aims at sharing some of the points I made at a recent conference where I spoke on AI and Assessment, presenting some questions which I believe we need to increasingly consider in a world of AI and generative AI solutions.
AI Supported Learning
Learning platforms and computer based learning have existed for some time. And they havent and dont look like the image here. I remember having to do some Maths learning during my teaching degree using a computer based learning platform and that was in the mid to late 90’s. At the time I wasn’t that fond of these learning platforms and this feeling stayed with me. My issue was that the platforms although offering differing routes through the broad content, were largely linear in their offering in relation to each topic or even the smaller units of learning. This couldn’t compare to a teacher delivering content where they could see students struggling and then instantly seek to adjust the learning content accordingly.
We have came a long way from there, with AI and generative AI now able to provide us with far superior learning platforms with my sense being that these platforms tend to break into two types, one where the AI is analysing usage and interaction data to direction learning content creators and one, the more recent and emerging type, where generative AI provides an AI based support, teaching or coaching agent.
In the model where the platform analyses usage and interaction data the key benefit is that this data is gathered from all users looking for those common patterns or anomalies, looking at issues such as general, language, nationality, and a variety of other factors to find which learning content works and which does not. This allows creation of effective learning content based on a huge amount of data across many schools and many learners, far beyond the data that a teacher may have at their hands. As such the content in these platforms progressively improves over time and based on data rather than intuition or other less tangible factors, which may be wrong, which a teacher may rely on.
Where generative AI is used students get a chat bot which prompts and support students as they work through the learning content, with the AI trying to mirror the supportive and coaching role of a teacher, but individualised for each student and available any time, anywhere assuming access to a device and internet connection. I feel it is here that there is the greatest potential especially in relation to more fundamental skills and knowledge development, freeing up teachers to focus on more advanced concepts and also on wider issues such as resiliency, leadership, interpersonal skills, wellbeing, etc. I note recently reading a post about a school which uses AI where they don’t have “teachers” instead having “guides”. I suspect this sounds more radical that it is in practice especially the reported comment by the co-founder that “we don’t have teachers”. My view is that AI learning platforms wont replace teachers, however through the use of AI learning platforms working with teachers we may be able to achieve more and quicker with our students. I suspect the school is more akin to this partnership that the report would suggest however have no first hand experience of the school so cannot be sure.
Challenges
AI as a tool to assist and maybe guide and deliver learning delivers a number of benefits however I think it is important to acknowledge some of the challenges and risks. We may not have a solution at this point however at the very least we need to be aware.
Bias is a clear challenge and something which has been widely reported in relation to AI. In my session I asked a generative AI solution for a picture of a nurse and a picture of a doctor which the solution returning images where the doctor images were all of males and the nurse images all of females, and where all the images where of white people. This experiment clearly shows bias however the challenge in AI powered learning platforms is that the bias may not be so easily visible. What if the platform decides based on statistics that students from particular area, nation, gender, preference, age or other characteristic do generally worse than average. The platform may then present them content it believes to be appropriate to this ability level, in doing so impacting their ability to achieve, the challenge they receive, and possibly causing a self-fulfilling prophecy. And when a parent asks regarding a students learning path, is it ethical to use learning platforms if the use of a learning platform means we may not be able to explain the decisions taken in the child’s learning experience and journey, where these decisions were taken by AI?
Data is another challenge we need to consider here in the possible huge and growing wealth of data learning platforms might gather in relation to students. This isnt just the data a school might provide such as name, email and age, but the data produced through each and every interaction with the platform, plus the data gathered as diagnostic data such as the device being used, IP address, etc. And then there is the data a platform might be able to infer from the data gathered; Could an IP address, which suggests a rough geographic location, a device type and internet speed allow you to infer the wealth of a user or users family? I suspect it could. Now consider the massive amount of data gathered over time, across different curriculum subjects and each use of the platform; The potential for inference grows with each additional data point. How do we manage the risks here in relation to data protection, cyber risk and also accidental or purposeful mis-use of the data? If we are to use AI assisted learning solutions I think we need to ensure we have considered how we might do this safely.
Conclusion
Educations has had its challenges for some time including teacher recruitment, teacher workload and wellbeing, and equity of access to education. Maybe AI can help with some of this and maybe AI risks making things worse in some areas; It is difficult to tell, although the one thing we can tell is that AI is here and here to stay so I think we need to make the most of it and shape its use to be as positive and powerful as it potentially can be. A difficulty here however is the slow pace with which education changes (little has changed in almost 100yrs!). Now the pandemic did cause some change in my view, but some of that has rubber banded back to pre-covid setups. The question now is, is AI the next catalyst for education change, will it impact education as much or more than the pandemic and will its impact be persistent beyond the initial “shiny new thing” period. Only time will tell although my sense is there is potential for AI to answer in the affirmative to all three questions.
Darren White posted an interesting question on twitter the other day in relation to the standards we hold AI to. Should AI be held to higher standards than humans? This is something I have been given some thought to due to both having an interest in human heuristics and bias, plus an interest in artificial intelligence.
Discussions on AI
There is already a lot of discussion regarding issues and challenges related to AI including discussion of bias and inaccuracy or “hallucinations”. I myself have been able to recreate these two issues reasonably easily within generative AI solutions, firstly asking an image generation solution to create a picture of a nurse in a hospital setting and then a doctor in a hospital setting. In this case the images were all of white individuals with the nurses all female and the doctors all male. The evidence of bias was clear to see. And in a separate experiment with a tool to help with report writing, the developer forgot to provide any data in relation to the fictitious student for which a report was being created but the tool simply made the report content up. These issues are therefore clear to see and it is easy to jump to a standpoint where bias needs to be removed and inaccuracies or hallucinations stopped.
A human view
One of the issues here is that I believe we need to take a cold hard look at ourselves, at human beings and how we might respond to prompts if such prompts were direct at us rather than an AI. Would we fair so much better than and AI? I have a lovely poster in my office in relation to the cognitive biases which impact on human decision making and there has been plenty written about this and heuristics, with Daniel Kahneman’s book, Thinking, fast and slow, being one of my favourites. A key issue here is that we are often not aware of the internal or “fast” bias which impacts on us and therefore may assess our biased decisions as being absent of bias. In terms of hallucinations, again we humans suffer the same issue often stating facts based on memory, and holding to these facts even when presented with contradictory evidence. The availability and confirmation biases may be at play here. Another challenge when comparing with AI is that our biases and hallucinations are not clear for us to see, albeit they may be clear to others, yet with AI bias and hallucinations, at least in the form of those raised as examples, it is clear for all to see.
End point?
I would suggest that in both AI and in human intelligence our ideal would be to remove bias and inaccuracy. I would also suggest although this is a laudable aim it is also impossible. As such, rather than focussing on the end we need to focus on the journey and how we might reduce the bias and reduce the inaccuracies both in humans and in AI. It may be that in reducing bias in humans this may benefit AI, however it may also be possible that things work the other way and discoveries to help reduce bias in AI may help with bias in humans. I note that a lot of human thinking, especially our fast thinking, can be reduced to heuristics or “generalisations” or “rules of thumb”; How is this much different to the quick processing of an generative AI solution? Does generative AIs probabilistic nature not tend towards quick creation of generalisations but based on huge data sets?
The future
So far, I have avoided getting pulled into the future and artificial general intelligence and I mention it for completeness only. This will likely arrive in the future and most who claim to be AI experts seem to agree with this however there is much disagreement as to the when. As such our immediate challenge is that of the generative AI we have now and its advancement over the creation of an AI solution capable of more generally out thinking us across different domains; That said I would suggest that in a number of ways generative AI can already out perform us across many domains.
Conclusion
So back to the question in hand and whether we should seek to hold AI up to higher standards? We should seek to avoid outcomes which have a negative impact on humankind so bias and inaccuracy and also the other challenges in relation to intelligence, such as equality of access to education, etc, are all things we should seek to reduce. This I think is a common aim and can be applied to humans and AI. In terms of the accepted standard, I think it is currently difficult to hold AI up to a higher standard than we hold humanity given the solutions are created by humans, trained on human supplied data and used by humans. It may be that in AI solutions you get a glimpse of how entrenched some of our human biases actually are. That said I also think it might be easier to remove bias and inaccuracies in an AI solution as compared to doing the same with a human; I doubt the AI will seek to hold onto its position or to counter argue a view point, at least not yet.
I recently spoke at an AI event for secondary schools in which one of the topics I spoke on related to AI and its impact on Assessment. As such I thought I would share some of my thoughts, with this being the first of two blogs on the first of the sessions I delivered..
Exams
Exams, in the form of terminal GCSE and A-Level exams still form a fairly large part of our focus in schools. We might talk about curriculum content and learning but at the end of the day, for students in Years 10,11, lower 6 and upper 6 the key thing is preparing them for their terminal exams as the results from these exams will determine the options available to students in the next stage of their educational journey. The issue though is that these terminal exams have changed little. I provided a photo of an exam being taken by students in 1940 and a similar exam in recent terms and there is little difference, other than one photo being black and white and the other being colour, between the photos. The intervening period has seen the invention of DNA sequencing, the mobile phone, the internet and social media, and more recently the public access to generative AI but in terms of education and terminal exams little has changed.
One of the big challenges in terms of exams is scalability. Any new solution needs to be scalable to exams taken in schools across the world. Paper and pencil exams, sat by students across the world at the same time accommodates for this. If we found life on Mars and wanted them to do a GCSE, we would simply need to translate the papers into Martian, stick the exams along with paper and pencils on a rocket and fire them to Mars. But just as it is the way we have done things and the most easily scalable solution doesn’t make paper and pencil exams the best solutions. But what is the alternative?
I think we need to acknowledge that a technology solution has to be introduced at some point and the key point is the scalability based on schools with differing resources. As such we need a solution which can be delivered in schools with only 1 or 2 IT labs, rather than enough PCs to accommodate 200 students being examined at once as is the case with paper based exams. So we need a solution which allows for students to sit the exams in groups, but without compromising the academic integrity of the exams where student share the questions they were presented with. The solution, in my view is that of adaptive testing as used for ALIS and MIDYIS testing by the CEM. Here students complete the test online but are presented different questions which adapt to students performance as they progress. This means the testing experience is adapted to the student, rather than being a one size fits all as with paper exams. This helps with keeping students motivated and within what CEM describe as the “learning zone”. It also means as students receive different questions they can sit the exam at different times which solves the logistical issue of access to school devices. Taken a step further it might allow for students to complete their exams when they are ready rather than on a date and time set for all students irrespective of their readiness.
AI also raises the question of our current limited pathways though education, with students doing GCSES and then A-Levels, BTecs or T-Levels and then onto university. I believe there are 60 GCSE options available however most schools will offer only a fraction of this. So what’s the alternative? Well CalTech may provide a possible solution; They require students to achieve calculus as an entry requirement yet lots of US schools don’t offer calculus possibly due to lack of staff or other reasons. CalTechs solution to this has been to allow students to evidence their mastery of calculus through completion of an online Khan Academy programme. What if we were more accepting of the online platforms as evidence of learning and subject mastery? There is also the question of the size of the courses; GCSEs and A-Levels and BTec quals are all 2 years long but why couldn’t we recognise smaller qualifications and thereby support more flexibility and personalisation in learning programmes? In working life we might complete a short online course to develop a skill or piece of knowledge on a “just-in-time” basis so why couldn’t this work for schools and formal education? The Open University already does this through micro credentials so there is evidence as to how it might work. I suspect the main challenges here are logistical in terms of managing a larger number of courses from an exam board level, plus agreeing the equality between courses; Is introductory calculus the same as digital number systems for example?
Coursework
Coursework is also a staple part of the current education system and summative assessment. Ever since Generative AI made its bit entrance in terms of public accessibility we have worried about the cheating of students in relation to homework and coursework. I suspect the challenge runs deeper as a key part of coursework is its originality or the fact that it is the students own work but what does that look like in a world of generative AI. If a student has special educational needs and struggles to get started so uses ChatGPT to help start, but then adjusts and modifies the work over a period of time based on their own learning and views, is this the students own work? And what about the student who does the work independently but then before submitting asks ChatGPT for feedback and advice, before adjusting the work and submitting; Again, is this the students own work?
There is a significant challenge in relation to originality of work and independent of AI this challenge has been growing. As the speed of new content generation, in the form of blogs, YouTube videos, TikTok, etc, has increased year on year, plus as world populations continue to increase it become all the more difficult to be individual. Consider being original in a room of 2 people compared with a room of 1000 people; The more people and the more content, the more difficult it is to create something original. So what does it really mean for a piece of work to be truly original or a students own work?
The challenge of originally and students own work relates to our choice of coursework as a proxy for learning; It isnt necessarily the best method of measuring learning but it is convenient and scalable allowing for easy standardisation and moderation to ensure equality across schools all over the world. It is easy to look at ten pieces of work and ensure they have been marked fairly and in a similar fashion; Having been a moderator myself this was part of my job visited schools and carrying out moderation of coursework in relation to IT qualifications. If however generative AI means that submitted content is no longer suitable to show student learning, maybe we need to look at the process students go through in creating their coursework. This however has its own challenges in terms of how we would record our assessment of process and also how we would standardise or moderate this across schools.
Questions
I don’t have a solutions to the concerns or challenges I have outlined, however the purpose of my session was to stimulate some though and to pose some questions to consider. The key questions I posed during the first part of my session were:
Do we need an annual series of terminal exams?
Does there need to be [such] a limited number of routes through formal education?
Why are courses 2+ years long?
Should we assess the process rather than product [in relation to coursework]?
How can we assess the process in an internationally scalable form?
These are all pretty broad questions however as we start to explore the impact of AI in education I think we need to look broadly to the future. In terms of technology the future has a tendency to come upon us quickly due to quick technology advancement and change, while education tends to be slow to adapt and change. The sooner we therefore seek to answer the broad questions or at least think about them the better.
This is my 2nd post following on from my session speaking on AI in education at the Embracing AI event arranged by Elementary Technology and the ANME in Leeds last week. Continuing from my previous post I once again look at the risks and challenges of AI in education rather than the benefits, although I continue to be very positive about the potential for AI in schools and colleges, and the need for all schools to begin exploring and experimenting.
Homogeneity
The discussion of AI is a broad one however at the moment the available generative AI solutions are still rather narrow in their abilities. The availability of multi-modal generative AI solutions is a step forward but the solutions are still rather narrow, being largely focussed on a statistical analysis of the training data to arrive at the most probable response, with a little randomness thrown in for good measure. As such, although the responses to a repeated prompt may be different, taken holistically they tend towards an average response and here in lies a challenge. If the responses from generative AI tend towards an average response and we continue to make more and more use of generative AI, wont this result in content, as produced by humans using AI, regressing to the mean? And what might this mean for human diversity and creativity? To cite an example, I remember seeing on social media an email chain where an individual replied asking the sender to not use AI in future, to which the sender replied, I didn’t use AI, I’m neuro-diverse. What might increasing AI use mean for those who diverge from the average, and what does it even mean to be “average”?
Originality
The issue of originality is a big one for education. The JCQ guidelines in relation to A-Level refer to the need for “All coursework submitted for assessment must be the candidate’s own work” but what does this mean in a world of generative AI? If a student has difficulty working out how to get started and therefore makes use of a generative AI solution to get them started, is the resultant work still their own work? What about a student who develops a piece of work but then, conscious of their SEN needs and difficulties with language processing, asks a generative AI solution to read over the content and correct any errors, or maybe even improve the readability of the piece of work; Is this still the students own work? Education in general will need to seek to address this challenge. The fact is that we have used coursework assessment evidence as a proxy for evidence of learning for some time, however we may now need to rethink this given the availability of the many generative AI solutions which are now so easily accessible. And before I move on I need to briefly mention AI and plagiarism detection tools; They simply don’t work with any reliability, so in my view, shouldn’t be used. I don’t think there is much more that needs said about such tools, other than that.
Over-reliance
We humans love convenience however as in most, if not all things, there is a balance to be had and for every advantage there is a risk or challenge. As we come to use AI more and more often due to the benefits we may become over-reliant on it and therefore fail to consider the drawbacks. Consider the conventional library based research; When I was studying, pre-Google, you had to visit a library for resources and in doing so you quite often found new sources which you hadn’t considered, through accidentally picking out a book or through using the reference list in one book, leading to another book, and onwards. The world of Google removed some of this as we could now conveniently get the right resources from our prompts. Google would return lists of sources but how many of us went beyond the first page of responses? Now step in generative AI which will not only provide references but can actually provide the answer to an assignment question. But the drawback is Google (remember Google search uses AI) and now Generative AI may result in a reduction in broader reading and an increasingly reliance on the google search or generative AI response. Possibly over time we might become less able, through over-use, to even identify when AI provides incorrect or incomplete information. There is a key need to find an appropriate balance in our use of AI, balancing its convenience against our reliance.
Transparency and ethics
Another issue which will likely grow in relation to AI is that of transparency and of ethics. In terms of transparency, do people need to know where an AI is in use and to what extent it is used. Consider the earlier discussion of student coursework and it is clear that students should be stating where generative AI is used, but what about a voice based AI solution answering a helpline or school reception desk; Does the caller need to know they are dealing with an AI rather than a human? What about the AI in a learning management platform; How can we explain the decisions made by the AI in relation to the learning path it provides a student? And if we are unable to explain how the platform directs the students and therefore are unable to evidence whether it may be positively or negatively impacting the student, is it therefore ethical to use the platform? The ethical question itself may become a significant one, focusing not on how we can use AI but on should we be using it for a given purpose. The ethics of AI are likely to be a difficult issue to unpick given the general black-box nature of such solutions although some solutions providers are looking at ways to surface the inner workings of their AI solutions to provide more transparency and help answer the ethical question. I however suspect that most vendors will be focussed on the how of using AI as this drives their financial bottom line. The question of whether they should provide certain solutions or configure AI in certain ways will likely be confined to the future and the post mortem resulting from where thing go wrong.
Conclusion
As I said at the outset I am very positive about the potential for AI in education, and beyond, but I also believe we need to be aware and consider the possible risks so we can innovate and explore, but safely and responsibly.
I once again had the opportunity to speak in relation to AI in education earlier in the week, this time at the Elementary Technology and ANME event in Leeds. Now this time my presentation was very much focussed on the risks and challenges of AI in education rather than the benefits, leaving the benefits and some practical uses of AI to other presenters and to the workshop style sessions conducted in the afternoon. This post marks the first post of two looking at the risks and challenges I discussed during my session.
Bias
The potential for Bias in AI models and in particular in the current raft of Generative AI solutions was the first of the challenges I discussed. In order to illustrate the issue I made use of Midjourney asking it separately for a picture of a nurse in a hospital setting and then for a picture of a doctor, in both cases not stating the gender and allowing the AI to infer this. Unsurprisingly the AI produced 4 images of a female nurse and 4 images of a male doctor easily demonstrating an obvious gender bias. Now for me the bias here is obvious and therefore easily identified and corrected through an appropriate prompt asking for a mix of genders, but such bias are not always so identifiable. What about the potential for bias in learning materials presented to a student via an AI enabled learning platform, or the choice of text returned to a student by a generative AI solution? And if we cant identify the bias how are we to address it? I will however note at this point we also have to consider human bias, as it is unfair to expect an AI solution to be without bias when we developed the solution, provided the training data, etc, and we are not without bias ourselves.
Data Privacy
Lots of individuals, including myself, are already providing data to AI solutions but do we truly know how this data will be used, who it might be shared with, what additional data might be inferred from it, etc, and we need to know this now as it is currently, but also the future intentions of those we provide data to. The DfE makes clear that school personal data shouldn’t be provided to generative AI solutions however what if attempts are made to pseudonymize the data; What level of pseudonymization is appropriate? And then there is the issue of inferred data; I recently heard the suggestion that, if we fed all of our AI prompts back into an AI solution and asked it to provide a profile for the user, it would do a reasonable job of the task possibly identifying age, work sector and more. AI and generative AI offer a massive convenience, efficiency and speed gain however the trade off is giving more data away; Is this a fair trade off and one which we are consciously accepting?
Hallucinations
The issue of AI presenting made up information was another one which I found easy to recreate. I note this is often referred to a “hallucination” however I am not keen on the term as it anthropomorphises the current generative AI solutions we have when I still believe the solutions we currently have are still narrow in terms of their focus and therefore more akin to Machine Learning, a subset of the broader AI technologies. To demonstrate this issue I used a solution we have been working on which helps teachers generate parental reports, putting a list of teacher provided strengths and areas for improvement into readable sentences which teachers can then review and update. We simply failed to provide the AI with any strengths or areas for improvement. The AI however still went on to produce a report however in the absence of any teacher provided strengths or areas for improvement, it simply made them up. For me this highlights the fact that AI solutions cannot be considered as a replacement for humans, but instead are a tool or assistant to humans.
Cyber
The issue of cyber security or Information security and AI is quite a significant one from a variety of different perspectives. First there is the potential use of AI in attacks against organisations including schools. The existence of criminally focussed generative AI tools has already been reported in WormGPT and FraudGPT. Generative AI makes it easy to quickly create believable emails or usable code, independent of whether the purpose of the code is benign or if it for a phishing email or malware. Additionally there is the issue of AI as a new attack surface which cyber criminals might seek to leverage. This might be through the use of prompt injection to manipulate the outputs from AI solutions, possibly providing fake links or validating organisations or posts which are malicious or fictious. Attacks could also involve poisoning of the AI model itself, such that the models behaviour and responses are modified to suit the malicious ends of an attacker. And these are only a couple of implications in relation to AI and cyber security.
Conclusion
I think it is important to acknowledge that my outlook on AI in general and in education is a largely positive one, however I think it is important that we are realistic and accept the existence of a balance; a balance between the benefits and the risks and challenges, where to make use of the benefits we need to be at least aware and consider the possible balancing drawbacks and risks. This post therefore is about making sure we are aware of the risks, with my next post digging into a few further risks and challenges.
As Darren White put it in his presentation at the Embracing AI event, “Be bold but be responsible”
Last week I presented at a conference, speaking about AI in education, so what better way to create the presentation than to actually use AI tools. So, I thought I would share some experiences of the process.
The main tool I made use of in preparing my presentation was Canva which I became aware of after seeing Darren White do a short demo of it at a meeting a couple of months ago. Canva allowed me to get the ball rolling quickly and easily, using their Magic Create functionality to create the bare bones of my presentation including some nice graphics with the only requirement from my end being a single sentence as a prompt.
Now, the presentation needs to be something I was happy delivering, something that includes a bit of my identity, experience and outlook. The Canva AI generated presentation, although it included a simple structure with some key points, it just wasn’t me. But it did give me a good starting point including graphics after maybe 1 or 2 minutes of effort as opposed to the ½ hour it would likely have taken me to get to that point.
At this stage I set about moving slides around and adding new slides to build a structure for the session which felt a bit more like me and something I would present. I note, I could have possibly refined my prompt and worked at it that way, however for me it was easier to work directly with the slides as I sought to align them with the thinking in my head, where sometimes it wasn’t the slides on the screen which were being reordered but the order in my head. As I continued to work on this the presentation started to take shape. Finding graphics and images for the slide was easy using Canva’s search tools, and from there it was easy to drop images straight into my presentation, and where the images werent quite right I could easily change them using the AI image editing tools in Canva. I could easily remove elements of an image or change elements at will in order to get me the image I needed, which best suited the slide I was working on.
Additionally I made a little use of MidJourney and DALL-E2 to generate additional impacts, plus used ChatGPT for the development of additional text content and some of my script. As with most technology usage, it was switching between different generative AI tools for different purposes and I suspect, I could have used even more apps if I felt appropriate; I note I suspect the core of Canva, ChatGPT, MidJourney, DALL-E and maybe Bard should be good enough for most possible purposes.
Did the AI tools do the job for me?
No, I was looking to create a presentation, where I would be presenting my thoughts and ideas. Generative AI doesn’t (yet) have access to my thinking, where this thinking was constantly changing and evolving, with me refining my message as I built my presentation. What generative AI does provide however is tools to make things easier, quicker and more efficient, so I could create the bare bones of a presentation in a couple of minutes rather than 30 minutes. I could find images and quickly insert in moments rather than spending time searching via google or image tools, plus I could easily change images to suit my needs including changing their composition, all taking minutes rather than the hours this might have taken me in the past manipulating images in Photoshop.
Generative AI is a powerful tool to help me do the basics quickly allowing me to spend more time making the presentation I was creating a reflection of me, of a human being with experience, skills and a personal, albeit often changing, outlook on the world, on education and on technology.
Conclusion
Now I hope the presentation was well received but only the feedback will tell me that, although it did seem to go reasonably well. I suspect through the use of generative AI tools I spent less time on the actual slide designs and more time on the actual content of the session and on what I was going to say. Hopefully this made for a more engaging session. I think the key takeaway as that AI, as it is now, doesn’t do things for you, it isnt close to replacing us humans, but it can make us more effective and efficient. It makes me think back to that old quote about teachers and tech; Technology wont replace teachers but teachers that use tech will replace those who do not. In the world of generative AI the word “technology” can be replaced by what I believe to be one of the most disruptive technologies we have seen an decades; AI. The question therefore is how do we ensure the disruption is to the betterment of us as individuals, as groups and organisations, and society as a whole. How do we use and work with AI, while being aware and conscious of any risks or drawbacks?
Last week I presented at an event for schools, speaking in relation to AI in education. As such I thought I would share the main points from my session. Now the session itself was broken into three main sections, being some context, the short term implications and the implications beyond the short term.
Context
The first point I made was on the current post ChatGPT discussion in relation to AI, and how AI itself isnt new. In fact solutions we use in everyday life, such as Siri and Alexa, such as Google Maps and search and facial recognition all make use of AI. Although generative AI began to be so easily accessible in ChatGPT in November of 2022, AI had been around for quite a while prior to this and had already formed a big part of our lives. I also acknowledged that independent of whether schools do anything in relation to generative AI, including ChatGPT, our students will largely already be using these solutions; An examination of internet traffic in my own school saw an increase in student daily use between Jan and March 2023, at which point we stopped tracking the data as generative AI started to appear in many different solutions. And this is a key point, that if schools do nothing, and leave the use of AI solutions to chance, both in the hands of their teachers and their students, AI solutions will be used whether this is appropriate, safe or not.
In looking more broadly at AI, I would suggest that it represents a continuum between extremes of narrow AI solutions, which are capable of a single activity, up to the holy grail which is AGI (Artificial general intelligence) where the AI solution is capable of the broad spectrum of human activities. Where we are currently is heavily down the left hand, and narrow AI side of things and I suspect we will be there for a while. Looking at the responses of 350 AI experts in relation to when there will be a 50% chance of an AGI existing, 50% said this would occur in the next 40yrs however to increase the confidence to 90% of experts, you need to look out to 100 years time. There is little consistency in the responses other that almost all of the experts predicted AGI would occur at some point in the future.
The short term?
Coming back to the present day and the challenges of generative AI, it is also important to acknowledge the challenges in education more generally. A 2022 Teacher Wellbeing Index showed 59% had considered leaving the education sector during the year due to mental health and wellbeing pressures while 68% said volume of workload was an issue making them consider leaving the profession. And it is here that maybe AI can start to help in addressing some of the workload issues, and through this hopefully reducing stress and pressures on mental health. Through the use of AI, administrative burdens such as policy and resource creation, marking, parental reports, meeting minutes and reading of minutes, and many other takes can be lightened. Now in all cases there still needs to be a human element to review, amend and improve AI generated content, but through humans working with AI tools we should be able to accomplish things quicker and easier.
And generative AI isnt limited to the boring and administrative tasks, but can also help with the creative tasks, which in my case, I am not particularly strong at. Being a poet, artist, musician, videographer and similar has never been my strong suit however with AI I can create things which previously may not have been possible. Having asked ChatGPT for a poem on the impact of AI on education, for example, I was impressed by the output.
So what are schools to do?
I think the first thing is to acknowledge that AI comes with risks and benefits and that you cannot have one without the other. As such the first thing a school needs to do is have a discussion and establish what their risk appetite is. Does the school want to make the most of all the benefits of AI, and therefore is willing to accept some degree of risk, or is the school risk averse and therefore not willing to risk making use of AI? Once risk appetite has been established it is now possible to set some ground rules an guidance for staff and students and this is where schools need to put an appropriate policy in place in relation to AI use. This policy should cover things such as the legal implications, such as GDPR and data protection related, the ethical considerations and also how risks and benefits need to be considered. Equally any policy in relation to AI needs to be aligned with the wider school values and vision, plus needs to be regularly reviewed and updated. Now that a policy is in place, the next step is to speak to students and staff about AI, about the risks and benefits and about the policy requirements. Once basic awareness is in place, you can then begin exploring and experimenting with AI solutions including the many generative AI solutions which are now so freely available.
Beyond the short term
Moving beyond the now and the short term where we can clearly establish some steps which schools should be taking, we move into the more unpredictable future, where the questions are more questions for education in general, rather than things schools can easily individually action.
One of the first challenges or questions relates to originality and we are already seeing this in the actors strikes and in a number of copyright actions being taken against Generative AI vendors. What does it mean to be original in the world of ready access to generative AI? The JCQ guidance for example states that “All coursework submitted must be the candidates own work” but does that mean a student cant used generative AI to help or as a starting point, or a dyslexic student cant get the help of generative AI? And to complicate matters even further, consider what being original might mean in the time of the romans; Basically you couldn’t write or say the same as someone else, but at that time there were less people in the world and little was written down for comparison. Now we live in a world with more people, writing more often and in more forms than ever before and that’s even before we consider how people might now use generative AI to create yet more content, much quicker than they did before. So what chance do you we have in being original or presenting our “own” work? This is a big challenge for education, especially given our current system uses coursework as an easy proxy for learning.
We also need to consider how the fundamental process of education, with students going to schools, colleges and universities may need to change. A perfect example is how students unable to study calculus at school can now meet the requirements of CalTech in relation to calculus through the use of 2 online platforms. Basically, students can prove their master online rather than in a school, then progress onwards to CalTech. It is likely therefore that new avenues of education progression, access to education and whole new programmes may appear as we move forward, but how will this impact the schools and colleges we have today; It may be in the future that they look significantly different to how they look today.
And the third of my future gazing thoughts, and the most significant in my eyes, is the access to online AI based tutoring for students. This potentially provides every student with 1:1 access to support for their learning rather than the division of a teachers time across the whole class. Additionally, this support is available 24/7/365. This will likely impact on the core subjects at basic learning levels initially so basic maths, English and science in the first instance before broadening out to other subjects. It may be this online personalised education which has the biggest impact, freeing up teachers to focus on some of the areas which have long needed time in the curriculum, but long gone without; mental health, resilience, digital citizenship among other areas. It will allow teachers to spend more time on the things which matter most about being human, having had time freed up by AI in relation to the things an AI can do reliably well.
Conclusion
AI is here now so all schools need to act as staff and students will use the available tools as they see fit if they do not receive any training or guidelines from the school. As such all schools, in my view, should have a policy on AI use within the school as a minimum. In terms of the potential of AI as is now available, I referenced my own use of Canva, ChatGPT, MidJourney and DallE-2 in the creation of my presentation and presentation content.
Looking out beyond the short term, things are not quite as certain with more questions than answers. One thing I think we can be reasonably sure of is that AI’s impact on education will only increase and it may lead to some fundamental questioning of our current educational system and approaches to education. And at some point in the future the singularity, where AI intelligence exceeds that of humans, will likely be reached and at that point I suspect the world, and education, may look very different to today.
I was browsing the internet looking at recent news and I spotted the below at the bottom of a particular article:
This got me thinking, is this the way of things, that we will start seeing notes at the bottom of articles, blog posts, etc, stating that “this was crafted with the help of generative AI tools”. It feels ok from a transparency point of view, in that the organisation in question is being transparent as to how the article was created but could this simply be to absolve them from any issues arising from bias or inaccuracies resulting from the use of an AI solution? Also, what about those less scrupulous organisations; will they bother to let us know about the use of a Generative AI (GenAI) solution or will they simply post articles quickly and easily without any due care and attention?
Taking this and considering the implications for education, what if students took the same approach and simply put in their referencing that their coursework, thesis, dissertation or other work was “written with the help of generative AI”. Would this be acceptable? I feel this is all falling into the trap of compliance; The author of an article or the student, ahead of submitting their work, simply puts the statement in place so they can tick a box and say they are compliant and transparent when in fact they have told the reader or marker very little. How much “help” did the GenAI solution provide? Did it provide the basic outline to start with or did it write the whole thing, aside from a couple of minor sentence changes? The extent of the “help” matters greatly! Or does it?
I suppose the key question here is why do we need to know if GenAI was involved in the creation of a piece of content? Is it due to the fact it may contain bias and inaccuracies? I suspect not as I would expect a journalist or editor to take responsibility and check any GenAI content before it is published. The same goes for a student, I would expect they have thoroughly checked the work before handing it in; it is their responsibility not that of GenAI. Is the reason we need to know due to an uncomfortable feeling in relation to AI created content? Consider reading two pieces of text providing a summary of a sporting event; If you were told one was written by a human and the other by a GenAI solution, would you have a preference and where does this preference, which I suspect would be towards reading the human written work, come from? Is the reason that we need to know that the work is the work of the student or author so we can direct or praise or complaints? But do we acknowledge the word processing software used, the web browser used for carrying our research, the laptop the content is typed on? Is AI a tool in the creation of the content or is it more than just a tool? If the piece of work produced with the help of GenAI, be this help little or significant, is a good piece of work does it matter? We used to focus on mental arithmetic, considering the use of a calculator to be cheating, yet now a calculator is just a tool we can use to help with maths; how is the use of GenAI any different?
I worry that the newspaper that placed this little rider at the bottom of their article is approaching the use of GenAi far too superficially without considering the wider impact. There are many unanswered questions in relation to GenAI with a small number of them presented above.
Or maybe I just need to accept that at least they have made an effort and a start as to how we become more transparent in the increasing use of GenAI in the creation of online content?
The other day saw me attend a meeting at the Elementary Technology offices in Leeds, meeting with a number of EdTech legends (and me!) to plan an artifical intelligence (AI) conference event due to occur in October. The planning event was a brilliant opportunity to discuss all things AI and education with some excellent and varied discussions occurring across two days.
In thinking about my personal use of AI it became clear to me that my own use is still short of what is possible, where there is such potential for me to make greater use of generative AI solutions in a way that will improve my productivity, my creativity and also hopefully my wellbeing through gains in efficiency.
As I sat on the train on the way home typing this I considered how I might make better use of AI. Now I could use it to help me write this post, however this post is very much a personal reflection, where AI cant really help although I may be able to use AI to help adjust and improve the post following initially drafting it. I could also use it to create some interesting images with me in different locations or situations, which although fun to do, is unlikely to enhance my work day significantly. So, what can AI help me with and how may I create situations where it is easier or more convenient for me to make use of AI?
In drafting emails, policies, reports or other documents I suspect generative AI can certainly help. Also in relation to the creation of presentations there is potential for the use of Generative AI, with Darren White demonstrating the impressive functionality in Canva in relation to creating both content and design within a presentation. I suspect I may use this in preparing for some of the talks I am due to give in the year ahead.
The key though to achieving the benefits is in making it easier for me to use AI solutions at the point I need them. My solution to this is to look to include ChatGPT and Bard along with some other AI tools within my “normal day” collection in MS Edge so that they are instantly opened when I begin my work day, ready to use as and when needed. I also need to spend a bit of time investigating AI powered plug-ins which can put the functionality right in the browser ready to access.
The potential for AI is significant and the two days of discussion were definitely useful. I now look forward to the actual conference event on the 3rd of October and to sharing thoughts and ideas with a variety of colleagues in UK schools/colleges and beyond.