OABMG Conference

I was lucky enough to be invited to speak at the Oxfordshire Academies Business Managers Group (OABMG) annual conference earlier in the week where I was speaking on AI in education and the possible impact and implications on school business managers.    It was a lovely event and I really enjoyed Sarah Furness the keynote speaker, however, sadly I had to leave following my session in order to catch a train, one of a number of trains needed to get me to and from the event.

Be brave

Sarah was both insightful and entertaining and to be honest, I could likely write a whole blog post just on the stories she shared however let me just summarise my key takeaways from her presentation.    Her key message, which resonated for me, was the need to be brave, which aligns with the values of my school, and also is so very important where we have technology advancing at such a pace but with regulation lagging so far behind.   We have no choice but to be brave especially given both students and staff are already experimenting with the use of AI.  We need to be brave in engaging, we need to be brave in experimenting and we need to be brave in accepting where things don’t go quite as they planned, but learning from these experiences.   The need for sharing, asking difficult questions and accepting challenges also aligned with my thinking, and again looking to AI in education, if we are to find our way with AI in schools I think this all rings very true indeed.  We need to be sharing our thoughts, and both challenging and accepting challenges from others, if we are to move forward.    Sarah’s talk was about leadership, using her context as a military leader and pilot;  maybe this will be key in the use of AI in schools, the need for effective, brave leaders who value and encourage diversity, sharing and challenge.

AI in education

Going into my presentation my key aim was to discuss AI in education and some possible uses for school business leaders.   I don’t have all of the answers, and to be honest, I don’t feel anyone has all the answers when it comes to AI and education, as AI is advancing at a rapid pace where education has changed little and is under both funding and also workload challenges.   That said, as I shared in my presentation, “The smartest person in the room, is the room”.   This David Weinberger quote is one of my favourites and is often used, as it highlights the need to discuss and share, in doing so we hopefully engage others to think about the issue, in this case, AI in schools, and collectively our thinking, our ideas and experience is enhanced.

Now you can view my presentation slides here if you are interested.   

At the end of my presentation, a couple of questions were raised which I would like to just pick up on, namely school engagement in AI in education, policy and also regulation.  

School Engagement in AI

I would like to draw attention to the article in the Express which highlighted that 54% of the students they surveyed were using AI in relation to their homework.  The key thing here is that students are using AI independently of whether schools have considered or talked about AI.  And it isn’t just students, you will also likely have staff, both teaching and support staff who are using AI.   The AI genie is out of the bottle and attempts to block it will inevitably be futile so, in my opinion, it is key that we engage with the use of AI, we talk with students and staff about AI, and that schools experiment and share.    But the fact AI is already here isn’t the only reason to use it in education.   We talk about the need to support individual students, differentiation, English as a second language and also SEND barriers to learning; all of these can be addressed to some extent through the use of AI tools.   Now I will note here that the use of AI tools may also increase some challenges, such as that of digital divides, but that was a key part of my presentation in talking about the risks and challenges first, as we need to use AI but only from a position of an awareness of risks and challenges.

Policies

Linked to the above, I think it is very important that schools put in place an AI policy if they haven’t already done so.   This allows the school to set out its guardrails in relation to the use of AI in the school.  Now there is a brilliant template for this, as created by Mark Anderson and Laura Knight, which can be found here.   Looking to the future I suspect the AI policy might be eventually absorbed into the IT acceptable use and/or academic integrity policies however for now, while AI use in schools is so new, I think having it as a standalone policy makes sense.

Regulation

There will need to be some form of regulation in relation to AI tools including their use in education however we have already seen that the technology is developing very fast while the regulation is lagging so far behind and is slow to adapt.   As such I think we should hope for and support some form of regulation to protect people, including our staff and students, and their data, but I don’t believe we can wait for this to happen.    AI is already here and students and staff are likely using it.  We can’t stop this, so I think we need to run with it, to try and shape the use and hopefully in doing so shape the regulation which follows.  This will mean making risk v. benefit decisions but seldom do we see anything which is beneficial without any risks.

Conclusion

The OABMG conference was enjoyable even though my visit was brief.   It was good to get to share some thoughts on AI in education and I hope those in attendance found the session useful.   My two key thoughts from the event are, the need to be brave, remembering we learn most from our mistakes, and the need in this ever-busy and complex world to share as collectively we are all better for it. I think these are two things I will try do more actively in future.

Thinking about thinking (with AI)

Artificial intelligence (AI) is definitely the big talking point in educational circles at the moment.  You just need to look at the various conference programs and you will almost always find at least one session touching on AI or generative AI.   Now a lot of the discussion is focused on the possible benefits or the risks associated with AI and less so with the practical applications and need to experiment.   It was in thinking about the practical side of things, looking at tools like ChatGPT, Diffit, Gemini and Bing Image Creator among others, that I got thinking how AI might link to meta cognition.

Learning about learning

The idea of learning about learning, about meta cognition, has been around for quite some time.    The thinking being that if we educate students about how they learn and get them thinking about their learning preferences (eek, I almost said learning styles there!) then they can make informed decisions about their learning, and hopefully be better learners.   It seems to make sense.  But how does this link to AI and generative AI?

Learning with a learning assistant

I think the key issue here is how we see AI in terms of the learning experience.   Is it simply a tool to spark ideas?   Is it a tool to review content?   Is it a tool to surface information?   I would suggest it is all of these things and more, and in the case of generative AI can operate as an assistant to teachers or to students.   It is definitely more than a bit of technology or simply a tool as I suspect in its use its shapes our thinking and our processes, much as the simple tools like the hammer shaped human thinking and processes in the past.    We also need to consider that process when working with generative AI (GenAI) is often iterative or taking the form of a dialogue between the user and the genAI solution.  The user fields an initial prompt, to which the genAI responses.   The user then reviews the response against what they were hoping for, and if they are anything like me they realize that they haven’t been specific enough so therefore now provide further directives to the AI, which in turn returns a new, hopefully better response, and so the dialogue continues until an output which is satisfactory to the user is reached.     Now some of this dialogue can possibly be sped up through the use of various prompt frameworks such as the PREPARE framework shared by Dan Fitzpatrick, however even then it is still likely to be a dialogue with Dan also providing a framework for the review and iterative part of this process, his EDIT framework.

Meta AI supported cognition?

If we are looking to prepare students to work with generative AI as their always available assistant I think we also need to start exploring with students how best to use them.   Part of this is about looking at their learning and how their learning processes might be different with AI.   I suppose it’s a bit like if all your learning was done with a partner, with another human being.  Looking at the nature of the interaction, being very much a dialogue, makes this comparison feel all the more apt.   You would need to consider their approach, their emotions, social interaction, etc.   Now an AI doesn’t have emotions or the social side of things, or at least not yet or as we currently know these to exist, but it does have its own approach, its own biases, its own strengths and its own weaknesses.  So if we are using or encouraging students to use AI in learning, I think we need to work with student to unpick the processes rather than simply focusing on the tools.  If I am looking for ideas and to be creative, how best to I use AI?   If I am looking to review and improve my work, how best am I to use AI?    If I want to use AI for research, how best do I do this?    Is this where Meta AI supported cognition comes in?

Conclusion

In relation to technology use in education I have always said it isn’t about the technology but about what you are seeking to achieve.   With AI it might be using Gen AI to produce better coursework or to give you a starting point or some new ideas.    But if we think beyond the short term goals, isn’t it about being able to better use AI to suit our needs as they arise and as such do we then need to spend time with students unpicking the how of their use of Gen AI, understanding the processes, what works and what doesn’t in order to get better in working with our newly found AI assistant?

Might teaching about Meta AI supported cognition become a thing?

AI: Desirable Imperfection?

Might there possibly be benefits in generative AI solutions that hallucinate, make things up and show bias?

We live in a world of convenience;   Once upon a time we had to do research in a library, going through card indexes and looking at the bibliography from one book to identify further reading, which would then necessitate hunting in the library for additional books, and then you would need to summarise everything you read into your piece of work.     Then Google came along and we could do the search far faster, getting instant lists of articles or websites based on a search.   We still needed to look at the content which our searches yielded, before identifying the best source information and then moulding this into our own final piece of work.   Things had become more convenient which was good, but with this came some drawbacks.   As users we tended to look at the first set of results returned, at the first page of search results rather than at subsequent pages meaning we lost some of the opportunities for accidental learning where, in a library, your search for one book might lead you to accidentally find other books which add to your learning.   Also our searches were now being partially manipulated by algorithms as the search wasn’t just a simple search like that of a card index, it was a search which an algorithm used to predict what we might want, what is popular, etc, before yielding it as a search return.    And these algorithms reduced the transparency of the searching process, potentially meaning our eventual work had been partially influenced by unknown algorithmic hands.   Next we started the push for “voice-first” where rather than a list of search items our new voice assistant would boil down the answer to our requests to a single answer spoken with some artificial authority.

So roll in Generative AI and ChatGPT and Bard;  Now we have a tool which will search for content but will also then attempt to synthesise this into a new piece of work.   It doesn’t just find the sources it summarises, expands and explains.    Further convenience combined with further challenges or risks.   But what if there are benefits from some of these challenges such as the hallucinations and the bias?  Is that possible?

Lets step back to the library;   My search was based on my decisions as to which books to select, with my reading and book selections then influencing the further reading I did.    Now bias and error may have been in the books but I could focus on thinking about such bias and error, with error generally a low risk due to the editorial review processes associated with the publishing of a book.     In the modern world however my information might come to me via social media platforms where an algorithm is at play in what I see, choosing what to surface and what not to.   Additionally, content might be written by individuals or groups without the editorial process meaning a greater risk of error or bias.   And with Generative AI now widely available we might find content awash with subtle bias or simply containing errors and misunderstanding presented confidently as face.     As an individual trying to do some research I have more to think about than just about the content.  I need to think more about who wrote the content, how it came to me, what the motivation of the writer was, whether generative AI may have been used, etc.  In effect I need to be more critical than I might have been back in the library.

And maybe this is where the obvious hallucinations and bias is useful, as it highlights our need for criticality when dealing with generative AI content, but also with wider content available in this digital world such as the content which we are constantly bombarded with via social media.   In a world of ever increasing content, increasing division between groups and nations and increasing individuals contributing either for positive or sometimes malicious reasons, being critical of content may now be the most important skill.    

If it werent for these imperfections would we see the need to be critical, in a world where I suspect a critical view is all the more important? And can we humans claim to be without some imperfections? Could it therefore be that actually the issues or challenges of generative Ai, its hallucinations and bias, may be a desirable imperfection?  

Exams and AI: A look at the current system

I recently presented at a conference in relation to AI and assessment.   I think this was reasonably good timing given JCQ had just released further guidance in relation to student coursework and AI plus AQA had announced they were going to use online testing as part of their exam suite in the Italian and Polish GCSEs starting from 2016.    I think this is a positive step forward in both cases however I think it is important that we see this journey as more than simply replacing pencil and paper exams with a hall full of students completing the same exams but as an online/digital exam.   There is significant potential here to ask ourselves what are we seeking to assess, why are we seeking to assess it and how are we best to assess?

The SAMR model

The SAMR model is useful when looking at technology change programmes.   The first element of SAMR is that of simple substitution, similar to the example I gave above in the introduction.   The concern for me is that this might be the goal being aimed at where technology and AI present such significant potential beyond mere substitution, and where the world has moved at a fast technologically drive pace, yet our education system has changed little, and our key assessment methodologies, of terminal coursework and exams have barely changed at all.

In looking to progress beyond substitution it might be useful to unpick some of the limitations of the current system.  For this purpose I am going to focus purely on terminal exams given they are such a significant part of the current formal education system in the UK.   So what are the limitations of the currently accepted system?

Logistics

One of the key drawbacks in the current system, as I see it, is the massive logistical challenge it presents.   Students have to be filed into exams halls across the country and the world all at the same time, to complete exam papers which have been securely delivered to exam centres.    Its quite an undertaking and even more so when you consider trying to keep the papers and questions secure.   In a world of technology where content can quickly and easily be shared it doesn’t take much before questions are out in the open ahead of the exam, advantaging those who have seen the information when compared with those who have missed it.    Then you have the issue of gathering all the completed papers up, sharing them with assessors to mark, quality assurance of marking and then eventual release of results to students some months later.    This is a world where technology supports the sharing of information, written, audio, video and more instantly.  Why cant the exams process be quicker and more streamlined, making use of technology to achieve this?

Diversity

Another key drawback has to be that of diversity.  We, more than ever, identify the individual differences which exist in us all.    Discussion of neurodiversity is common at the moment but despite this we still file all students into a hall to complete the same exam paper.     Now there are exam concessions which can be provided to students but this barely scratches the surface in my opinion.    Where is the valuing of diversity in all of this?

Methodology

We also need to acknowledge that the current exams system very much values those students who are able to memorise facts, processes, etc.   Memorisation is so key to exams success however out in the real world we have access to ChatGPT and Google to find the information we need when we need it, with the key then being how we then interpret, validate and apply this information to the challenges or work in front of us.    Shouldn’t the assessment methodology align with the requirements of the world we live in?   Now I will acknowledge the important of key foundational knowledge so I not suggesting we stop teaching any basic knowledge, but knowledge and memorisation should be less of a focus than it is now.

Conclusion

I believe technology could address a lot of the drawbacks listed above.  Now I note the use of technology will present its own challenges but how often do we find the “perfect” solution?    Wouldn’t a solution which is easier for schools to administer, is quicker and more efficient, is more student centred and more in line with the world we now live in be a good thing?

AI and assessment (Part 2)

Following on from my last post looking at AI and assessment (see here) where I focussed very much on the high stakes world of terminal exams and coursework, I would now like to look towards formative assessment and the learning process.   As with my last post, this post aims at sharing some of the points I made at a recent conference where I spoke on AI and Assessment, presenting some questions which I believe we need to increasingly consider in a world of AI and generative AI solutions.

AI Supported Learning

Learning platforms and computer based learning have existed for some time.   And they havent and dont look like the image here. I remember having to do some Maths learning during my teaching degree using a computer based learning platform and that was in the mid to late 90’s.    At the time I wasn’t that fond of these learning platforms and this feeling stayed with me.  My issue was that the platforms although offering differing routes through the broad content, were largely linear in their offering in relation to each topic or even the smaller units of learning.    This couldn’t compare to a teacher delivering content where they could see students struggling and then instantly seek to adjust the learning content accordingly.

We have came a long way from there, with AI and generative AI now able to provide us with far superior learning platforms with my sense being that these platforms tend to break into two types, one where the AI is analysing usage and interaction data to direction learning content creators and one, the more recent and emerging type, where generative AI provides an AI based support, teaching or coaching agent.

In the model where the platform analyses usage and interaction data the key benefit is that this data is gathered from all users looking for those common patterns or anomalies, looking at issues such as general, language, nationality, and a variety of other factors to find which learning content works and which does not.    This allows creation of effective learning content based on a huge amount of data across many schools and many learners, far beyond the data that a teacher may have at their hands.   As such the content in these platforms progressively improves over time and based on data rather than intuition or other less tangible factors, which may be wrong, which a teacher may rely on.

Where generative AI is used students get a chat bot which prompts and support students as they work through the learning content, with the AI trying to mirror the supportive and coaching role of a teacher, but individualised for each student and available any time, anywhere assuming access to a device and internet connection.    I feel it is here that there is the greatest potential especially in relation to more fundamental skills and knowledge development, freeing up teachers to focus on more advanced concepts and also on wider issues such as resiliency, leadership, interpersonal skills, wellbeing, etc.    I note recently reading a post about a school which uses AI where they don’t have “teachers” instead having “guides”.    I suspect this sounds more radical that it is in practice especially the reported comment by the co-founder that “we don’t have teachers”.   My view is that AI learning platforms wont replace teachers, however through the use of AI learning platforms working with teachers we may be able to achieve more and quicker with our students.   I suspect the school is more akin to this partnership that the report would suggest however have no first hand experience of the school so cannot be sure.

Challenges

AI as a tool to assist and maybe guide and deliver learning delivers a number of benefits however I think it is important to acknowledge some of the challenges and risks.  We may not have a solution at this point however at the very least we need to be aware.

Bias is a clear challenge and something which has been widely reported in relation to AI.    In my session I asked a generative AI solution for a picture of a nurse and a picture of a doctor which the solution returning images where the doctor images were all of males and the nurse images all of females, and where all the images where of white people.    This experiment clearly shows bias however the challenge in AI powered learning platforms is that the bias may not be so easily visible.   What if the platform decides based on statistics that students from particular area, nation, gender, preference, age or other characteristic do generally worse than average.   The platform may then present them content it believes to be appropriate to this ability level, in doing so impacting their ability to achieve, the challenge they receive, and possibly causing a self-fulfilling prophecy.   And when a parent asks regarding a students learning path, is it ethical to use learning platforms if the use of a learning platform means we may not be able to explain the decisions taken in the child’s learning experience and journey, where these decisions were taken by AI?

Data is another challenge we need to consider here in the possible huge and growing wealth of data learning platforms might gather in relation to students.   This isnt just the data a school might provide such as name, email and age, but the data produced through each and every interaction with the platform, plus the data gathered as diagnostic data such as the device being used, IP address, etc.    And then there is the data a platform might be able to infer from the data gathered;   Could an IP address, which suggests a rough geographic location, a device type and internet speed allow you to infer the wealth of a user or users family?     I suspect it could.    Now consider the massive amount of data gathered over time, across different curriculum subjects and each use of the platform;   The potential for inference grows with each additional data point.   How do we manage the risks here in relation to data protection, cyber risk and also accidental or purposeful mis-use of the data?  If we are to use AI assisted learning solutions I think we need to ensure we have considered how we might do this safely.

Conclusion

Educations has had its challenges for some time including teacher recruitment, teacher workload and wellbeing, and equity of access to education.   Maybe AI can help with some of this and maybe AI risks making things worse in some areas;  It is difficult to tell, although the one thing we can tell is that AI is here and here to stay so I think we need to make the most of it and shape its use to be as positive and powerful as it potentially can be.   A difficulty here however is the slow pace with which education changes (little has changed in almost 100yrs!).   Now the pandemic did cause some change in my view, but some of that has rubber banded back to pre-covid setups.   The question now is, is AI the next catalyst for education change, will it impact education as much or more than the pandemic and will its impact be persistent beyond the initial “shiny new thing” period.   Only time will tell although my sense is there is potential for AI to answer in the affirmative to all three questions.


References:

A Texas private school is using AI technology to teach core subjects; A. Garcia (Oct, 2023), CHRON, Texas private school replaces teachers with AI technology (chron.com)

AI and assessment (Part 1)

I recently spoke at an AI event for secondary schools in which one of the topics I spoke on related to AI and its impact on Assessment.   As such I thought I would share some of my thoughts, with this being the first of two blogs on the first of the sessions I delivered..

Exams

Exams, in the form of terminal GCSE and A-Level exams still form a fairly large part of our focus in schools.  We might talk about curriculum content and learning but at the end of the day, for students in Years 10,11, lower 6 and upper 6 the key thing is preparing them for their terminal exams as the results from these exams will determine the options available to students in the next stage of their educational journey.   The issue though is that these terminal exams have changed little.   I provided a photo of an exam being taken by students in 1940 and a similar exam in recent terms and there is little difference, other than one photo being black and white and the other being colour, between the photos.   The intervening period has seen the invention of DNA sequencing, the mobile phone, the internet and social media, and more recently the public access to generative AI but in terms of education and terminal exams little has changed.

One of the big challenges in terms of exams is scalability.  Any new solution needs to be scalable to exams taken in schools across the world.  Paper and pencil exams, sat by students across the world at the same time accommodates for this.  If we found life on Mars and wanted them to do a GCSE, we would simply need to translate the papers into Martian, stick the exams along with paper and pencils on a rocket and fire them to Mars.   But just as it is the way we have done things and the most easily scalable solution doesn’t make paper and pencil exams the best solutions.   But what is the alternative?

I think we need to acknowledge that a technology solution has to be introduced at some point and the key point is the scalability based on schools with differing resources.   As such we need a solution which can be delivered in schools with only 1 or 2 IT labs, rather than enough PCs to accommodate 200 students being examined at once as is the case with paper based exams.  So we need a solution which allows for students to sit the exams in groups, but without compromising the academic integrity of the exams where student share the questions they were presented with.    The solution, in my view is that of adaptive testing as used for ALIS and MIDYIS testing by the CEM.   Here students complete the test online but are presented different questions which adapt to students performance as they progress.   This means the testing experience is adapted to the student, rather than being a one size fits all as with paper exams.    This helps with keeping students motivated and within what CEM describe as the “learning zone”.   It also means as students receive different questions they can sit the exam at different times which solves the logistical issue of access to school devices.   Taken a step further it might allow for students to complete their exams when they are ready rather than on a date and time set for all students irrespective of their readiness.

AI also raises the question of our current limited pathways though education, with students doing GCSES and then A-Levels, BTecs or T-Levels and then onto university.    I believe there are 60 GCSE options available however most schools will offer only a fraction of this.    So what’s the alternative?    Well CalTech may provide a possible solution;  They require students to achieve calculus as an entry requirement yet lots of US schools don’t offer calculus possibly due to lack of staff or other reasons.   CalTechs solution to this has been to allow students to evidence their mastery of calculus through completion of an online Khan Academy programme.   What if we were more accepting of the online platforms as evidence of learning and subject mastery?   There is also the question of the size of the courses;   GCSEs and A-Levels and BTec quals are all 2 years long but why couldn’t we recognise smaller qualifications and thereby support more flexibility and personalisation in learning programmes?   In working life we might complete a short online course to develop a skill or piece of knowledge on a “just-in-time” basis so why couldn’t this work for schools and formal education?  The Open University already does this through micro credentials so there is evidence as to how it might work.   I suspect the main challenges here are logistical in terms of managing a larger number of courses from an exam board level, plus agreeing the equality between courses;   Is introductory calculus the same as digital number systems for example?

Coursework

Coursework is also a staple part of the current education system and summative assessment.    Ever since Generative AI made its bit entrance in terms of public accessibility we have worried about the cheating of students in relation to homework and coursework.    I suspect the challenge runs deeper as a key part of coursework is its originality or the fact that it is the students own work but what does that look like in a world of generative AI.    If a student has special educational needs and struggles to get started so uses ChatGPT to help start, but then adjusts and modifies the work over a period of time based on their own learning and views, is this the students own work?   And what about the student who does the work independently but then before submitting asks ChatGPT for feedback and advice, before adjusting the work and submitting;   Again, is this the students own work?  

There is a significant challenge in relation to originality of work and independent of AI this challenge has been growing.   As the speed of new content generation, in the form of blogs, YouTube videos, TikTok, etc, has increased year on year, plus as world populations continue to increase it become all the more difficult to be individual.  Consider being original in a room of 2 people compared with a room of 1000 people;    The more people and the more content, the more difficult it is to create something original.   So what does it really mean for a piece of work to be truly original or a students own work?

The challenge of originally and students own work relates to our choice of coursework as a proxy for learning;   It isnt necessarily the best method of measuring learning but it is convenient and scalable allowing for easy standardisation and moderation to ensure equality across schools all over the world.   It is easy to look at ten pieces of work and ensure they have been marked fairly and in a similar fashion;  Having been a moderator myself this was part of my job visited schools and carrying out moderation of coursework in relation to IT qualifications.   If however generative AI means that submitted content is no longer suitable to show student learning, maybe we need to look at the process students go through in creating their coursework.    This however has its own challenges in terms of how we would record our assessment of process and also how we would standardise or moderate this across schools.

Questions

I don’t have a solutions to the concerns or challenges I have outlined, however the purpose of my session was to stimulate some though and to pose some questions to consider.    The key questions I posed during the first part of my session were:

  1. Do we need an annual series of terminal exams?
  2. Does there need to be [such] a limited number of routes through formal education?
  3. Why are courses 2+ years long?
  4. Should we assess the process rather than product [in relation to coursework]?
  5. How can we assess the process in an internationally scalable form?

These are all pretty broad questions however as we start to explore the impact of AI in education I think we need to look broadly to the future.    In terms of technology the future has a tendency to come upon us quickly due to quick technology advancement and change, while education tends to be slow to adapt and change.    The sooner we therefore seek to answer the broad questions or at least think about them the better.

AI risks and challenges continued

This is my 2nd post following on from my session speaking on AI in education at the Embracing AI event arranged by Elementary Technology and the ANME in Leeds last week.   Continuing from my previous post I once again look at the risks and challenges of AI in education rather than the benefits, although I continue to be very positive about the potential for AI in schools and colleges, and the need for all schools to begin exploring and experimenting.

Homogeneity

The discussion of AI is a broad one however at the moment the available generative AI solutions are still rather narrow in their abilities.   The availability of multi-modal generative AI solutions is a step forward but the solutions are still rather narrow, being largely focussed on a statistical analysis of the training data to arrive at the most probable response, with a little randomness thrown in for good measure.     As such, although the responses to a repeated prompt may be different, taken holistically they tend towards an average response and here in lies a challenge.   If the responses from generative AI tend towards an average response and we continue to make more and more use of generative AI, wont this result in content, as produced by humans using AI, regressing to the mean?   And what might this mean for human diversity and creativity?    To cite an example, I remember seeing on social media an email chain where an individual replied asking the sender to not use AI in future, to which the sender replied, I didn’t use AI, I’m neuro-diverse. What might increasing AI use mean for those who diverge from the average, and what does it even mean to be “average”?

Originality

The issue of originality is a big one for education.   The JCQ guidelines in relation to A-Level refer to the need for “All coursework submitted for assessment must be the candidate’s own work” but what does this mean in a world of generative AI?   If a student has difficulty working out how to get started and therefore makes use of a generative AI solution to get them started, is the resultant work still their own work?   What about a student who develops a piece of work but then, conscious of their SEN needs and difficulties with language processing, asks a generative AI solution to read over the content and correct any errors, or maybe even improve the readability of the piece of work;  Is this still the students own work?   Education in general will need to seek to address this challenge.   The fact is that we have used coursework assessment evidence as a proxy for evidence of learning for some time, however we may now need to rethink this given the availability of the many generative AI solutions which are now so easily accessible.    And before I move on I need to briefly mention AI and plagiarism detection tools;   They simply don’t work with any reliability, so in my view, shouldn’t be used.   I don’t think there is much more that needs said about such tools, other than that.

Over-reliance

We humans love convenience however as in most, if not all things, there is a balance to be had and for every advantage there is a risk or challenge.   As we come to use AI more and more often due to the benefits we may become over-reliant on it and therefore fail to consider the drawbacks.   Consider the conventional library based research;  When I was studying, pre-Google, you had to visit a library for resources and in doing so you quite often found new sources which you hadn’t considered, through accidentally picking out a book or through using the reference list in one book, leading to another book, and onwards.   The world of Google removed some of this as we could now conveniently get the right resources from our prompts.   Google would return lists of sources but how many of us went beyond the first page of responses?    Now step in generative AI which will not only provide references but can actually provide the answer to an assignment question.    But the drawback is Google (remember Google search uses AI) and now Generative AI may result in a reduction in broader reading and an increasingly reliance on the google search or generative AI response.   Possibly over time we might become less able, through over-use, to even identify when AI provides incorrect or incomplete information.   There is a key need to find an appropriate balance in our use of AI, balancing its convenience against our reliance.

Transparency and ethics

Another issue which will likely grow in relation to AI is that of transparency and of ethics.    In terms of transparency, do people need to know where an AI is in use and to what extent it is used.   Consider the earlier discussion of student coursework and it is clear that students should be stating where generative AI is used, but what about a voice based AI solution answering a helpline or school reception desk;   Does the caller need to know they are dealing with an AI rather than a human?   What about the AI in a learning management platform;  How can we explain the decisions made by the AI in relation to the learning path it provides a student?  And if we are unable to explain how the platform directs the students and therefore are unable to evidence whether it may be positively or negatively impacting the student, is it therefore ethical to use the platform?   The ethical question itself may become a significant one, focusing not on how we can use AI but on should we be using it for a given purpose.     The ethics of AI are likely to be a difficult issue to unpick given the general black-box nature of such solutions although some solutions providers are looking at ways to surface the inner workings of their AI solutions to provide more transparency and help answer the ethical question.   I however suspect that most vendors will be focussed on the how of using AI as this drives their financial bottom line.   The question of whether they should provide certain solutions or configure AI in certain ways will likely be confined to the future and the post mortem resulting from where thing go wrong.

Conclusion

As I said at the outset I am very positive about the potential for AI in education, and beyond, but I also believe we need to be aware and consider the possible risks so we can innovate and explore, but safely and responsibly.

   

AI risks and challenges

I once again had the opportunity to speak in relation to AI in education earlier in the week, this time at the Elementary Technology and ANME event in Leeds.   Now this time my presentation was very much focussed on the risks and challenges of AI in education rather than the benefits, leaving the benefits and some practical uses of AI to other presenters and to the workshop style sessions conducted in the afternoon.    This post marks the first post of two looking at the risks and challenges I discussed during my session.

Bias

The potential for Bias in AI models and in particular in the current raft of Generative AI solutions was the first of the challenges I discussed.   In order to illustrate the issue I made use of Midjourney asking it separately for a picture of a nurse in a hospital setting and then for a picture of a doctor, in both cases not stating the gender and allowing the AI to infer this.   Unsurprisingly the AI produced 4 images of a female nurse and 4 images of a male doctor easily demonstrating an obvious gender bias.   Now for me the bias here is obvious and therefore easily identified and corrected through an appropriate prompt asking for a mix of genders, but such bias are not always so identifiable.   What about the potential for bias in learning materials presented to a student via an AI enabled learning platform, or the choice of text returned to a student by a generative AI solution?   And if we cant identify the bias how are we to address it?    I will however note at this point we also have to consider human bias, as it is unfair to expect an AI solution to be without bias when we developed the solution, provided the training data, etc, and we are not without bias ourselves.

Data Privacy

Lots of individuals, including myself, are already providing data to AI solutions but do we truly know how this data will be used, who it might be shared with, what additional data might be inferred from it, etc, and we need to know this now as it is currently, but also the future intentions of those we provide data to.   The DfE makes clear that school personal data shouldn’t be provided to generative AI solutions however what if attempts are made to pseudonymize the data;  What level of pseudonymization is appropriate?   And then there is the issue of inferred data;  I recently heard the suggestion that, if we fed all of our AI prompts back into an AI solution and asked it to provide a profile for the user, it would do a reasonable job of the task possibly identifying age, work sector and more.    AI and generative AI offer a massive convenience, efficiency and speed gain however the trade off is giving more data away;  Is this a fair trade off and one which we are consciously accepting?

Hallucinations

The issue of AI presenting made up information was another one which I found easy to recreate.   I note this is often referred to a “hallucination” however I am not keen on the term as it anthropomorphises the current generative AI solutions we have when I still believe the solutions we currently have are still narrow in terms of their focus and therefore more akin to Machine Learning, a subset of the broader AI technologies.   To demonstrate this issue I used a solution we have been working on which helps teachers generate parental reports, putting a list of teacher provided strengths and areas for improvement into readable sentences which teachers can then review and update.    We simply failed to provide the AI with any strengths or areas for improvement.   The AI however still went on to produce a report however in the absence of any teacher provided strengths or areas for improvement, it simply made them up.    For me this highlights the fact that AI solutions cannot be considered as a replacement for humans, but instead are a tool or assistant to humans.

Cyber

The issue of cyber security or Information security and AI is quite a significant one from a variety of different perspectives.    First there is the potential use of AI in attacks against organisations including schools.   The existence of criminally focussed generative AI tools has already been reported in WormGPT and FraudGPT.    Generative AI makes it easy to quickly create believable emails or usable code, independent of whether the purpose of the code is benign or if it for a phishing email or malware.    Additionally there is the issue of AI as a new attack surface which cyber criminals might seek to leverage.   This might be through the use of prompt injection to manipulate the outputs from AI solutions, possibly providing fake links or validating organisations or posts which are malicious or fictious.   Attacks could also involve poisoning of the AI model itself, such that the models behaviour and responses are modified to suit the malicious ends of an attacker.   And these are only a couple of implications in relation to AI and cyber security.

Conclusion

I think it is important to acknowledge that my outlook on AI in general and in education is a largely positive one, however I think it is important that we are realistic and accept the existence of a balance; a balance between the benefits and the risks and challenges, where to make use of the benefits we need to be at least aware and consider the possible balancing drawbacks and risks. This post therefore is about making sure we are aware of the risks, with my next post digging into a few further risks and challenges.

As Darren White put it in his presentation at the Embracing AI event, “Be bold but be responsible”

Using AI: Preparing a conference presentation.

Last week I presented at a conference, speaking about AI in education, so what better way to create the presentation than to actually use AI tools.   So, I thought I would share some experiences of the process.

The main tool I made use of in preparing my presentation was Canva which I became aware of after seeing Darren White do a short demo of it at a meeting a couple of months ago.    Canva allowed me to get the ball rolling quickly and easily, using their Magic Create functionality to create the bare bones of my presentation including some nice graphics with the only requirement from my end being a single sentence as a prompt.

Now, the presentation needs to be something I was happy delivering, something that includes a bit of my identity, experience and outlook.   The Canva AI generated presentation, although it included a simple structure with some key points, it just wasn’t me.    But it did give me a good starting point including graphics after maybe 1 or 2 minutes of effort as opposed to the ½ hour it would likely have taken me to get to that point.

At this stage I set about moving slides around and adding new slides to build a structure for the session which felt a bit more like me and something I would present.  I note, I could have possibly refined my prompt and worked at it that way, however for me it was easier to work directly with the slides as I sought to align them with the thinking in my head, where sometimes it wasn’t the slides on the screen which were being reordered but the order in my head.    As I continued to work on this the presentation started to take shape.    Finding graphics and images for the slide was easy using Canva’s search tools, and from there it was easy to drop images straight into my presentation, and where the images werent quite right I could easily change them using the AI image editing tools in Canva.   I could easily remove elements of an image or change elements at will in order to get me the image I needed, which best suited the slide I was working on.

Additionally I made a little use of MidJourney and DALL-E2 to generate additional impacts, plus used ChatGPT for the development of additional text content and some of my script.   As with most technology usage, it was switching between different generative AI tools for different purposes and I suspect, I could have used even more apps if I felt appropriate;   I note I suspect the core of Canva, ChatGPT, MidJourney, DALL-E and maybe Bard should be good enough for most possible purposes.

Did the AI tools do the job for me?  

No, I was looking to create a presentation, where I would be presenting my thoughts and ideas.   Generative AI doesn’t (yet) have access to my thinking, where this thinking was constantly changing and evolving, with me refining my message as I built my presentation.    What generative AI does provide however is tools to make things easier, quicker and more efficient, so I could create the bare bones of a presentation in a couple of minutes rather than 30 minutes.   I could find images and quickly insert in moments rather than spending time searching via google or image tools, plus I could easily change images to suit my needs including changing their composition, all taking minutes rather than the hours this might have taken me in the past manipulating images in Photoshop.

Generative AI is a powerful tool to help me do the basics quickly allowing me to spend more time making the presentation I was creating a reflection of me, of a human being with experience, skills and a personal, albeit often changing, outlook on the world, on education and on technology.

Conclusion

Now I hope the presentation was well received but only the feedback will tell me that, although it did seem to go reasonably well.  I suspect through the use of generative AI tools I spent less time on the actual slide designs and more time on the actual content of the session and on what I was going to say.   Hopefully this made for a more engaging session.    I think the key takeaway as that AI, as it is now, doesn’t do things for you, it isnt close to replacing us humans, but it can make us more effective and efficient.  It makes me think back to that old quote about teachers and tech;   Technology wont replace teachers but teachers that use tech will replace those who do not.    In the world of generative AI the word “technology” can be replaced by what I believe to be one of the most disruptive technologies we have seen an decades;  AI.    The question therefore is how do we ensure the disruption is to the betterment of us as individuals, as groups and organisations, and society as a whole.  How do we use and work with AI, while being aware and conscious of any risks or drawbacks?

AI in education

Last week I presented at an event for schools, speaking in relation to AI in education.   As such I thought I would share the main points from my session.    Now the session itself was broken into three main sections, being some context, the short term implications and the implications beyond the short term. 

Context

The first point I made was on the current post ChatGPT discussion in relation to AI, and how AI itself isnt new.   In fact solutions we use in everyday life, such as Siri and Alexa, such as Google Maps and search and facial recognition all make use of AI.    Although generative AI began to be so easily accessible in ChatGPT in November of 2022, AI had been around for quite a while prior to this and had already formed a big part of our lives.   I also acknowledged that independent of whether schools do anything in relation to generative AI, including ChatGPT, our students will largely already be using these solutions;  An examination of internet traffic in my own school saw an increase in student daily use between Jan and March 2023, at which point we stopped tracking the data as generative AI started to appear in many different solutions.    And this is a key point, that if schools do nothing, and leave the use of AI solutions to chance, both in the hands of their teachers and their students, AI solutions will be used whether this is appropriate, safe or not.

In looking more broadly at AI, I would suggest that it represents a continuum between extremes of narrow AI solutions, which are capable of a single activity, up to the holy grail which is AGI (Artificial general intelligence) where the AI solution is capable of the broad spectrum of human activities.   Where we are currently is heavily down the left hand, and narrow AI side of things and I suspect we will be there for a while.   Looking at the responses of 350 AI experts in relation to when there will be a 50% chance of an AGI existing, 50% said this would occur in the next 40yrs however to increase the confidence to 90% of experts, you need to look out to 100 years time.   There is little consistency in the responses other that almost all of the experts predicted AGI would occur at some point in the future.

The short term?

Coming back to the present day and the challenges of generative AI, it is also important to acknowledge the challenges in education more generally.   A 2022 Teacher Wellbeing Index showed 59% had considered leaving the education sector during the year due to mental health and wellbeing pressures while 68% said volume of workload was an issue making them consider leaving the profession.   And it is here that maybe AI can start to help in addressing some of the workload issues, and through this hopefully reducing stress and pressures on mental health.   Through the use of AI, administrative burdens such as policy and resource creation, marking, parental reports, meeting minutes and reading of minutes, and many other takes can be lightened.  Now in all cases there still needs to be a human element to review, amend and improve AI generated content, but through humans working with AI tools we should be able to accomplish things quicker and easier.  

And generative AI isnt limited to the boring and administrative tasks, but can also help with the creative tasks, which in my case, I am not particularly strong at.   Being a poet, artist, musician, videographer and similar has never been my strong suit however with AI I can create things which previously may not have been possible.  Having asked ChatGPT for a poem on the impact of AI on education, for example, I was impressed by the output.

So what are schools to do?

I think the first thing is to acknowledge that AI comes with risks and benefits and that you cannot have one without the other.   As such the first thing a school needs to do is have a discussion and establish what their risk appetite is.    Does the school want to make the most of all the benefits of AI, and therefore is willing to accept some degree of risk, or is the school risk averse and therefore not willing to risk making use of AI?    Once risk appetite has been established it is now possible to set some ground rules an guidance for staff and students and this is where schools need to put an appropriate policy in place in relation to AI use.    This policy should cover things such as the legal implications, such as GDPR and data protection related, the ethical considerations and also how risks and benefits need to be considered.  Equally any policy in relation to AI needs to be aligned with the wider school values and vision, plus needs to be regularly reviewed and updated.    Now that a policy is in place, the next step is to speak to students and staff about AI, about the risks and benefits and about the policy requirements.   Once basic awareness is in place, you can then begin exploring and experimenting with AI solutions including the many generative AI solutions which are now so freely available.

Beyond the short term

Moving beyond the now and the short term where we can clearly establish some steps which schools should be taking, we move into the more unpredictable future, where the questions are more questions for education in general, rather than things schools can easily individually action. 

One of the first challenges or questions relates to originality and we are already seeing this in the actors strikes and in a number of copyright actions being taken against Generative AI vendors.   What does it mean to be original in the world of ready access to generative AI?   The JCQ guidance for example states that “All coursework submitted must be the candidates own work” but does that mean a student cant used generative AI to help or as a starting point, or a dyslexic student cant get the help of generative AI?   And to complicate matters even further, consider what being original might mean in the time of the romans;   Basically you couldn’t write or say the same as someone else, but at that time there were less people in the world and little was written down for comparison.   Now we live in a world with more people, writing more often and in more forms than ever before and that’s even before we consider how people might now use generative AI to create yet more content, much quicker than they did before.   So what chance do you we have in being original or presenting our “own” work?   This is a big challenge for education, especially given our current system uses coursework as an easy proxy for learning.

We also need to consider how the fundamental process of education, with students going to schools, colleges and universities may need to change.   A perfect example is how students unable to study calculus at school can now meet the requirements of CalTech in relation to calculus through the use of 2 online platforms.  Basically, students can prove their master online rather than in a school, then progress onwards to CalTech.    It is likely therefore that new avenues of education progression, access to education and whole new programmes may appear as we move forward, but how will this impact the schools and colleges we have today;  It may be in the future that they look significantly different to how they look today.

And the third of my future gazing thoughts, and the most significant in my eyes, is the access to online AI based tutoring for students.  This potentially provides every student with 1:1 access to support for their learning rather than the division of a teachers time across the whole class.    Additionally, this support is available 24/7/365.   This will likely impact on the core subjects at basic learning levels initially so basic maths, English and science in the first instance before broadening out to other subjects.   It may be this online personalised education which has the biggest impact, freeing up teachers to focus on some of the areas which have long needed time in the curriculum, but long gone without;   mental health, resilience, digital citizenship among other areas.   It will allow teachers to spend more time on the things which matter most about being human, having had time freed up by AI in relation to the things an AI can do reliably well.

Conclusion

AI is here now so all schools need to act as staff and students will use the available tools as they see fit if they do not receive any training or guidelines from the school.   As such all schools, in my view, should have a policy on AI use within the school as a minimum.   In terms of the potential of AI as is now available, I referenced my own use of Canva, ChatGPT, MidJourney and DallE-2 in the creation of my presentation and presentation content.

Looking out beyond the short term, things are not quite as certain with more questions than answers.  One thing I think we can be reasonably sure of is that AI’s impact on education will only increase and it may lead to some fundamental questioning of our current educational system and approaches to education.   And at some point in the future the singularity, where AI intelligence exceeds that of humans, will likely be reached and at that point I suspect the world, and education, may look very different to today.