AI and Marking

Given the concerns in relation to teacher workload, and you need only take a quick look at the Teacher Wellbeing Index reports to see this, it is clear that we need to find solutions to the workload challenge. Artificial intelligence (AI) is one potential piece of this puzzle, although it is by no means a silver bullet. The issue I have come across on a number of occasions is concern regarding some of the challenges of AI, such as inaccuracies. I avoid talking of hallucinations as it anthropomorphises AI; the reality is that a probability-based algorithm outputted something which was wrong, so why can't we simply say AI gets it wrong occasionally? And we are right to have concerns about where an AI solution might provide inaccurate information, especially where it relates to the marks given to student work or the feedback provided to parents on a student's progress. But maybe we need to stop for a moment, step back and look at what we do currently. Are our current human-based approaches devoid of errors?

I did a quick search on Google Scholar and found a piece of AQA research from 2005 looking at marking reliability, and the below is the first line of the conclusion section of the report:

“The literature reviewed has made clear the inherent unreliability associated with assessment in general, and associated with marking in particular”

We are not talking about AI-based marking here; we are talking about human marking of work. We are by no means the highly accurate marking and assessing machines we convince ourselves we are. And there are lots of other studies which point to how easily we might be influenced. I remember one study which focussed on decision making by judges where, when the timing of different decisions was analysed, proximity to a courtroom lunch break was found to have a statistical impact on judges' decisions. Like marking, we would expect a judge's decision to be independent of the time of the decision, and to be consistent, yet the evidence suggests this isn't quite the case. Other studies have looked at how the sequence in which papers are marked can have an impact on marking, so the marking of a paper following a really good or poor paper will be influenced by the paper which preceded it. Again, this points to inconsistency in marking. And if the same paper is presented to the same marker on different occasions over a period of time, different marks result; if we were so accurate in our marking, surely the marks for the same paper should be the same.

It seems clear to me that we are not as accurate in our marking and assessment decisions as we possibly think we are. I suspect calling out AI's inaccuracies is also easier than calling out our own human inaccuracy, as AI doesn't argue back or try to justify its errors, either to us or internally to itself. And this is where a significant part of the challenge lies: we justify and convince ourselves of our accuracy and consistency, where any objective study would show we aren't as good as we think we are. When presented with such quantifiable evidence, we then proceed to generate narratives and explanations to justify or explain away any errors or inconsistencies, so our overall perception of our human ability to assess and mark student work is that we are very good and accurate at it. AI doesn't engage in such self-delusion.

Conclusion

In seeking to address workload, and in considering the use of AI in this process, we need to be cautious of wanting to get things 100% right. Yes, this is our ideal solution, but our current process is far from 100% right, so surely we need only match our current accuracy levels, but with a reduced workload for teachers. Now it may be that the AQA research presents the answer, in that "a pragmatic and effective way of improving marking reliability might be to have each script marked by a human marker and by software". Maybe rather than looking for AI to do the marking for us, it is about working with AI to do the marking, using it as an assistant while ensuring human insight and checking remain part of the process.
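As a minimal sketch of what this human-plus-software pairing might look like in practice (the mark data and the tolerance threshold here are entirely hypothetical illustrations, not any exam board's actual process), the idea is simply to compare the two sets of marks and flag scripts where they disagree for a second human look:

```python
# Hypothetical sketch: pair each script's human mark with an AI-generated mark
# and flag any script where the two disagree by more than a chosen tolerance,
# so those scripts get a second human review. Data and threshold are made up.

def flag_discrepancies(marks, tolerance=3):
    """marks: dict of script id -> (human_mark, ai_mark).
    Returns the list of script ids needing a second human review."""
    return [
        script_id
        for script_id, (human, ai) in marks.items()
        if abs(human - ai) > tolerance
    ]

marks = {
    "script_001": (42, 44),  # close agreement: accept the human mark
    "script_002": (55, 47),  # large gap: route to a second marker
    "script_003": (30, 29),
}

print(flag_discrepancies(marks))  # only script_002 exceeds the tolerance
```

The human marker stays in charge of every final mark; the software's role is only to surface the scripts where reliability is most in doubt.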

And I also note that the above applies not just to the marking of student work but also to the use of generative AI in the creation of parental reports, another area of significant workload for teachers. Here too, an approach of accepting the frailties of our current process and then seeking to use AI to achieve at least the same level of consistency while reducing workload seems appropriate.

Maybe we need to stop talking about Artificial Intelligence and talk more about using AI to create Intelligent Assistants (IA)?

References:

Meadows, M. and Billington, L. (2005). A Review of the Literature on Marking Reliability. National Assessment Agency, AQA.

Reflecting on 2023/24

Another academic year draws to a close, so I thought I would share some initial reflections:

Artificial Intelligence (AI)

AI continued to be a big topic of discussion throughout 2023/24 and saw me speaking to school leaders and teachers, but also to school support staff, on its potential as well as the risks and challenges. I think, like a lot of tech, AI has those who are heavily engaged and supportive of its use, then a larger body who are unsure or reluctant, followed by those who are against its use. As such, at the moment the impact of AI, when viewed generally, may appear less than its potential or than what those positive about AI are proclaiming. For me it is about getting more staff experimenting and finding out how AI can benefit them in schools, often in small and simple ways rather than the flashy examples we often see. Personally, I am slowly introducing greater use of AI into my various workflows and in doing so seeing benefits in time saved but also, and possibly more importantly, in the quality of outputs. By using generative AI to assist me, AI and I (??) are coming up with more ideas, using a wider vocabulary range, creating better graphics and reducing errors, among other things. Two heads, even if one is a headless AI, are better than one!

Digital Divides

I have already mentioned AI and generative AI, but it represents yet another aspect of the issue of digital divides. Technology, and generative AI in particular, has such potential to enable greater creativity, address imbalances such as those related to SEND or to language proficiency, support collaboration and communication, and much more. But you need to have access to the technology, the software, the hardware, the internet bandwidth and more, with this access often the product of a digital strategy or plan, and the relevant budget and finance. You also need access to support and help, and to a culture which embraces the potential of technology and generative AI, at school but also at home and in your local community, among friends and colleagues. The number of possible divides between those that have and those that have not is only increasing, and the magnitude of each divide is only widening with each passing day, as those that have experiment, adapt and innovate, while those that have not are held back, continuing to work in long-established ways as the world around them changes.

Digital Future Group (DFG), collaboration and sharing

Generative AI advancement is just one indicator of the increasing pace of technology change, with a resulting impact on society more broadly. But how can we keep up with these dizzying advancements and changes? Can one person keep up with all the apps, the tools, the different approaches? For me the key here is in approaching the problem collectively and collaboratively rather than individually. For example, this year has seen the creation of, and my involvement in, the Digital Futures Group, which is all about sharing and networking. I am so blessed to be part of a group of professionals who operate across different contexts, in different roles and with different skills and experience, across the UK, and I am better for my involvement. I have also had the pleasure of being involved with the Association of Network Managers in Education (ANME) and also the Independent Schools Council (ISC) Digital Advisory Group. Each of these organisations has allowed me to network with some amazing people, gaining from their experiences, their values and views, their knowledge and maybe even their humanity; in a world of increasing use of AI, maybe our humanity, and sharing our humanity, is all the more important.

Wellbeing

I think another reflection on the academic year relates to wellbeing, and it may be that this jumps to mind due to a recent presentation at the ANME South West event in relation to wellbeing and IT support staff. This academic year has for me been a very difficult one personally, with a major personal upheaval hitting me in the first term, something I am not sure I have fully recovered from as we complete the final term. Actually, thinking about it, I don't think it is about recovery but more about adapting to the changed circumstances I now find myself in. This has forced me to stop and reflect, and in doing so identify a lot of personal practices and habits I have developed which have led to an unbalanced life. It took a significant life event to make me stop and reflect such that I am now trying to rebalance and establish new habits.

This has also got me thinking about the "be more efficient" narrative and what it means to be a good employee. I get the concept of being more efficient and therefore doing things quicker or using less energy, etc., but if all this leads to is being asked to do yet more things, surely this isn't a sustainable model. Laura Knight talked about resilience and how this isn't something we should aim for: it's ok to be resilient to short-term issues, but having to constantly exist in a state of resilience isn't really living and, in all honesty, isn't something we can do for any significant period of time. For me, "efficiency" may suffer similar problems. And if being efficient is driven by an organisational need for efficiency, does this mean that to be a good employee I need to be efficient and get more done than others? If so, does this not drive unsustainable hours, stress and workload issues? So maybe schools and other organisations need to consider what it is to be a good employee, with leaders modelling this and with the expectations clearly espoused. Maybe we also need to stop and identify what really matters, rather than constantly adding more tasks, more requirements and more considerations to our everyday roles.

Conclusion

This for me has been a year of difficulties but also of a greater sense of community and collaboration. AI, digital divides, networking and wellbeing are definitely the four themes which currently stand out for me from what has been a busy academic year, but then again, when are academic years in schools or colleges not busy? I am hoping that 2024/25 will be another positive year, and soon enough it will be upon us. I am going to post in the coming weeks a month-by-month review of some of my highlights for the year, including some photos, but for now let me just wish everyone a good holiday period, acknowledging that myself and my team, plus many others, particularly IT teams, will actually be working much of the holiday period on IT upgrades and many other things ahead of the new academic year. All the best to all.

AI and collaborative planning

The other day I was lucky enough to share my thoughts with a number of schools in relation to the use of Generative Artificial Intelligence within teaching and learning. My session included a brief introduction to Artificial Intelligence (AI) and Generative Artificial Intelligence, before talking about the benefits, risks and challenges. I also talked about a few generative AI tools.

In delivering the session I spotted that some of the later sessions on the event agenda included time for colleagues from across different schools to get together in their subject areas to work collaboratively, to share ideas, resources, etc. And it was at this point that I saw a clear potential use for generative AI.

We are in that final part of the academic year where many teachers are already considering the 2024/25 academic year and putting their planning into place. It has always been beneficial to do this collaboratively with other teachers, either in your school, across schools in a Multi-Academy Trust or school group, or even beyond. The bigger and more diverse the group of teachers sharing, the better. It all fits with my favourite quote, that "the smartest person in the room is the room". So, the more people we are working with collaboratively and sharing ideas with, the smarter we are collectively. But why does it need to be just about people?

At the event I spoke at, I suggested that staff might want to invite ChatGPT or Gemini into their collaborative planning sessions, getting some input and ideas from generative AI, or they might use generative AI to help further develop ideas they themselves have come up with. A generative AI tool would bring additional ideas from the very broad training data it will have ingested and therefore may propose things a group of teachers might not consider. Ok, so it isn't another person in the room, but it is another intelligence, or at least an intelligent assistant to the people in the room. Maybe the intelligent assistant, such as ChatGPT, makes each individual "smarter", and then the collaboration of the room again makes people collectively smarter still.
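To make this concrete, here is a small hypothetical sketch of how a planning group might assemble their existing ideas into a single prompt to paste into ChatGPT, Gemini or a similar tool (the subject, year group and ideas are invented example inputs, and the wording of the prompt is just one possible phrasing):

```python
# Hypothetical sketch: assemble a collaborative planning prompt which a group
# of teachers could paste into a generative AI tool, so the AI acts as one
# more voice in the room. All example inputs below are made up.

def build_planning_prompt(subject, year_group, ideas):
    """Combine the group's own ideas into a single prompt asking the AI
    for further suggestions beyond what the room has already produced."""
    idea_list = "\n".join(f"- {idea}" for idea in ideas)
    return (
        f"We are a group of {subject} teachers planning for {year_group}.\n"
        f"Ideas we already have:\n{idea_list}\n"
        "Suggest three further activities we may not have considered, "
        "with a one-line rationale for each."
    )

prompt = build_planning_prompt(
    "Geography", "Year 9", ["rivers fieldwork", "flood-risk map analysis"]
)
print(prompt)
```

The point is less the code than the habit: capture what the room already knows, then explicitly ask the AI for what the room has not yet thought of.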

Collaborative planning among teachers is now a largely established habit in schools, so maybe we can augment the practice by getting into the habit of bringing generative AI into these meetings, these discussions and these collaborative events. If a room of people is smarter than the individuals, then a room of people, supported by intelligent assistants and generative AI tools, is clearly smarter still.

So, if you are doing any collaborative planning in the weeks ahead, or at the start of the new academic year, are you going to involve Generative AI tools in the process?

AI Virtual Friends

Ever since ChatGPT made an appearance, various companies and individuals have sought to make use of generative AI to create the next big app, the next TikTok or Angry Birds Space, or other viral app. One area which I looked into was the growth in virtual friend apps. Initially, my thinking was that virtual friend apps might be a useful tool, particularly for shy students or those who need some guidance or support in relation to social interaction. A virtual friend might provide this. As I looked into the various apps available, however, I quickly became a little worried about some of the apps now available, including to children. My concerns seem to align with a news story I read in relation to a rise in school sexism being attributed to phones, plus I have had my own concerns in relation to the various algorithms built into social media platforms and how they seek to keep us glued to their platforms by feeding us the content which they think we want to see. And remember, they work out what we want to see from past history, where sexism and other bias was rife, and from what people access; as the saying goes, "sex sells".

Below I will just share some of my initial thoughts, and I have included some screenshots which some may consider distasteful; however, I include them to demonstrate some of the issues with the various apps I stumbled upon.

Gender Bias

One thing that was particularly obvious was the gender bias in the branding and advertising of the apps. From what I saw there was a bias towards apps designed to appeal to males using female imagery, although this may be due to apps identifying my gender from tracking info. Additionally, the imagery used, of both males and females, was very stereotypical from an appearance and race point of view. The images were far from representative of the population, pointing towards an unrealistic body image which could have a potentially significant impact on impressionable children. They also tended towards imagery which appeared to show individuals in their late teens and early twenties, which in turn might encourage children to experiment with these apps, even though they may not be the target audience, or at least the apps may not admit this is their targeted audience.

Encouraging potentially unacceptable behaviours

Another concern I had was with apps suggesting that the AI virtual friend would pander to the user's every desire and whim. From the point of view of young children seeking to explore boundaries, the provision of an AI friend that might encourage or support this beyond what is acceptable for their age is a concern. The world will always come with rules and boundaries, and it is important that students are aware of this, so virtual friends that model the breaking or non-existence of boundaries could encourage risky behaviours in the real world.

NSFW

The term NSFW, or Not Safe For Work, appeared within the adverts for a few of the virtual friend apps, and with one app there was even the mention of "Barely Legal". Where these apps have few if any safeguards in relation to use by children, this is of significant concern. It is also notable that the NSFW label isn't meant as a warning but as an enticement, and this might therefore encourage young children to try these apps, where the content or even the behaviour of the AI chatbot is not age appropriate.

Blurring the boundaries between AI and reality

One of my concerns in looking at the virtual friend apps was the potential for children to become confused and for the boundary between the fake virtual friend and real friends to blur, such that a child may act inappropriately in the real world based on activities which a virtual friend was willing to accept in the virtual world. It was therefore a little worrying to find one app actually using this in its advertising, questioning "what is real anyway?"

Conclusion

Now, I think it is very important to note here that I suspect there are some possible positives which could result from virtual friends, such as solutions which direct individuals to support services or provide support and advice themselves, or apps that provide friendship or moral support, but within reason. That said, there are equally apps which are clearly focused on baser human tendencies and which seek to make money by playing to this. These apps, although not necessarily aimed at children, could easily fall into the hands of children, with a resultant potential for harm. I didn't see any evidence of age verification in the apps I looked at.

Now, I didn't spend a significant time playing with any of the apps; I just looked to see what apps were available and how they were advertised via social media, so I cannot say much about their potential to hold the attention of users, including children. Based on my brief investigation, though, the discussion was rather bland. That said, a lot of the text messages I routinely share may be considered bland, and it may be that this changes as you provide more data and interact over a longer period, plus as the AI models themselves continue to develop and improve. I also note that there were so many of these apps, often with different names but from the same provider, and generally these apps were not the product of the big tech companies. I would suspect some of them may even be the product of cyber criminals seeking to harvest data.

If I were to pick out my main concern, it is the two-way nature of the communication. Up until this point, inappropriate content, such as porn, has very much been a one-way communication, being the consumption of content. With AI there is now the potential for this to become two-way, making it more akin to the normal interactions we may have in our day-to-day lives. For children I suspect this could make these apps all the more dangerous, as it may impact their norms and views as to what is acceptable.

Is doing more and efficiency our aim?

I have long been concerned by the "do more" and "be more efficient" narrative which seems to surround our everyday lives. We are constantly seeking to improve in all we do, which I think is a fair endeavour, but at what cost? This was recently brought further into focus as I started reading "Thank You for Being Late: An Optimist's Guide to Thriving in the Age of Accelerations" by T.L. Friedman, having found myself with an hour to spare while waiting to meet someone. I found myself that bit more content and relaxed as I used the extra hour to start reading the book and to engage in a bit of people-watching, watching the world rush about its business. But are these opportunities to stop and reflect reducing in frequency and length?

I look at teaching, for example, where I qualified as a teacher back in the late 90s. Looking at teaching now, there are so many more things to consider and to do, whether this relates to educational research, safeguarding, wellbeing, health and safety, neurodiversity, and much more. Now, all of these things are important, but each is another thing to consider, additional cognitive load, or an additional process or task which needs to be completed. Is there extra resource in terms of time or cognitive capacity to undertake these things? The answer is no. We simply fold them into our everyday workload, which invariably means that although our efforts are getting better, we are also doing more than we ever did before.

Now, generative AI can help a little here in that it can do some of the heavy lifting and free up some time for us. This particular post was edited with the help of AI, although it wasn't initially drafted with AI; I didn't draft it with AI as this is very much a brain dump of thoughts, and as yet AI solutions can't interface with the human brain, although that may become possible at some point. But in editing it with AI, I was able to proofread and make changes quicker than I would have been able to myself, therefore reducing the time taken to produce the post. The challenge, however, is that this all still exists against a backdrop of "do more", so the time I may have gained through the help of AI may simply be swallowed up by the next task I need to undertake to continue down the road of continual improvement. In effect, the net benefit of AI may be quickly nullified by our continued drive for efficiency and maximising output.

Circling back to teaching, this therefore means that generative AI may benefit teachers for a short period, but that eventually the benefits may simply dissolve in the face of ever-increasing requirements. But the benefits are so important: that extra time might allow for greater teacher reflection on teaching practice, student learning and student outcomes; it might support greater networking and sharing of ideas; and it might support improved wellbeing for teachers, which I would suggest may result in better teaching, better student outcomes and also better student wellbeing as the students see their teachers modelling good wellbeing practices. The time AI solutions provide might support us in spending more time focussing on what it means to be human and on "human flourishing".

Maybe we need to question the "continual improvement" and "efficiency" narratives, in that they need to exist in balance and cannot be assumed to be the "right" path. In relation to continual improvement, I often refer to MVP, the minimum viable product, and "good enough". In relation to efficiency, if I wanted to be more efficient, maybe I should stop taking breaks or work through my lunch. We also need to consider diminishing marginal gains, and maybe that is where we are now: a lot of the improvements we are bringing about are minor, iterative improvements, but at a cost in cognitive load, time and other resources which may outweigh the resultant benefit. The extra effort required for each incremental change remains the same, yet the resulting gain is reduced with each change. There is also the challenge of complexity, where more complex processes or systems often bring about greater risk of failure or greater reliance on particular people or tools. And I haven't even mentioned the speed of change, which the book I am reading refers to in its title, the "age of accelerations". All of this is happening quicker than ever before, which suggests the amount of time we have available to adapt to changes is decreasing.

I don’t have any answers here, so the purpose of this post is not to share a solution, but to pose a question.   I think I know the answer to the question, but not necessarily the answer to the problem it hints towards, but I think the best thing we can do is to start to talk about it and consider it.   So what is the question:

Can we keep adding to the things we need to think about, the processes and the complexity of our lives, or is there a limit?   

AI and general knowledge

I was recently musing on the benefits of general knowledge. A recent conference I attended involved Prof Miles Berry, where he talked about generative AI as being very well-read. I had previously seen a figure of around 2000 to 2500 years quoted in terms of the time it would take a human to read all of the content included in the training data provided to GPT-3.5, which in my view makes it very well-read indeed. So, I got to wondering if it is this broad base of knowledge which makes generative AI, or at least large language models, so potentially useful for us.

A doctor and AI

Consider, for instance, a medical practitioner. While their expertise lies in diagnosing and treating illnesses, plus their bedside manner and ability to interact with patients and other medical practitioners, their effectiveness as healthcare professionals hinges on a robust understanding of anatomy, physiology, pharmacology and medical ethics: domains that draw upon general knowledge. Similarly, an engineer relies on principles of mathematics, physics and material science to design innovative solutions to complex problems. As professionals, we are required to study and learn from this broad body of knowledge through degree programmes and other qualification or certification requirements. But we are inherently human, which means that just because we have learned something at some point, and successfully navigated a qualification or certification route, doesn't mean we will remember or be able to access this information at the point of need. If the medical practitioner therefore uses AI to assist them initially, they will be drawing on a bigger knowledge base than a human is capable of consuming, plus a knowledge base that doesn't forget, or fail to remember, content once learned. The medical practitioner will still apply their experience and knowledge to the resultant output, bringing their human touch to help address the challenges of generative AI (bias, hallucinations, etc.); however, the use of generative AI to assist would likely make diagnosis quicker and possibly more accurate.

My changing workflow

The above seems to align with my views in relation to workflows I have changed recently to include generative AI. Previously I might have known what I wanted to write and therefore got straight to writing rather than seeking to use generative AI. Now I realise that, although I know my planned outcome, something which generative AI cannot truly know no matter how much I adjust and finesse my prompts, generative AI brings to the table a huge amount and breadth of reading I will never be able to achieve. As such, asking generative AI is a great place to start. It will give you an answer to your prompt but will draw upon a far bigger reservoir of knowledge than you can. You can then refine your prompt based on what you want to achieve, before doing the final edits. It is this early use of generative AI which I think holds the main potential for us all. If we use generative AI early in our workflows we get to our endpoint quicker, plus it opens us up to thoughts and ideas we might never have considered, due to generative AI's broader general knowledge. I still put my own personal stamp on the content which is produced, making it hopefully unique to my personal style and personality, but AI provides me with assistance.

Challenges and Considerations

Despite its tremendous potential, the integration of generative AI into everyday life and specialised domains poses several challenges and considerations. Chief among these are concerns regarding the reliability and accuracy of AI-generated content, as well as issues related to bias, ethics and privacy. I do, however, note here that the issues of reliability, bias, ethics and privacy are not purely AI problems; they are actually human and societal issues, so if a human retains responsibility for checking and final decision-making, then the issue continues to be a human rather than an AI one.

Conclusion

Generative AI stands as a transformative force in harnessing and disseminating general knowledge, empowering individuals with instant access to information, facilitating learning and comprehension, and augmenting domain-specific expertise. It provides a vast repository of knowledge acquired from its training data, which can be used to assist humans and augment their efforts. I note this piece itself was produced with the help of generative AI, and some of the text and ideas contained herein are ones I may not have arrived at myself, plus I doubt I would have completed this post quite so quickly. So, if AI is providing a huge knowledge base and assisting us in terms of getting to our endpoint more quickly, plus opening up alternative lines of thinking, isn't this a good thing?

For education, though, I suspect the big challenge will be in terms of how much of the resultant work is the student's and how much is the generative AI platform's. I wonder, though, if the requirement is to produce a given piece of work, does this matter, and if AI helps us get there quicker, do we simply need to expect more and better in a world of generative AI?

I suspect another challenge, which may be for a future post, is the fact that generative AI is a statistical inference model and doesn't "know" anything, so is it as well-read as I have made out? Can you be well-read without understanding? But what does it mean to "know" or "understand" something, and could it be that our knowledge is just a statistical inference based on experience? I think, on that rather deep question, I will leave this post here for now.

ISC Digital Conference 2024

I was once again privileged to speak at the ISC Digital Conference the other week, this time as the vice chair of the ISC Digital Advisory Group as opposed to a member. It was, as it was last year, a very useful and interesting conference, combined with an iconic location in Bletchley Park. I scribbled many notes from the various sessions and therefore wanted to distil those into a couple of key thoughts below.

Prof Miles Berry was his usual barrel of energy in his presentation, putting forward lots of interesting points for consideration. Following on from the Oxford Academies Business Managers Group (OABMG) conference I attended the other week, Miles certainly was brave in his presentation, opting to do a live demonstration to illustrate the potential power of generative AI in helping with the challenges related to teacher workload. I have attended so many conferences which discuss AI, but it was so nice to actually see it in practice as Miles took a topic from the audience and worked through the creation of content for students, resources, lesson plans, etc., all in the space of minutes, while also highlighting that a teacher at their best could likely do better, but certainly not quicker. This clearly highlights the efficiency and workload benefits of generative AI, but also the importance of seeing genAI as an assistant to be paired with our own human strengths.

Neelam Parmar then presented on developing an AI curriculum, and one question stuck very much with me: what is AI?    This stuck with me both because of the inconsistency in the use of the term and related terms (machine learning, deep learning, etc.) and because of the broader question it hints at: what is intelligence?    Can we accurately and consistently define what we mean by intelligence?    And if we cannot, can we truly be confident in creating an intelligence, an artificially created intelligence or AI?    It's a bit deep, but maybe this is a question we need to consider, as it also hints towards the differences between human and artificial intelligence, and therefore the benefits and drawbacks of each.   I do often wonder how different an AI is to human intelligence, in terms of how the human brain really works in processing the huge amount of "data" in the experiences and information we consume throughout our lives.   Is the key difference that of emotion, and the physical nature of our intelligence in relation to our physical existence?

The AQA presentation was next up in terms of ideas which stuck with me, helping me feel a bit more positive about where we are with exam board engagement regarding the use of AI in assessment and in schools.  I will admit to being disappointed that the Polish and Italian trial has been pushed back further to 2027, which I think is too far away; however, I get that it takes everyone to be on board to move this forward, so there are hoops exam boards must jump through.  That said, there were definitely positive noises in relation to analytical data on outcomes, with school data being pulled in and resulting information pushed back.   This goes to reducing the administrative burden, but also to more effective use of the vast amounts of data schools gather.  It was also good to hear of AQA seeking to share a diagnostic tool for Maths; tools like this might just help us find the best way forward in relation to adaptive, diagnostic and even summative testing.

I once again enjoyed hearing Tom Dore talk about esports and the potential benefits for schools adopting it.   It aligned so well with the earlier presentation which highlighted some of the softer skills the World Economic Forum has identified as important for the future.  Esports is so much more than simply gaming: it involves communication, leadership, resilience, problem-solving and much more, plus it often engages students who may otherwise be less engaged.   It was also good to hear about Amy-Louise Cartwright's approach in her school and how they, albeit in the early stages of development, have already made progress and have plans for the future.  I loved the esports suite they have created; although we have been involved in esports here for a while, we have been using our normal IT labs, albeit with upgraded PCs capable of supporting the relevant esports games.

Conclusion

The ISC digital conference, like so many other conferences, is about getting schools and school staff together and sharing.   This year's conference did exactly that, and it let me get my piece in as well, which was nice.   It was also nice to be at Bletchley Park and its wonderful auditorium.   I will note my train ride to and from the venue was far from straightforward, but the trek was worth it, and I look forward to seeing where we stand in a year's time, at the 2025 conference.   Will we have progressed significantly, be asking the same questions, or will the challenges have changed or even been addressed?   Only time will tell.

Is Gen AI Dangerous?

I recently saw a webinar being advertised with "Is GenAI dangerous?" as the title.   It's an attention-grabbing headline; however, I don't think the question is particularly fair.   Is a hammer dangerous?   In the hands of a criminal, I would say it is, and in the hands of an amateur DIYer it might also be dangerous, both to the person wielding it and to others through the things the amateur might build or install.     Are humans dangerous, or is air dangerous?   Again, with questions quite so broad the answer will almost always be "yes", qualified with "in certain circumstances or in the hands of certain people".    This got me wondering about the dangers of generative AI and some hopefully better questions we might ask in relation to generative AI use in schools.

Bias

The danger of bias in generative AI solutions is clearly documented, and I have evidenced it myself in simple demonstrations.   However, we have also more recently seen the challenges where companies seek to manage bias and this results in equally unwanted outputs.   Maybe we need to accept bias in AI in much the same way that we accept some level of unconscious bias in human beings.    If this is the case, then I think the questions we need to ask ourselves are:

  1. How do we build awareness of bias both in AI and in human decision-making and creation?
  2. How do we seek to address bias?   In generative AI solutions, I think the key here is prompt engineering: avoiding broad or vague prompts in favour of more specific and detailed ones.
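
As a small illustration of point 2, consider how a vague prompt can be tightened by spelling out audience, format and the balance you want in the output. The prompt strings below are purely illustrative examples I have made up, not tested recipes, and any genAI tool would be expected to respond differently to each:

```python
# Illustrative only: contrasting a vague prompt with a more
# specific one that constrains audience, format and balance.
vague_prompt = "Write about famous scientists."

specific_prompt = (
    "Write a 200-word summary of three influential scientists, "
    "deliberately including scientists from a range of genders, "
    "nationalities and time periods, pitched at secondary school "
    "students, formatted as a bulleted list."
)

# The specific prompt carries far more constraining detail,
# which is what helps steer the output away from default biases.
assert len(specific_prompt) > len(vague_prompt)
```

The point is simply that the more of the decision-making we put into the prompt, the less we leave to the model's defaults, and it is in those defaults that bias tends to surface.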

Inaccuracy

I don’t like the term “hallucinations”, which is commonly used when AI solutions return incorrect information; I prefer to call it an error or inaccuracy.   We know that humans are prone to mistakes, so this is yet another similarity between humans and AI solutions.   Again, if we accept that there will be some errors in AI-based outputs, we find ourselves asking what I feel are better questions, such as:

  1. How do we build awareness of possible errors in AI content?
  2. How do we build the necessary critical thinking and problem-solving skills to ensure students and teachers can question and check content being provided by AI solutions?

Plagiarism

The issue of students using AI-generated content and submitting it as their own is often discussed in education circles; however, I note there are lots of benefits in students using AI solutions, particularly for students who experience language or learning barriers.    I also note a recent survey which suggested lots of students are using generative AI solutions anyway, independent of anything their school may or may not have said.    So again, if we accept that some use of AI will occur, and that for some this might represent dishonest practice but for many it will be a way to level the playing field, what questions could we ask?

  1. How do we build awareness in students and staff as to what is acceptable and what is not acceptable in using AI solutions?
  2. How do we explore or record how students have used AI in their work so we can assess their approach to problems and their thinking processes?

Over-reliance

There is also the concern that, due to the existence of generative AI solutions, we may start to use them too frequently and become over-reliant on them, weakening our ability to create or carry out tasks without their aid.   For me, this is like the old calculator argument: we need to be able to do basic maths even though calculators are available everywhere.    I can see the need for some basic fundamental learning, but with generative AI so widely available, shouldn't we seek to maximise the benefits it provides?  So again, what are the questions we may need to ask?

  1. How do we build awareness of the risk of over-reliance?
  2. How do we ensure we maximise the benefit of AI solutions while retaining the benefits of our own human thinking, human emotion, etc?   It’s about seeking to find a balance.

Conclusion

In considering better questions to ask, I think the first question is always one about building awareness, so maybe the “Is GenAI dangerous?” webinar may be useful if it seeks to build relevant awareness of the risks.  We can’t spot a problem if we are not aware of the potential for such a problem to exist. The challenge, though, is the questions we ask post-awareness, the questions which try to drive us forward: how we might deal with bias where we identify it, how we might ensure people are critical and questioning such that they spot errors, how we evidence student thinking and processes in using AI, and how we maximise both human and AI benefits.

In considering generative AI, I think there is some irony here, in that my view is that we need to ask better questions than “Is GenAI dangerous?”.    In seeking to use generative AI and to realise its potential in schools and colleges, prompt engineering, which is basically asking the right questions, is key.   So maybe, in seeking to assess the benefits and risks of GenAI, we need to start by asking better questions.

OABMG Conference

I was lucky enough to be invited to speak at the Oxfordshire Academies Business Managers Group (OABMG) annual conference earlier in the week, where I spoke on AI in education and the possible impact and implications for school business managers.    It was a lovely event and I really enjoyed Sarah Furness, the keynote speaker; however, sadly I had to leave following my session in order to catch a train, one of a number of trains needed to get me to and from the event.

Be brave

Sarah was both insightful and entertaining and, to be honest, I could likely write a whole blog post just on the stories she shared, so let me just summarise my key takeaways from her presentation.    Her key message, which resonated with me, was the need to be brave. This aligns with the values of my school and is so very important where we have technology advancing at such a pace but regulation lagging so far behind.   We have no choice but to be brave, especially given both students and staff are already experimenting with the use of AI.  We need to be brave in engaging, brave in experimenting, and brave in accepting where things don’t go quite as planned, while learning from these experiences.   The need for sharing, asking difficult questions and accepting challenges also aligned with my thinking; looking to AI in education, if we are to find our way with AI in schools, I think this all rings very true.  We need to share our thoughts, and both challenge and accept challenge from others, if we are to move forward.    Sarah’s talk was about leadership, using her context as a military leader and pilot; maybe this will be key in the use of AI in schools, the need for effective, brave leaders who value and encourage diversity, sharing and challenge.

AI in education

Going into my presentation, my key aim was to discuss AI in education and some possible uses for school business leaders.   I don’t have all of the answers and, to be honest, I don’t feel anyone has all the answers when it comes to AI and education, as AI is advancing at a rapid pace while education has changed little and is under both funding and workload challenges.   That said, as I shared in my presentation, “The smartest person in the room is the room”.   This David Weinberger quote is one of my favourites and is often used, as it highlights the need to discuss and share; in doing so we hopefully engage others to think about the issue, in this case AI in schools, and collectively our thinking, ideas and experience are enhanced.

Now you can view my presentation slides here if you are interested.   

At the end of my presentation, a couple of questions were raised which I would like to pick up on, namely school engagement with AI in education, policy, and regulation.

School Engagement in AI

I would like to draw attention to the article in the Express which highlighted that 54% of the students they surveyed were using AI in relation to their homework.  The key thing here is that students are using AI independently of whether schools have considered or talked about AI.  And it isn’t just students: you will also likely have staff, both teaching and support staff, who are using AI.   The AI genie is out of the bottle, and attempts to block it will inevitably be futile, so in my opinion it is key that we engage with the use of AI, that we talk with students and staff about it, and that schools experiment and share.    But the fact AI is already here isn’t the only reason to use it in education.   We talk about the need to support individual students, differentiation, English as a second language and SEND barriers to learning; all of these can be addressed to some extent through the use of AI tools.   I will note here that the use of AI tools may also increase some challenges, such as digital divides, but that was a key part of my presentation: talking about the risks and challenges first, as we need to use AI, but only from a position of awareness of the risks and challenges.

Policies

Linked to the above, I think it is very important that schools put in place an AI policy if they haven’t already done so.   This allows the school to set out its guardrails in relation to the use of AI in the school.  There is a brilliant template for this, created by Mark Anderson and Laura Knight, which can be found here.   Looking to the future, I suspect the AI policy might eventually be absorbed into the IT acceptable use and/or academic integrity policies; however, for now, while AI use in schools is so new, I think having it as a standalone policy makes sense.

Regulation

There will need to be some form of regulation in relation to AI tools, including their use in education; however, we have already seen that the technology is developing very fast while regulation lags far behind and is slow to adapt.   As such, I think we should hope for and support some form of regulation to protect people, including our staff and students, and their data, but I don’t believe we can wait for this to happen.    AI is already here, and students and staff are likely using it.  We can’t stop this, so I think we need to run with it, to try and shape its use and, hopefully, in doing so shape the regulation which follows.  This will mean making risk versus benefit decisions, but seldom do we see anything beneficial that is without risk.

Conclusion

The OABMG conference was enjoyable, even though my visit was brief.   It was good to share some thoughts on AI in education, and I hope those in attendance found the session useful.   My two key thoughts from the event are the need to be brave, remembering we learn most from our mistakes, and the need, in this ever-busy and complex world, to share, as collectively we are all better for it. These are two things I will try to do more actively in future.

Thinking about thinking (with AI)

Artificial intelligence (AI) is definitely the big talking point in educational circles at the moment.  You just need to look at the various conference programmes and you will almost always find at least one session touching on AI or generative AI.   A lot of the discussion is focused on the possible benefits or the risks associated with AI, and less so on the practical applications and the need to experiment.   It was in thinking about the practical side of things, looking at tools like ChatGPT, Diffit, Gemini and Bing Image Creator among others, that I got thinking about how AI might link to metacognition.

Learning about learning

The idea of learning about learning, of metacognition, has been around for quite some time.    The thinking is that if we educate students about how they learn and get them thinking about their learning preferences (eek, I almost said learning styles there!), they can make informed decisions about their learning and hopefully become better learners.   It seems to make sense.  But how does this link to AI and generative AI?

Learning with a learning assistant

I think the key issue here is how we see AI in terms of the learning experience.   Is it simply a tool to spark ideas?   Is it a tool to review content?   Is it a tool to surface information?   I would suggest it is all of these things and more, and in the case of generative AI it can operate as an assistant to teachers or to students.   It is definitely more than a bit of technology or simply a tool, as I suspect in its use it shapes our thinking and our processes, much as simple tools like the hammer shaped human thinking and processes in the past.    We also need to consider that the process of working with generative AI (GenAI) is often iterative, taking the form of a dialogue between the user and the genAI solution.  The user submits an initial prompt, to which the genAI responds.   The user then reviews the response against what they were hoping for and, if they are anything like me, realises they haven’t been specific enough, so they provide further directives to the AI, which in turn returns a new, hopefully better response; the dialogue continues until an output satisfactory to the user is reached.     Some of this dialogue can be sped up through the use of various prompt frameworks, such as the PREPARE framework shared by Dan Fitzpatrick; however, even then it is still likely to be a dialogue, with Dan also providing a framework for the review and iterative part of this process, his EDIT framework.
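
The iterative prompt-review-refine dialogue described above can be sketched in code. This is a minimal sketch only: the `generate`, `is_satisfactory` and `refine` functions below are hypothetical stand-ins I have invented for illustration, not a real genAI API, and in practice the review and refinement steps are human judgements rather than simple checks.

```python
# A sketch of the iterative prompt/review dialogue.
# `generate` is a hypothetical stand-in for a real genAI call.
def generate(prompt: str) -> str:
    # Stub: a real implementation would call a genAI service here.
    return f"Response to: {prompt}"

def is_satisfactory(response: str) -> bool:
    # Stub review step: in practice the user judges the output.
    # Here we pretend the user wants material pitched at Year 9.
    return "Year 9" in response

def refine(prompt: str) -> str:
    # Each pass adds more specific directives, as a user would
    # after realising the first prompt wasn't specific enough.
    return prompt + " Target it at Year 9 students."

def prompt_dialogue(initial_prompt: str, max_rounds: int = 5) -> str:
    prompt = initial_prompt
    response = generate(prompt)
    for _ in range(max_rounds):
        if is_satisfactory(response):
            break
        prompt = refine(prompt)  # add detail and try again
        response = generate(prompt)
    return response

result = prompt_dialogue("Write a lesson summary on photosynthesis.")
```

The loop structure mirrors the dialogue: generate, review, refine, repeat. Frameworks like PREPARE and EDIT effectively give structure to the `refine` and `is_satisfactory` steps, which in real use are where the human thinking sits.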

Meta AI supported cognition?

If we are looking to prepare students to work with generative AI as their always-available assistant, I think we also need to start exploring with students how best to use it.   Part of this is about looking at their learning and how their learning processes might differ with AI.   I suppose it’s a bit like all your learning being done with a partner, with another human being; the nature of the interaction, being very much a dialogue, makes this comparison feel all the more apt.   You would need to consider their approach, their emotions, social interaction, etc.   Now, an AI doesn’t have emotions or the social side of things, or at least not yet or as we currently know these to exist, but it does have its own approach, its own biases, its own strengths and its own weaknesses.  So if we are using, or encouraging students to use, AI in learning, I think we need to work with students to unpick the processes rather than simply focusing on the tools.  If I am looking for ideas and to be creative, how best do I use AI?   If I am looking to review and improve my work, how best do I use AI?    If I want to use AI for research, how best do I do this?    Is this where meta AI-supported cognition comes in?

Conclusion

In relation to technology use in education, I have always said it isn’t about the technology but about what you are seeking to achieve.   With AI, it might be using GenAI to produce better coursework, or to give you a starting point or some new ideas.    But if we think beyond the short-term goals, isn’t it about being able to better use AI to suit our needs as they arise?   And, as such, do we then need to spend time with students unpicking the how of their use of GenAI, understanding the processes, what works and what doesn’t, in order to get better at working with our newly found AI assistant?

Might teaching about meta AI-supported cognition become a thing?