
It was around a year ago that I had the opportunity to speak at a Keynote event alongside Laura Knight, Dr Miles Berry and Rachel Evans, my fellow ISC Digital Advisory Group colleagues, so it was with some anticipation that I looked forward to involvement in another Keynote event, again including Laura and Rachel, but this time also including my friend Bukky Yusuf as well as Dina Foster and Dale Bassett. As with 2023, the event focused on AI in education, and included an opportunity for me to speak on AI literacy for students as well as on the potential for AI to help with efficiency and workload.
So, the opening speaker was Bukky, delivering an introduction looking at what AI actually is and at some of the terminology and language which surrounds AI. She highlighted that AI isn't new and is something which was being discussed all the way back in the 1950s, plus that, even before ChatGPT burst onto the scene in late 2022, AI was already something we were using in our daily lives in the likes of Google Maps. It was interesting as she discussed narrow AI, which is where I think we are now, but also Artificial General Intelligence (AGI), which some predict will be achieved by 2040, and Artificial Super Intelligence (ASI), the advancement and the scary situation that would follow AGI. If AI achieved AGI, the issue is that it can iterate and evolve far quicker than we can as humans, so once AGI is reached its self-advancement quickly moves beyond human capacity and understanding towards ASI. We potentially become what ants are to human beings. Now I hold to the hope here that we are pretty poor at predicting the future, that this is still a couple of decades away and that we will hopefully put some guard rails and mitigation measures in place to ensure we are prepared for this between now and then.

Next up was Laura, who, as always, delivered a thought-provoking session which stimulated such broad thought in relation to AI and education. I loved her discussion of technology strategy metaphors and the dangers of a hot air balloon, fireworks or jet fighter approach, each with its advantages and its drawbacks. I sense I try to balance the hot air balloon and the jet fighter, seeking to have an overview while also trying to keep a sense of momentum and direction. I think I am past my days of seeking the shiny new thing, the fireworks, although I will note that I certainly did fall into this trap in my early teaching and EdTech days. Laura also touched on the need to be creative and yet also be an engineer, which I think is an interesting challenge as it requires two different types of thinking.
My first session of the day related to developing AI literacy within students, but in fact much of what I said was equally applicable to staff as well as students. I outlined some of the knowledge which I feel is important, including knowing of the benefits but also the risks and challenges as they relate to the use of AI. Next I moved onto the skills side of things, and how all the discussion of prompt engineering and the like paints use of AI as being complex and technical, when in fact my recent use of Copilot involved me simply talking to my laptop. The barrier to entry, to actually having a play with AI, is so very low that anyone can do it.
In terms of skills I highlighted the need for students, and staff, to be able to think critically and to review and assess content presented to them to identify what is fake or real. Given the speed with which posts on social media become viral, and the potential for AI to be used to create or manipulate content, whether it is text, image, audio or video, the need for critical thinking has never been more key. I also pointed to the need to consider the ethics in relation to AI tools, using the example of Star Wars and the post-death use of James Earl Jones' voice and Peter Cushing's likeness. Is this ethical? How do we seek consent or permission? Are there risks of misuse? Data literacy was my next focus, given the fact that AI relies on data and we therefore need to get better at understanding what data is gathered, how it is used, how data might be inferred and more. One of the attendees also raised the issue of the environment, and on reflection, I should have included a slide on this, on the need to consider the environmental impact of the use of genAI.
After lunch the next session was another Laura session, this time looking at the safeguarding implications of AI. This session went into some of the murkier implications of AI, including the use of AI imagery and maybe even chatbots to support criminals engaged in sextortion. She talked about the shame that people feel when they get caught up in technology-enabled safeguarding incidents, such as sextortion, and I think the emotional side of things is very important to remember and to consider. She also raised the issue of some students possibly withdrawing and relying on AI as their friend and confidant, and the implications of this from a privacy point of view as well as from a safeguarding risk point of view, where an AI could guide a child towards inappropriate or even harmful behaviour. The challenge of privacy was also covered, acknowledging that we humans are pretty poor at this, often agreeing to app terms and conditions without any consideration of what we have actually agreed to, a challenge that is becoming more and more difficult in my view as we share more information with more apps and services.
My final session of the day focussed on AI and efficiency and also on the possibility it can help to address the current workload challenges in education. Now Bukky bigged this session up as the "unicorn" session, so my first step before starting the session was to use genAI to get a nice image of a dog with a unicorn horn on its head; I simply don't think anyone has the answers here, or the unicorn, it is just a case of prompting discussion and sharing ideas. My session was very much about getting attendees to collaborate and share their own ideas and experiences. I have long said the smartest person in the room is the room, and this session focussed on exactly that, getting the audience themselves to share their thoughts and ideas before I then went on to share some of mine. One of the highlights of the event as a whole was an attendee picking up on my comment regarding the need to build networks and communities, suggesting that the attendees were themselves now a network and therefore it would be worth seeking to find a way to continue discussion beyond the event. I very much hope this is something we can get off the ground, as I truly believe our best chance to realise the potential of AI, or maybe just to survive the fast-paced technical change, is to work together and to actively share and discuss issues and ideas.

The event then closed with a panel session involving myself, Laura, Rachel and Dale. And before you wonder whether I suffered my usual travel woes, let's just say I stupidly decided to climb the stairs at Russell Square tube station, clearly missing the warning sign. Approximately 170 spiral staircase steps later, I very nearly didn't make the conference the following day!
It was a long but very useful day with lots of things to go away and think on. I also made use of Otter to record my own presentation, with a hope to use this to improve my preparation and my delivery for future events. I am also hopeful that maybe the attendees will indeed engage with sharing and discussion beyond the event itself, as this is the most likely way of ensuring the discussions and sessions shared bring about the positive change myself and the other presenters would love to see.