EdExec Live – London

I recently spoke at the EdExec Live event, talking about school IT strategy, and I thought I would share some of my somewhat rambling thoughts from the day. One of my opening slides related to Star Trek and what appears to be an iPad-esque device in Captain Picard’s hands, back in a 1992 episode of The Next Generation. Star Trek TNG is set in the 24th century, yet the iPad made its appearance in 2010, in the 21st century. This shows how poor we are at predicting the future, yet it also hints at the pace of technological change.

Tech is here and here to stay

We just need to look at our lives today to see that technology is a key part of them. On my way to London for the EdExec event I used digital train tickets, listened to music via Spotify, worked on some blogs using my MS Surface and engaged in social media discussion. I also used Google Maps to help me navigate to the event venue. Technology is now an essential part of our everyday lives, and schools are no different. When I qualified as a teacher, back in the late 90s (and that does make me feel old!), you put your lesson content on a roller blackboard or on acetates for display via an OHP, and you recorded student attendance manually in a register. Now all of these things involve technology: recording attendance on your school’s Management Information System (MIS), putting digital content on your digital panel, smartboard or projector. You also use digital tools for safeguarding, for communication and for much more. All of our schools are digital, to some extent, already.

Strategy

And if schools are digital, there should be some sort of plan to manage the training needs of staff, sustainability into the future, renewal and updates, and so on. Although the technology is already here, we need a plan to make the situation sustainable into the future. Beyond the basics, if you are looking at significant innovation, such as rolling out a learning platform or 1:1 devices for the first time, you need a detailed strategy and plan to get all the basics in place, such as infrastructure, training and support. After this, once technology is largely embedded and mature, such as at Millfield where 1:1 devices have been in place since 2012, Office 365 has been phased in since 2019, and Teams/OneNote from 2022, there isn’t the same need for a distinct technology plan; technology now takes its lead from the broader school vision and strategy. So the need for a distinct technology strategy varies with the technology maturity of the school. I also note that as you go down the iPad route over Chromebooks or Windows laptops, or Office 365 rather than Google Workspace for Education, and as these become embedded, it becomes increasingly difficult to change path.

A key issue in all of this decision making is that it is not about the technology, the shiny new Chromebooks or Google Classroom, but about the why and what you hope to achieve. Is it about improving access for students with SEND, or for students with EAL? Is it about supporting the development of soft skills such as creativity, communication, collaboration and problem solving? Why are you seeking to use technology and what do you hope to achieve? Once you have this, you can then look at which technology or technologies are the best fit for your requirements.

Balance

I also highlighted the importance of balance during my session. Everything we do, even for good reasons, will have a negative implication. We ban phones and students will still use them, and we lose an opportunity to teach students about appropriate use of their devices. We buy 1:1 devices and we increase the safeguarding risks, as students now have their own personal devices, while possibly also having a wellbeing impact due to increased screen time. There is a constant balance and very few, if any, binary situations where something is purely good or bad; the reality is that technology tends to be both. The key, therefore, is to consider the options along the good vs. bad continuum and then to work out what works for your school, where on the continuum you will sit, and what your risk appetite is.

Some of the future

I also spent a little time looking towards the future, while acknowledging that we are poor at predicting it, so I opted for advancements which are almost here, or here but not yet fully implemented. This clearly had to include mention of generative AI (GenAI) and how education and schools need to adapt to this new technology, which both students and staff are already using. If GenAI gives all students the ability to create coursework, homework and other content with a broader vocabulary, independent of their primary language, of any special educational needs or disabilities, and of their creative thinking, isn’t this a good thing? But if so, how do we continue to grade student work and award them their GCSEs and A-Levels? Or maybe we no longer need to rank and order students in the way we used to? There is the potential for a broad shift in education resulting from GenAI, but I am also concerned that there is potential to widen the digital divides which already exist.

Linked to the above is, hopefully, a shift towards digital exams rather than sitting students in an exam hall once a year with paper and pen. And I am not talking about the “paper under glass” exams planned for the coming years, where the paper exam is simply turned into an identical digital one. I am thinking more about adaptive testing, allowing students to take exams as and when they are ready, and allowing schools to manage hundreds of students through a maths exam, for example, where they don’t have that number of devices and therefore have to put students through in batches. It may even be that students don’t sit these exams in school at all, but can engage with them anywhere and at any time.
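To illustrate the adaptive idea, here is a minimal sketch of how question difficulty might step up or down based on a student’s previous answer. This is purely illustrative; the difficulty scale, starting level and the functions `next_difficulty` and `run_test` are my own assumptions, not any real exam platform’s logic.

```python
# Minimal sketch of adaptive question selection (hypothetical, illustrative only).
# Difficulty moves up after a correct answer and down after an incorrect one.

def next_difficulty(current: int, was_correct: bool, lowest: int = 1, highest: int = 5) -> int:
    """Return the difficulty level for the next question."""
    if was_correct:
        return min(current + 1, highest)
    return max(current - 1, lowest)

def run_test(answers: list, start: int = 3) -> list:
    """Trace the difficulty levels a student sees for a sequence of right/wrong answers."""
    levels = [start]
    for correct in answers:
        levels.append(next_difficulty(levels[-1], correct))
    return levels

# A student who answers right, right, wrong, right:
print(run_test([True, True, False, True]))  # [3, 4, 5, 4, 5]
```

Each student effectively follows their own path through the question bank, which is what makes batching students through on a limited number of devices workable.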

And in the way of balance, with GenAI, with a shift towards digital exams, and with more digital time generally, we need to consider the risks: addictive social media content, the data protection implications of ever greater volumes of data being shared, particularly where that data relates to young people, fake news, and the influence and manipulation of people via social media and other platforms.

A solution?

I finished my session with my favourite quote, which I have been using for years, from David Weinberger: “the smartest person in the room is the room”. In a world where technology is moving so fast, and where education has a tendency to move much slower, our best chance to maximise the positive impact of technology, while minimising and controlling the negatives, is to focus on the power of the collective. Working collectively, sharing ideas, what works but also what doesn’t, will allow us all to be better than any of us can be individually. Our biggest strength is in networks, in collaborating and in sharing. The bigger the room, the smarter we all are.

Is Gen AI Dangerous?

I recently saw a webinar being advertised with “Is GenAI dangerous?” as the title. An attention-grabbing headline, but I don’t think the question is particularly fair. Is a hammer dangerous? In the hands of a criminal, I would say it is, and in the hands of an amateur DIYer it might also be dangerous, both to the person wielding it and to others through the things the amateur might build or install. Are humans dangerous, or is air dangerous? With questions quite so broad, the answer will almost always be “yes”, qualified with “in certain circumstances or in the hands of certain people”. This got me wondering about the dangers of generative AI and some hopefully better questions we might ask in relation to generative AI use in schools.

Bias

The danger of bias in generative AI solutions is clearly documented, and I have evidenced it myself in simple demonstrations. However, we have also more recently seen the challenges that arise where companies seek to manage bias and this results in equally unwanted outputs. Maybe we need to accept bias in AI in much the same way that we accept some level of unconscious bias in human beings. If this is the case, then I think the questions we need to ask ourselves are:

  1. How do we build awareness of bias both in AI and in human decision-making and creation?
  2. How do we seek to address bias? In generative AI solutions, I think the key here is simply prompt engineering: avoiding broad or vague prompts in favour of more specific and detailed ones.
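To make the vague-versus-specific point concrete, here is a small sketch contrasting the two kinds of prompt. No real AI API is called; the helper `build_specific_prompt` and the example details are my own illustrative assumptions, and the point is simply that a detailed prompt spells out choices a vague one leaves to the model’s biases.

```python
# Illustrative only: a vague prompt versus a more specific one.
# A vague prompt leaves gender, setting, activity, etc. to the model's defaults.

vague_prompt = "Draw a picture of a scientist."

def build_specific_prompt(subject: str, details: dict) -> str:
    """Assemble a prompt that spells out the details a vague prompt would leave implicit."""
    detail_text = ", ".join(f"{key}: {value}" for key, value in details.items())
    return f"Draw a picture of {subject} ({detail_text})."

specific_prompt = build_specific_prompt(
    "a scientist",
    {"gender": "female", "setting": "a school chemistry lab", "activity": "demonstrating an experiment"},
)
print(specific_prompt)
# Draw a picture of a scientist (gender: female, setting: a school chemistry lab, activity: demonstrating an experiment).
```

The same idea applies to text generation: the more of the decision you state explicitly, the less room there is for the model’s defaults, and its biases, to fill the gap.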

Inaccuracy

I don’t like the term “hallucinations”, the term commonly used when AI solutions return incorrect information, preferring to call it an error or inaccuracy. We know that humans are prone to mistakes, so this is yet another similarity between humans and AI solutions. Again, if we accept that there will be some errors in AI-based outputs, we find ourselves asking what I feel are better questions, such as:

  1. How do we build awareness of possible errors in AI content?
  2. How do we build the necessary critical thinking and problem-solving skills to ensure students and teachers can question and check content being provided by AI solutions?

Plagiarism

The issue of students using AI-generated content and submitting it as their own is often discussed in education circles, however I note there are lots of benefits in students using AI solutions, particularly for students who experience language or learning barriers. I also note a recent survey which suggested lots of students are using generative AI solutions anyway, independent of anything their school may or may not have said. So again, if we accept that some use of AI will occur, and that for some this might represent dishonest practice but for many it will be a way of levelling the playing field, what questions could we ask?

  1. How do we build awareness in students and staff as to what is acceptable and what is not acceptable in using AI solutions?
  2. How do we explore or record how students have used AI in their work so we can assess their approach to problems and their thinking processes?

Over-reliance

There is also the concern that, given the existence of generative AI solutions, we may start to use them too frequently and become over-reliant on them, weakening our ability to create or do tasks without their aid. For me, this is like the old calculator argument: we need to be able to do basic maths even though calculators are available everywhere. I can see the need for some basic fundamental learning, but with generative AI being so widely available, shouldn’t we seek to maximise the benefits it provides? So again, what are the questions we may need to ask?

  1. How do we build awareness of the risk of over-reliance?
  2. How do we ensure we maximise the benefit of AI solutions while retaining the benefits of our own human thinking, human emotion, etc?   It’s about seeking to find a balance.

Conclusion

In considering better questions to ask, I think the first question is always one of building awareness, so maybe the “is GenAI dangerous” webinar may be useful if it seeks to build relevant awareness of the risks. We can’t spot a problem if we are not aware of the potential for such a problem to exist. The challenge, though, is the questions we ask post-awareness, the questions which try to drive us forward: how we might deal with bias where we identify it, how we might ensure people are critical and questioning such that they spot errors, how we evidence student thinking and processes in using AI, and how we maximise both human and AI benefits.

In considering generative AI, I think there is some irony here, in that my view is that we need to ask better questions than “Is GenAI dangerous?”. In seeking to use generative AI and to realise its potential in schools and colleges, prompt engineering, which is basically asking the right questions, is key. So maybe, in seeking to assess the benefits and risks of GenAI, we need to start by asking better questions.

Thinking about thinking (with AI)

Artificial intelligence (AI) is definitely the big talking point in educational circles at the moment. You just need to look at the various conference programmes and you will almost always find at least one session touching on AI or generative AI. A lot of the discussion is focused on the possible benefits or the risks associated with AI, and less so on the practical applications and the need to experiment. It was in thinking about the practical side of things, looking at tools like ChatGPT, Diffit, Gemini and Bing Image Creator among others, that I got thinking about how AI might link to metacognition.

Learning about learning

The idea of learning about learning, about metacognition, has been around for quite some time. The thinking is that if we educate students about how they learn, and get them thinking about their learning preferences (eek, I almost said learning styles there!), then they can make informed decisions about their learning and hopefully be better learners. It seems to make sense. But how does this link to AI and generative AI?

Learning with a learning assistant

I think the key issue here is how we see AI in terms of the learning experience. Is it simply a tool to spark ideas? Is it a tool to review content? Is it a tool to surface information? I would suggest it is all of these things and more, and in the case of generative AI it can operate as an assistant to teachers or to students. It is definitely more than a bit of technology or simply a tool, as I suspect its use shapes our thinking and our processes, much as simple tools like the hammer shaped human thinking and processes in the past. We also need to consider that the process of working with generative AI (GenAI) is often iterative, taking the form of a dialogue between the user and the GenAI solution. The user provides an initial prompt, to which the GenAI responds. The user then reviews the response against what they were hoping for and, if they are anything like me, realises they haven’t been specific enough, so provides further directives to the AI, which in turn returns a new, hopefully better response; the dialogue continues until an output which is satisfactory to the user is reached. Some of this dialogue can be sped up through the use of prompt frameworks such as the PREPARE framework shared by Dan Fitzpatrick, but even then it is still likely to be a dialogue, with Dan also providing a framework for the review and iterative part of the process, his EDIT framework.
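That prompt-review-refine dialogue can be sketched as a simple loop. This is a sketch only: `model` is a stand-in stub rather than any real GenAI API, and the `refine` function, its parameters and the example are all my own illustrative assumptions.

```python
# A sketch of the iterative prompt -> review -> refine dialogue with a GenAI assistant.
# `model` is a stub standing in for a real GenAI call.

def model(prompt: str) -> str:
    """Stub: echo the prompt so we can see how refinement changes the request."""
    return f"Response to: {prompt}"

def refine(prompt: str, is_satisfactory, add_detail, max_rounds: int = 5) -> str:
    """Loop: prompt the model, review the response, add directives, repeat until satisfied."""
    response = model(prompt)
    for _ in range(max_rounds):
        if is_satisfactory(response):
            break
        prompt = add_detail(prompt)   # the user adds further directives
        response = model(prompt)      # the model returns a new, hopefully better response
    return response

# Example: keep adding detail until the output is pitched at the right audience.
final = refine(
    "Write a revision summary of photosynthesis",
    is_satisfactory=lambda r: "Year 9" in r,
    add_detail=lambda p: p + ", pitched at Year 9 students",
)
print(final)
```

The review step (`is_satisfactory`) and the refinement step (`add_detail`) are exactly the parts that frameworks like PREPARE and EDIT aim to structure for the human in the loop.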

Meta AI supported cognition?

If we are looking to prepare students to work with generative AI as their always-available assistant, I think we also need to start exploring with students how best to use it. Part of this is about looking at their learning and how their learning processes might be different with AI. I suppose it’s a bit like if all your learning was done with a partner, with another human being; the nature of the interaction, being very much a dialogue, makes this comparison feel all the more apt. You would need to consider their approach, their emotions, social interaction, etc. Now an AI doesn’t have emotions or the social side of things, or at least not yet or as we currently know these to exist, but it does have its own approach, its own biases, its own strengths and its own weaknesses. So if we are using AI in learning, or encouraging students to do so, I think we need to work with students to unpick the processes rather than simply focusing on the tools. If I am looking for ideas and to be creative, how best do I use AI? If I am looking to review and improve my work, how best do I use AI? If I want to use AI for research, how best do I do this? Is this where Meta AI supported cognition comes in?

Conclusion

In relation to technology use in education I have always said it isn’t about the technology but about what you are seeking to achieve. With AI it might be using GenAI to produce better coursework, or to give you a starting point or some new ideas. But if we think beyond the short-term goals, isn’t it about being able to better use AI to suit our needs as they arise? And if so, do we then need to spend time with students unpicking the how of their use of GenAI, understanding the processes, what works and what doesn’t, in order to get better at working with our newly found AI assistant?

Might teaching about Meta AI supported cognition become a thing?

A compliance approach to AI

I was browsing the internet looking at recent news and I spotted the below at the bottom of a particular article:

This got me thinking: is this the way of things, that we will start seeing notes at the bottom of articles, blog posts, etc, stating that “this was crafted with the help of generative AI tools”? It feels OK from a transparency point of view, in that the organisation in question is being transparent about how the article was created, but could this simply be to absolve them from any issues arising from bias or inaccuracies resulting from the use of an AI solution? Also, what about less scrupulous organisations; will they bother to let us know about the use of a generative AI (GenAI) solution, or will they simply post articles quickly and easily without any due care and attention?

Taking this and considering the implications for education, what if students took the same approach and simply put in their referencing that their coursework, thesis, dissertation or other work was “written with the help of generative AI”? Would this be acceptable? I feel this all falls into the trap of compliance; the author of an article, or the student ahead of submitting their work, simply puts the statement in place so they can tick a box and say they are compliant and transparent, when in fact they have told the reader or marker very little. How much “help” did the GenAI solution provide? Did it provide the basic outline to start with, or did it write the whole thing aside from a couple of minor sentence changes? The extent of the “help” matters greatly! Or does it?

I suppose the key question here is why we need to know whether GenAI was involved in the creation of a piece of content. Is it because it may contain bias and inaccuracies? I suspect not, as I would expect a journalist or editor to take responsibility and check any GenAI content before it is published. The same goes for a student; I would expect them to have thoroughly checked the work before handing it in. It is their responsibility, not that of GenAI. Is it due to an uncomfortable feeling about AI-created content? Consider reading two pieces of text summarising a sporting event: if you were told one was written by a human and the other by a GenAI solution, would you have a preference, and where does this preference, which I suspect would be towards the human-written work, come from? Is the reason that we need to know that the work is the work of the student or author, so we can direct our praise or complaints? But do we acknowledge the word processing software used, the web browser used for carrying out research, the laptop the content is typed on? Is AI a tool in the creation of the content, or is it more than just a tool? If the piece of work produced with the help of GenAI, be this help little or significant, is a good piece of work, does it matter? We used to focus on mental arithmetic, considering the use of a calculator to be cheating, yet now a calculator is just a tool we can use to help with maths; how is the use of GenAI any different?

I worry that the newspaper that placed this little rider at the bottom of their article is approaching the use of GenAI far too superficially, without considering the wider impact. There are many unanswered questions in relation to GenAI, with a small number of them presented above.

Or maybe I just need to accept that at least they have made an effort, and a start, towards becoming more transparent about the increasing use of GenAI in the creation of online content.

References:

Woman wins £2million house in competition but only receives £5,000 due to small print (msn.com)