AI and the digital divides

The digital divides are something I have been discussing for a while. They generally aren't anything new, although I always use the plural rather than the singular divide. This is due to my belief that it isn't a simple single divide but multiple inter-related divides, including access to hardware, high-speed internet, support, and more. And in the discussion of AI I have been worried about it adding another divide, but speaking recently at an Edexec live event got me thinking a bit more broadly.

AI closing divides

Maybe AI might close divides rather than open them. If we consider teaching staff, maybe AI in the hands of teachers will result in teachers generally being able to be more creative and engaging with lesson content. So rather than only some students benefitting from creative teachers, those who are artistic or musically creative and have the skills to turn this into lesson content, AI will put these capabilities into more teachers' hands. You can create something artistic without necessarily being artistic yourself, as long as you have the ideas and can outline them to generative AI. I think back to teaching during an OFSTED inspection many years ago, when I did a lesson on relative vs. absolute cell referencing in Excel using the game of battleships to get the concept across. I had the skills to make this engaging with video content and more, but I would suggest that at the time, some 20 years ago, I would have been in the minority. Fast forward to today and video and image content can easily be created using AI, putting the potential to create interesting, engaging content in the hands of more teachers than ever before.
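As an aside for readers unfamiliar with the Excel concept mentioned above: a relative reference such as A1 shifts when a formula is copied to another cell, while anchoring a column or row with $ (as in $A$1) holds it fixed. A minimal Python sketch of that copy behaviour (the shift_ref helper is purely illustrative, not part of any lesson material):

```python
import re

def shift_ref(ref: str, d_rows: int, d_cols: int) -> str:
    """Shift one A1-style reference the way Excel does when a formula is copied.

    A '$' anchors a column or row (absolute), so it does not move;
    un-anchored (relative) parts shift by the copy offset.
    """
    m = re.fullmatch(r"(\$?)([A-Z]+)(\$?)(\d+)", ref)
    col_abs, col, row_abs, row = m.groups()
    if not col_abs:  # relative column: shift it
        n = 0
        for ch in col:  # letters -> number (A=1, ..., Z=26, AA=27, ...)
            n = n * 26 + (ord(ch) - ord("A") + 1)
        n += d_cols
        col = ""
        while n > 0:  # number -> letters
            n, r = divmod(n - 1, 26)
            col = chr(ord("A") + r) + col
    if not row_abs:  # relative row: shift it
        row = str(int(row) + d_rows)
    return f"{col_abs}{col}{row_abs}{row}"
```

So a formula using A1 copied one row down would refer to A2, while $A$1 would stay put, which is exactly the distinction the battleships lesson was trying to teach.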

We also need to look at student work, such as coursework. Those students who struggle to get started, or who need support finessing and checking their work, suddenly have AI tools available to help. Those students taught in English where it is their second or maybe third language now have tools to translate content. Students with SEND also have AI tools which can help, and this help basically amounts to reducing or even removing the divides which previously existed. In one discussion after my session at the Edexec event we were discussing coursework and marking, with the suggestion that the gap between the best and the worst work will be narrowed through AI. This may lead to a need to refine marking boundaries, to refine expectations or even to refine the assessment methodologies as a whole, but whichever way you look at it, it is a reduction in some divides.

AI growing the divide

The likely big issue is one of socioeconomic divide and access to AI tools and the required devices, infrastructure and support. This will be uneven. But I wonder if it is for schools to solve socioeconomic issues which stretch way beyond schools, into access to health support, opportunities beyond school, positive family cultures and more. We do want to seek to address this, but I am not sure schools have it within their power.

What schools do have in their power is to address the divide which may grow between those students at schools engaging with AI and those at schools seeking to prohibit and ban AI use. If we simply accept that AI is here, has been for a while, and that we are all using it, and especially that students are using it, then maybe a ban doesn't make sense. Maybe we then find ourselves seeking to work with and teach students about AI and about its ethical and safe use.

Elephant in the room

And as to the “cheating” narrative, is a pen and paper cheating compared with having to explain a concept in person? I would suggest that for an introvert a debate or discussion on a concept would put them at a disadvantage, whereas providing pen and paper shapes thinking and the output. It encourages slower, linear thinking and a type of structure not quite as present in a discussion or debate. Taking this idea further, what about the students using a laptop or computer as part of their exam concessions; is this cheating? Isn't it just about reducing the divide between them and other students? So why is AI use cheating if it reduces divides? Maybe we need to start asking students about why and how they used AI, what the benefits were, etc. And definitely, let's not ask them to reference AI tools, as I don't see the point in this; they don't reference which search engine they used, yet this shaped the resources presented to them. AI is a tool, it is here, so let's get students using it, but teach them about its use and get them to use it safely and ethically. Yes, some students may try to use it to cheat, but let's treat them as the exception rather than the rule, and develop plans for how we deal with this. If we don't believe the work is the student's, that it represents what they have learned, then let's just ask them to present or to explain it.

Conclusion

AI is a tool, it is here, and it has the potential to narrow some divides, as well as the potential to widen others. I doubt there will be a perfect solution, so we are going to need to navigate our way through, considering benefit and risk and making the best reasonable decisions possible. If we can narrow the key divides, where schools have the ability to address them, while avoiding widening others, then this is likely the best we can achieve. Maybe this will require us to think carefully about the scope of education and schools, what they can reasonably be expected to impact, and to start there.

AI: Time to give up pen and paper?

I have been reading The Experience Machine by Andy Clark off and on for quite a few months; however the other day, on a trip down to London for an InfoSec event, I once again had an opportunity to do some reading on the train. It wasn't long before I was reading Clark's thoughts on the extended mind, and it got me thinking about the current discussion in relation to AI use in schools, and in particular by students for “cheating”.

Clark talks about how humans have sought to extend their capabilities through the use of tools, including both basic tools like the pencil as well as technological tools like devices and apps. He makes the point that rather than just being a tool which is used, the use of tools results in fundamental changes to our thinking processes, to our minds. We have developed as a species through our ability to use tools and to adjust our thinking processes around these tools, in order to do something more than we could prior to their use.

Taking this into the world of education, I have repeatedly talked about the JCQ guidance in relation to Non-Examined Assessments (NEAs), where it talks about making sure the work is the student's own work. Well, if we take Clark's comments, then the output produced by a student using the tools of pen and paper was shaped not just by the student but by the pen and paper they used. The pen and paper shaped thinking processes, ordering and more, influencing what the student produced. Maybe the sheet of paper will influence how much the student produces? Maybe the difficulty in erasing content written in pen will influence the student's decision making as to whether to change or remove sections they have written. So is it still the student's own work?

Consider a different tool, this time a laptop in the hands of a student with exam concessions which facilitate their typing rather than writing. Again, I would agree with Clark that the tool, rather than being just a tool, changes the thinking processes. With a laptop a student can more easily shift and reform their thoughts and ideas, moving paragraphs around and erasing or adding content as needed. This means processes related to the ordering of content which might be needed when using pen and paper are no longer as important. A student with a laptop might be more willing to take risks and explore their writing, knowing they can easily change, add or edit, whereas a student with pen and paper may be a little more risk averse, and therefore more creatively limited.

So now let me take a leap, and I suspect some will see it as a leap too far. What if the tool, rather than just pen and paper, is actually a generative AI solution? The interactions with the AI, assuming the student has been taught to use AI and has developed the appropriate skills, will shape the student's thinking processes. Maybe the broad training data of the AI will result in the student considering aspects of the topic they may not have otherwise explored. Maybe their language will change, making greater use of more academic language as a result of the academic content which makes up the AI's training data. Maybe their language will be a bit more flowery and expressive than they might write without an AI tool. As with the laptop, AI may make the student even more creative and less risk averse, knowing they can easily edit but also easily get feedback and make iterative improvements. Is this any less the student's own work?

I need to be clear here that I am not suggesting we just jump on the AI bandwagon without thinking. We definitely need to consider the risks and challenges and to seek to find a path towards the ethical, responsible and safe use of AI in schools. But we also need to acknowledge that we now use many tools which we would not give up. We would not give up the pen and pencil, the calculator, email and much more, and each of these is more than just a tool for use. As we have become accustomed to using them they have changed how we as humans think and operate. These tools have changed how our minds operate. AI will do the same, and we need to think about it, but if our reason for not using AI is that it will change us, that it is cheating, or that it produces things which are not our own work or truly not representative of the real us, then does this mean we need to give up all other tools, including pen and paper and the written word?

AI and Coursework

Coursework continues to be a significant part of qualifications, whether this be GCSEs, A-Levels, or vocational qualifications like the BTecs. In BTecs coursework is the main assessment methodology, and this hasn't changed that much since I had a hand in writing some BTec units and acting as a standards verifier. The world around these qualifications, though, has changed, particularly with the availability of generative AI, so how do schools manage the use of AI by students, the requirements of examining bodies and the ethical need to ensure fairness in marking and assessment?

Firstly, let's just accept students are using AI. This is a statement which I myself have made and that I have heard others make. The challenge is that we are often referring to ChatGPT, Gemini, Claude and the like, and to things post November 2022. The reality is that students were using AI prior to that. They were using spellcheckers, they were using grammar checkers and they were using Google for searches. Each of these involves AI. AI isn't new, so let's dispense with the concern regarding students using AI to cheat.

A student's “own” work

So, when looking at coursework or NEAs (non-examination assessments), JCQ states that the work “an individual candidate submits for assessment is their own”. At face value this makes sense, but what constitutes the student's “own” work? This blog piece, for example, has seen AI highlight spelling errors which I have since corrected, plus I have had alternative sentence structures and grammatical changes recommended, with AI behind these recommendations. With these changes is it still my own work? And in this case I am writing directly from my thoughts rather than to a structure; however, if I had asked AI for help on the structure of the blog piece before writing, would it still be mine? Having completed it, I posted this on my site, but I could have fed it into AI for feedback and suggested improvements; would the resultant blog post still be mine? And how is this use of GenAI different from using the spellchecker, grammar checker and editor built into Word? In all cases it results in a piece of work which wasn't what I originally typed, but is likely better.

Referencing: Why bother?

JCQ mentions that candidates must not “use the internet without acknowledgment or attribution”. Again, on face value this seems fair, but what about spellcheckers and grammar checkers? In all my years I have never seen anyone reference Microsoft's or Google's spelling and grammar checkers, yet I am pretty sure they have almost always been used. So why might Grammarly or ChatGPT, or even the Editor in MS Word, be different?

And if we accept that students are using spellcheckers, grammar checkers and almost certainly generative AI tools, surely they just end up noting that they are using them, which doesn't seem to help from an assessor's point of view. With a traditional reference to a book an assessor could at least go and look it up, but when a student uses generative AI, exactly how do I cross-reference this? And if I can't, what is the value in the reference, especially if almost every student basically states they made some use of AI, including generative AI?

Coursework: A proxy for learning

The challenge here is that we are using coursework as a proxy for testing a student's learning, their knowledge and understanding. It used to be that a piece of coursework was a good way to do this; then we got Google. We then needed to check for unusual language, etc., and use Google itself to try and prove where students had plagiarised. More recently we have generative AI, and things are more difficult still. We can no longer use Google to check the document for plagiarism, and don't get me started on AI detection solutions, as they simply don't work.

Maybe therefore we need to go back to basics and, if in doubt, speak to the student. If we are unsure of the proxy, of coursework, then we need to find another way to cross-check or to assess. This could be a viva, asking the students to explain what they meant within sections of their coursework, or asking them to provide examples, or we could ask them to present rather than write their coursework. In each case we get to assess the student's confidence, body language, fluency, etc., in relation to the topic being assessed, rather than just what they have written down. So maybe rather than seeking to block or detect AI use, we need to accept that we need to find new ways to assess.

A way forward?

A key starting point, in my view, with students is that of education. Students need to know what AI is and understand what is acceptable in terms of AI use. They need to understand the difference between using AI tools as an aid, such as spellcheckers, grammar checkers and even generative AI, versus using them to do the work. It might be fair to get help with my work in eliminating spelling errors. It might also be fair for AI to help me in better structuring my thoughts or my written words. But it isn't fair if the AI writes the piece of work for me and I just present it as my own, where there is no real effort on my part, no real sense of my views, in what is produced. I suppose it's a bit like discussing the work with a friend; if we discuss the work and this leads to a better result produced by me then this is good, but if my friend does the work for me then it isn't. But things are a little more nuanced than that, sadly, so how much help is acceptable?

The challenge with the above is that some students will use AI correctly and some will, for various reasons, use it incorrectly or even dishonestly. How will we tell? I suspect some of this is down to professional judgement and knowing our students, and some to audit tracking tools such as version history. That said, I think the easiest way for us to tell is to get to the root of learning and ask the students to explain what they have submitted, or at least part of it. If it's a good piece of work and they can explain it, then clearly they have learned the content and the work is representative. If it's a good piece of work and they can't explain it, then it isn't, and therefore they shouldn't get credit.

AI: A shiny new thing or more?

Artificial Intelligence (AI) has been heralded as a revolutionary tool with the potential to transform numerous industries, including education. However, maybe this has meant we have taken our eyes off some of the basics. Maybe before spending so much time on AI, before delving into the exciting possibilities AI presents, it is crucial to consider the foundational IT infrastructure which needs to first be present in schools. Without a robust and reliable technological base, the integration of AI in educational settings is likely to face significant hurdles, no matter how much discussion occurs at next week's BETT conference. That said, I myself need to admit to being very positive about the potential impact of AI for teachers, for students and for schools.

Assessing the Basic IT Infrastructure in Schools

The successful implementation of AI in education hinges on the availability of essential IT infrastructure. This includes high-speed internet access, up-to-date hardware and software, and adequate technical support. This is variable across schools, with some schools continuing to struggle with outdated equipment, insufficient bandwidth or limited technical IT support, all of which is likely to hamper the effectiveness of any AI usage. Some schools have 1:1 programmes which put digital technologies in students' hands in every lesson, which may mean students now have access to AI across the curriculum; however, in other schools technology access may be limited to one or two visits to an IT lab each week.

Investing in the necessary infrastructure is paramount. Schools must ensure that they have the capacity to support AI tools, including reliable hardware and infrastructure, plus sufficient internet bandwidth. Without these prerequisites, the benefits of AI cannot be fully realised.    And I suspect one of the main potential benefits lies in putting AI in the hands of the students themselves, which therefore requires student access to devices possibly on a 1:1 basis.

AI: The New Shiny Thing?

The introduction of AI in education gets me thinking about the hoo-ha which has accompanied some previous technology innovations. I remember the pronouncements as to how the interactive whiteboard, the virtual learning environment and the MOOC (Massive Open Online Course) would be transformative and lead to the reimagining of the modern education system. In each case there was some impact, but the espoused potential was never realised, and in fact the impact was mixed, especially when considering the change, resource and financial costs versus the resultant impact on students and their learning. In each of these cases the new technology was a shiny new thing for some educators to get excited about; however, the long-term impact was never there. Now, personally, I think AI is different, assuming schools first consider the basics such as access to infrastructure, bandwidth, support and training. But if these basics aren't considered, or aren't sufficiently actioned, then AI becomes yet another shiny new thing which promises so much but, through the lack of basic and fundamental infrastructure, delivers little.

And it needs to be noted, that even if the basics are in place, we still need to approach AI with a critical eye. Educators and policymakers must evaluate whether AI tools genuinely enhance learning outcomes and address specific educational challenges. It is not enough for AI to be novel and exciting; it must be demonstrably effective and aligned with pedagogical goals.   It must achieve that difficult to quantify concept of “impact”.   I feel it can do this, but only if we are careful in our choice of tools and how we seek to use these tools.

Critical Thinking

Another important foundational aspect of AI use in schools is that of critical thinking. Generative AI can quickly answer questions, provide an outline for coursework or offer feedback; however, how do we know that the content it returns is correct or suitable? This requires the essential skill of critical thinking. Now educators have long recognised the importance of critical thinking and have sought various methods to cultivate it in students; however, again, the implementation across schools is varied. Some schools include critical thinking in their values, some signpost it and have built opportunities across the curriculum, however for others it may be paid only lip service. To introduce students to the use of AI, or to use AI as educators, without the necessary critical eye on the output content will likely only lead to problems.

Conclusion

I am eager to contribute to, and be involved in exploring and experimenting with AI in schools.   There is such great potential in the use of AI and I myself have already seen some of this potential in practical terms.    That said, AI has become “the” topic in education circles as of late but maybe for some schools this detracts from their need to focus on the fabric of the school, the IT infrastructure, IT support and digital citizenship development of students.

AI requires us to be more critical as we seek to use AI tools and as we consume online content which may have been produced using AI.    As such, maybe we also need to be more critical of our focus on AI in education, considering what other aspects of schools and school life this focus may distract us from.

Running and AI

I was running the other day, trying to ensure I hit my 500km target for 2024, and I got to thinking about AI, seeing some interesting parallels in relation to my experiences as I lumbered around one of my usual 5km routes.

The key issue that sparked my thinking was people's reaction to me saying “good morning” as I made my way around my run. People seemed to be very surprised and uncomfortable with my polite announcement, and this despite some of the people I passed being ones I pass pretty regularly, and therefore where my greeting should have been familiar. Now I have to acknowledge I am a 6ft 2in Scotsman, and in my running would likely appear to others as hot, breathless and sweaty, so this may play into their reaction, in that they may be seeking to just stick their heads down and ignore me. But what if it is more than that?

It all got me wondering if we have become more insular as a society, and as I thought about it plenty of supporting evidence came to mind. My parents knew every neighbour, and my mother often got lost in Asda for hours speaking to various people she knew, much to my disdain as a child who did not enjoy being dragged around a supermarket. My visits, however, which are fewer due to online shopping, are in and out of the supermarket with minimal fuss, and as to my neighbours, I know a few to talk to but don't know many. Considering online shopping, for example, this works due to its convenience and ease, but in doing so it reduces the opportunities for in-person social interaction and for the accidental introduction or chat which an in-person visit to a supermarket might facilitate. And that's why some supermarkets have actually added in-person checkouts back, rather than self-service checkouts, to try and reintroduce the social side of the weekly or monthly shop. In fact, the common conversations of the past, with people stood in their gardens or outside their houses talking of the weather and their kids, have now been replaced by argumentative conversations regarding inconvenient parking, dog fouling and children kicking their ball against the fence. Have we become so obsessed with “stranger danger” that we now don't seek out or embrace new people as we once did? Is convenience king, such that we want things easy even if it means losing out on opportunities to interact with our fellow human beings? And have we moved to a “me”, a world focused on the individual and our rights, rather than the “we”, the world focused on collectiveness, community and our responsibilities?

So what does this have to do with AI?   

Some are worried that AI might see us becoming over-reliant on it and interacting with other people less often. These appear logical worries; however, as I indicated above, these things are already happening. We are already becoming focused on convenience: on-demand TV, next-day delivery, food delivery services and more. We are also less likely to engage with others in person, through not having to go out for shopping, etc., and through the increased amount of time we spend on our screens and devices. I think at the moment I still average around 2.5 hours per day on my phone, and that excludes the time on my work device, so where did those hours come from if we assume I am spending the same time sleeping as people did in the past? So maybe the issue isn't going to be AI causing these problems, but AI accelerating them. But if we take as fact our want for convenience and our want for ease, where in-person interactions maybe aren't easy, isn't it obvious that we would choose to make use of AI tools to make things easier, to help us with our interactions, or to present us with someone, or something, to interact with without all the complexities of a human-to-human interaction? You can't reset a human being; however, if your chatbot gets disagreeable you can simply reset it and start again.

Conclusion

I suspect AI, like other technologies before it, will simply magnify and accelerate issues which already exist in society. Convenience is great, but to have a meaningful existence and to flourish there needs to be a suitable level of challenge, some desirable difficulty. A focus on yourself is great and safe, but it leads to missing out on the warmth and colour of human interactions; they are often messy and complex, but they are a core part of what it means to be human.

Maybe we need to zoom out and forget about AI and take a long hard look at where we are going as a society and as a human race.    I often talk about balance, and maybe that’s what we need most, to look at balance.  

Or, if sticking with looking at AI, maybe it's to help us speed up some tasks to allow us to focus on other things which are more difficult, that provide the challenge rather than convenience, or which involve interacting with others, in which case the trade-off sounds beneficial.

I do hope these musings strike a chord, as I don't have any answers, only questions, and maybe that in itself is important, in finding the time to explore the bigger questions.

Uses of AI in education – For the student

I have previously posted that student uptake of AI was greater than that of staff in schools and colleges. Now my sense is that this is true. I myself gathered data between Jan 2023 and Apr 2023 in relation to uptake of generative AI tools like ChatGPT, and the rate of uptake with students was definitely greater than it was for staff. A more recent analysis of weekly use in schools, focusing on the big well-known AI tools, showed around 83% of the usage on the school network belonged to students. This got me thinking about students and generative AI and why use of generative AI might be a good thing.

So what might students get from generative AI?

One of the things a teacher offers to a class of students is their knowledge, as shared through lessons and the set learning activities. But the teacher is only available to students fleetingly during their lessons, or occasionally when they are free at other times in the busy school week. GenAI also has knowledge to offer. It, however, benefits from being available anytime and anywhere that students have access to a smartphone, tablet, laptop or computer. It also benefits from being much broader in its knowledge; if we were to read the same data as ChatGPT 3.5 has ingested, it would take us 2,500 years to do so. Surely having easy access to knowledge, and such a wealth of broad knowledge, more often, is a good thing?

And we also need to consider how generative AI delivers its knowledge. A library provides knowledge but requires significant time, effort and a bit of skill to traverse. Google provides knowledge, but we need to get the search terms right and then dig through the links. But with generative AI we can actually have a bit of a dialogue, discussing and finessing our requirements to get what we want. Maybe a little like chatting about an interesting topic with your teacher, but available anytime, anywhere, and without other commitments which they need to rush off to?

Another key aspect and feature of GenAI is the chatbot style with which we often interact with it. As human beings this is one of our key methods of communication, allowing us to understand or seek to understand. We have a dialogue. We make comments, are corrected, reply and adjust our approach, our thinking and our language. It's a two-way process, back and forth, and that's exactly how we interact with GenAI such as Gemini, ChatGPT or Copilot. It's very much like the dialogue a student might have with a teacher.

One aspect of this dialogue between teacher and student in schools is that of feedback. I remember the Hattie research which indicated that feedback was one of the more powerful levers which could be pulled to influence student outcomes. Now the issue with feedback is always the time taken for the teacher to create and provide it; however, with GenAI students could potentially get feedback as and when they need it, and at every stage of the creation of their work. It's like having at least part of a teacher on call to provide feedback 24/7.

This feedback also needn't be limited to feedback on coursework and other submissions. It can extend to a variety of topics including health, wellbeing, study tips and more, where GenAI can provide some advice and help as and when needed. AI can help students get started with work, it can advise regarding interpersonal issues, or it can help draft ideas or restructure ideas the student has already identified. And if you have issues with language, it's there to translate or summarise to help. It's an IA, or intelligent assistant, there to help and assist as and when needed.

It's also important to circle back to the broad knowledge set of GenAI, as not only is it valuable in its own right, but it also means that quite often prompts generate responses which go beyond what we expect, opening up other things to consider within the scope of the topic or task we are exploring. It helps stimulate our creativity and introduces further breadth, plus it allows us to access other mediums, allowing students to be artists, musicians, poets and more, through the support of generative AI. Why would any student still believe they aren't creative, when they have generative AI to help?

Another thing to consider, is where students might find it difficult to talk to an adult or even one of their peers.   It may be that a generative AI based chatbot might be able to help here, providing at least some initial advice and hopefully reassuring students but also pointing them towards appropriate help services and individuals.   I don’t think the AI would provide all the required support however it might just be that starting point that gets that shy or unsure student looking in the right direction for the support they need.   AI could just be that quiet friend with advice and support, positive words of reassurance and more, which a student needs.

It is also a tool to automate things, helping organise and coordinate our lives to make things easier; it can take notes in lectures, it can update your to-do list and more, and in a world where things are getting increasingly busy and hectic, maybe our young people need this help more than anyone else. It's all new to them, so they haven't built the coping strategies that those older than them may have developed, so something that helps take at least a little bit of the busyness out of life would be a good thing.

Conclusion

So is it any wonder that students are using generative AI? Students, and the young more generally, are experimental where adults, who have been conditioned by a world of systems, processes and rules, are less so. As such students are more likely to try new things, and with something as shiny as generative AI, which promises so much, it's no wonder they are experimenting with it. And all of the above, in my eyes anyway, is potentially positive and doesn't even touch on the possible misuse of GenAI for "cheating" which many are concerned about. Is it cheating anyway if it helps students achieve the best they can potentially achieve? Why would we want students to achieve less? Is it right to be happy with students being academically honest and achieving a B, when with the help of tools so commonly used in the world they could achieve an A? Why is it academically dishonest, or unfair, to try to achieve the best grade by using the tools available, in an educational game which ranks all students in terms of grades irrespective of their individual needs, abilities and disabilities? If AI produces better outcomes, or reduces stress and anxiety, or improves wellbeing, confidence, etc., then surely it's a good thing?

Maybe we need to worry less about the change being brought about by generative AI and worry more about why our education systems are so reluctant to allow for change?

Unleashing AI

It was around a year ago that I had the opportunity to speak at a Keynote event alongside Laura Knight, Dr Miles Berry and Rachel Evans, my fellow ISC Digital Advisory Group colleagues, so it was with some anticipation that I looked forward to involvement in another Keynote event, again including Laura and Rachel, but also including my friend Bukky Yusuf as well as Dina Foster and Dale Bassett. As with 2023, the event focused on AI in education, and included an opportunity for me to speak on AI literacy for students as well as on the potential for AI to help with efficiency and workload.

So, the opening speaker was Bukky, delivering an introduction looking at what AI actually is and at some of the terminology and language which surrounds AI. She highlighted that AI isn't new and is something which was being discussed all the way back in the 1950s, plus that, even before ChatGPT burst onto the scene in late 2022, AI was already something we were using in our daily lives in the likes of Google Maps. It was interesting as she discussed narrow AI, which is where I think we are now, but also Artificial General Intelligence (AGI), which some predict will be achieved by 2040, and Artificial Super Intelligence (ASI), the advancement and the scary situation that would follow AGI. If AI achieves AGI, the issue is that it can iterate and evolve far quicker than we can as humans, so once AGI is reached its self-advancement quickly moves beyond human capacity and understanding towards ASI. We potentially become what ants are to human beings. Now I hold to the hope here that we are pretty poor at predicting the future, that this is still a couple of decades away, and that we will hopefully put some guard rails and mitigation measures in place to ensure we are prepared for this between now and then.

Next up was Laura, who, as always, delivered a thought-provoking session which stimulated such broad thought in relation to AI and education. I loved her discussion of technology strategy metaphors and the dangers of a hot air balloon, fireworks or jet fighter approach, each with its advantages and its drawbacks. I sense I try to balance the hot air balloon and the jet fighter, seeking to have an overview while also trying to keep a sense of momentum and direction. I think I am past my days of seeking the shiny new thing, the fireworks, although I will note that I certainly did fall into this trap in my early teaching and EdTech days. Laura also touched on the need to be creative and yet also be an engineer, which I think is an interesting challenge as it requires two different types of thinking.

My first session of the day related to developing AI literacy within students, but in fact much of what I said was equally applicable to staff as well as students. I outlined some of the knowledge which I feel is important, including knowing of the benefits but also the risks and challenges as they relate to the use of AI. Next I moved onto the skills side of things, and how all the discussion of prompt engineering and the like paints the use of AI as being complex and technical, when in fact my recent use of CoPilot involved me simply talking to my laptop and CoPilot. The barrier to entry, to actually having a play with AI, is so low that anyone can do it.

In terms of skills I highlighted the need for students, and staff, to be able to think critically and to review and assess content presented to them to identify what is fake or real. Given the speed with which posts on social media become viral, and the potential for AI to be used to create or manipulate content, whether it is text, image, audio or video, the need for critical thinking has never been more key. I also pointed to the need to consider the ethics in relation to AI tools, using Star Wars and the post-death use of James Earl Jones' voice and Peter Cushing's likeness. Is this ethical? How do we seek consent or permission? Are there risks of misuse? Data literacy was my next focus, given that AI relies on data and therefore we need to get better at understanding what data is gathered, how it is used, how data might be inferred and more. One of the attendees also raised the issue of the environment, and on reflection, I should have included a slide on this, on the need to consider the environmental impact of the use of genAI.

After lunch the next session was another Laura session, this time looking at the safeguarding implications of AI. This session went into some of the murkier implications of AI, including the use of AI imagery and maybe even chatbots to support criminals engaged in sextortion. She talked about the shame that people feel when they get caught up in technology-enabled safeguarding incidents, such as sextortion, and I think the emotional side of things is very important to remember and to consider. She also raised the issue of some students possibly withdrawing and relying on AI as their friend and confidant, and the implications of this from a privacy point of view as well as from a safeguarding risk point of view, where an AI could guide a child towards inappropriate or even harmful behaviour. The challenge of privacy was also covered, acknowledging that we humans are pretty poor at this, often agreeing to app terms and conditions without any consideration of what we have actually agreed to, a challenge that is becoming more and more difficult in my view as we share more information with more apps and services.

My final session of the day focussed on AI and efficiency, and also on the possibility it can help to address the current workload challenges in education. Now Bukky bigged this session up as the "unicorn" session, so my first step before starting was to use genAI to get a nice photo of a dog with a unicorn horn on its head; I simply don't think anyone has the answers here, or the unicorn, it is just a case of prompting discussion and sharing ideas. My session was very much about getting attendees to collaborate and share their own ideas and experiences. I have long said the smartest person in the room is the room, and this session focussed on exactly that, getting the audience themselves to share their thoughts and ideas before I then went on to share some of mine. One of the highlights of the event as a whole was an attendee picking up on my comment regarding the need to build networks and communities, suggesting that the attendees were themselves now a network and therefore it would be worth seeking to find a way to continue discussion beyond the event; I very much hope this is something we can get off the ground, as I truly believe our best chance to realise the potential of AI, or maybe just to survive the fast-paced technical change, is to work together and to actively share and discuss issues and ideas.

The event then closed with a panel session involving myself, Laura, Rachel and Dale. And before you wonder whether I suffered my usual travel woes, let's just say I stupidly decided to climb the stairs at Russell Square tube station, clearly missing the warning sign. Approximately 170 spiral staircase steps later, I almost didn't make the conference the following day!

It was a long but very useful day with lots of things to go away and think on. I also made use of Otter to record my own presentation, with a hope to use this to improve my preparation and my delivery for future events. I am also hopeful that the attendees will indeed engage with sharing and discussion beyond the event itself, as this is the most likely method of ensuring the discussions and sessions shared bring about the positive change myself and the other presenters would love to see.

Who poisoned the AI?

One of the challenges in relation to Artificial Intelligence solutions is the cyber risk, such as that presented through AI poisoning. When I seek to explain poisoning, the example I often use is of an artist who sought to keep traffic away from a particular street. To do this he simply purchased a number of cheap smartphones, put them in a little trolley and then walked this trolley slowly down the chosen street. To Google Maps, the fact that a number of smartphones were progressing very slowly down a street was interpreted as a traffic jam or accident, and therefore Google Maps sought to redirect people away from the street. Basically, the individual had poisoned the AI data model to bring about a generally unwanted outcome, at least from the point of view of Google Maps.

Poisoning might take a number of forms: through the input data received by the AI, such as the position information from the phones, through the prompts made to a generative AI solution, or through the training data provided, including where this might include the prompts. The key is that the AI solution is being manipulated towards an output that wouldn't normally be anticipated or wanted. And there are also concerns from a cyber security point of view in relation to poisoning being used to get AI solutions to disclose data.
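To make the training-data form of poisoning concrete, here is a minimal sketch in Python. It uses a toy nearest-centroid classifier and invented data points purely for illustration; it is not how any production AI system works, but it shows the principle that flipping a few training labels can change what the model outputs for an unrelated input.

```python
# Toy illustration of training-data poisoning via label flipping.
# All data, labels and numbers here are invented for the example.

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def classify(point, centroids):
    # Return the label whose centroid is closest to the point.
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(centroids, key=lambda label: dist2(point, centroids[label]))

# Clean training data: two well-separated classes.
clean = {
    "A": [(0, 0), (1, 0), (0, 1), (1, 1)],
    "B": [(8, 8), (9, 8), (8, 9), (9, 9)],
}
clean_centroids = {label: centroid(pts) for label, pts in clean.items()}

# Poisoned data: an attacker relabels a few class-B points as class A,
# dragging A's centroid towards B's region of the space.
poisoned = {
    "A": clean["A"] + [(8, 8), (9, 9), (8, 9)],
    "B": [(9, 8)],
}
poisoned_centroids = {label: centroid(pts) for label, pts in poisoned.items()}

probe = (6, 6)  # a point that should clearly be class B
print(classify(probe, clean_centroids))     # "B" on clean data
print(classify(probe, poisoned_centroids))  # flips to "A" after poisoning
```

The model itself is unchanged; only the data fed to it was manipulated, which is exactly what makes this class of attack hard to spot.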

That said, I previously read an article in relation to AI poisoning, but where the poisoning was being presented as a solution to a problem rather than a risk. In this case the problem is ownership and copyright of image content, where an AI vendor might take such image content, scraped from the internet often without permission or payment to the creator, and use it to train the AI. The concern from copyright owners and artists is that they are creating works of art, images, etc., but as generative AI solutions are fed this data, the AI solution either copies elements of their works, or could even be asked to create new works in their style. And given the creator is receiving no remuneration for the use of their works in training an AI, plus the fact that the AI might lead them to receive less business, they are concerned. Enter Nightshade, a solution for poisoning an image. Basically, what the solution does is change individual pixels within an image, in a way that isn't perceptible to the human eye but which will influence an AI solution. The poisoned images therefore negatively impact the functionality of AI solutions which ingest them into their training data, while still being totally acceptable from a human's point of view.
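The "imperceptible change" part is worth a small illustration. This sketch is not Nightshade's actual algorithm (which chooses its perturbations adversarially to mislead training); it simply shows how every pixel of an image can be altered while the numerical change stays far below what the eye could notice. The `perturb` function and the tiny 3x3 "image" are invented for the example.

```python
# Illustration only: nudge every pixel of a greyscale image by at most
# one intensity level (on a 0-255 scale) -- invisible to a human viewer,
# yet every pixel value in the file has changed.

import random

def perturb(image, max_delta=1, seed=42):
    """Return a copy of the image with each pixel shifted by +/- max_delta,
    clipped to the valid 0-255 range."""
    rng = random.Random(seed)
    out = []
    for row in image:
        out.append([min(255, max(0, p + rng.choice((-max_delta, max_delta))))
                    for p in row])
    return out

image = [[120, 121, 119],
         [122, 120, 118],
         [119, 121, 120]]
poisoned = perturb(image)

# Every pixel differs from the original, but never by more than 1 level.
diffs = [abs(a - b) for ra, rb in zip(image, poisoned) for a, b in zip(ra, rb)]
print(max(diffs))  # 1
```

A real poisoning tool applies the same idea at scale, choosing which pixels to shift, and by how much, so that the change steers a model's training rather than being random noise.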

The above highlights technology and AI as a tool; poisoning can be used for malicious purposes, but in this case can be used positively to protect the copyright of image creators. The challenge, however, is that this technology for poisoning images will likely lead to AI solutions either capable of identifying and discarding poisoned images or tolerant of poisoned images. It will end up as a cat and mouse game of AI solution vendors vs. copyright holders. This is much like the cat and mouse game between tech vendors seeking to create generative AI solutions which produce near human-like content and the detection tools seeking to detect where AI tools have been used. Another challenge might be the malicious use of poisoned images to disrupt AI solutions, such as the feeding of poisoned images into a facial recognition or image recognition solution in order to disrupt the operation of the system.

I also think it is worth stepping back and looking at us as humans, and how poisoning might work on human intelligence rather than artificial intelligence. One look at social media, at propaganda and at the Cambridge Analytica scandal shows us that the poisoning of intelligences, such as human intelligence, isn't something new; I would suggest fake news is a type of intelligence poisoning, albeit possibly at a societal level. Poisoning has been around for a while and I am not sure we have a solution. So maybe rather than looking at how we deal with, or positively use, the poisoning of artificial intelligence, we need to go broader and consider the poisoning of intelligence in general, including human and artificial intelligence?

References

This new data poisoning tool lets artists fight back against generative AI, Melissa Heikkilä (2023), Technology Review, Downloaded 07/11/2023

Berlin artist uses 99 phones to trick Google into traffic jam alert, Alex Hern (2020), The Guardian, Downloaded 07/11/2023

Bias (AI and human)

I was thinking a little bit further about bias in artificial intelligence systems. It is so easy to demonstrate, and I, myself, have shown it in image generation tools with a simple request for a doctor or a nurse in a medical setting; the various biases in the resultant images are very clear indeed. But I think it is also important that we acknowledge the inherent bias in us as humans. We get by despite this inherent bias yet seem to be unwilling to accept similar bias in AI solutions, so why is this the case? What is the difference?

I thought a good place to start was to ask ChatGPT for its thoughts regarding the differences in bias between AI systems and humans.    It created a list of sources of bias for AI systems and a corresponding list for humans as below:

AI and Bias

  1. Training Data: AI systems learn from data, and if the training data is biased, the model can inherit and perpetuate those biases.
  2. Algorithmic design: Bias can also be introduced through the design of the algorithms themselves, including the choice of features, learning objectives, and optimization processes.
  3. Unintentional amplification: AI systems may unintentionally amplify and propagate existing biases present in the training data, leading to unfair or discriminatory outcomes.
  4. Lack of consciousness: AI systems lack consciousness and awareness. They don’t possess intentions or beliefs, so any bias they exhibit is a result of patterns learned from data without an understanding of the social context.
  5. Automated decision making: Bias in AI can lead to automated decision-making that affects individuals in various domains, such as hiring, lending, and criminal justice.

Reading the above, items 1 and 3 feel like repetition, while item 4 begs the question of what consciousness is and how we might be able to evidence it in a future Artificial General Intelligence (AGI) solution. This is a question I will leave for now, or will leave for brighter people than me. Item 5 is less about the source of bias and more about the impact it can have on humans.

Human Bias

In terms of humans ChatGPT listed the below:

  1. Human bias is often rooted in subjective beliefs, attitudes, and cultural influences. Humans can be consciously biased based on personal experiences, opinions, or societal conditioning.
  2. Humans can exhibit bias with intent, driven by personal beliefs, prejudices, or discriminatory motivations. This may not be the case for AI, which lacks intentionality.
  3. Unlike AI systems that learn from data in a structured manner, humans can adapt, unlearn biases, and consciously strive to be more objective through education and self-awareness.
  4. Humans possess a deep understanding of social, cultural, and contextual nuances, allowing for more sophisticated decision-making that takes into account the complexity of real-world situations. AI systems, on the other hand, may struggle with nuanced or context-dependent scenarios.

Considering item 1 and subjective beliefs, attitudes and cultural influences, aren't these the training data we as humans are provided with, which shapes our neural pathways and our actions? This is your upbringing, parenting, friends, local and national culture and values, etc. We are exposed to this experiential training data throughout our lives, whereas an AI can be provided similar training data in a far shorter period of time. Item 2 then comes from item 1 in the same way as an AI's bias might come from its training data or algorithmic design. And I note the design of human beings, as influenced and evolved over time, has resulted in some design features which are sub-optimal in the modern world. Take for example the fight or flight response kicking in during a heated discussion; in the past all the relevant hormones released by fight or flight would be used up in the resultant fight or in running away from the teeth and claws of a predator, whereas in the boardroom these hormones have nowhere to go. Does the boardroom really merit an increase in heartbeat and respiration? And that's before I dip into the availability bias, the halo effect and the number of heuristic shortcuts we subconsciously use.

Items 3 and 4, in my opinion, provide an overly positive view of us humans and our ability to unlearn bias and show a "deep" understanding. Yes, this may be possible, however it isn't easy, as humans may be unaware of their bias, or bias might play into their perception of their understanding; take for example confirmation bias, where we might simply pick the facts or information which align with our view, discarding or undervaluing other counter facts or information.

It was at this point I considered AI and humans, and found myself noting the plural, humans; maybe this is the key. Humans work together where an AI solution is a single entity, and maybe this is where bias diverges in its impact between humans and AI. If we can gather a diverse group of human individuals, this diversity can actively work towards identifying and removing bias. An AI solution, as a single entity, doesn't benefit from access to others; it simply takes the prompt and kicks out a response.

But maybe we could look to multiple AI solutions working together? Maybe it is a number of AIs working together, working alongside humans? I have frequently talked about IA, and AI as an intelligent assistant, and maybe this is where the answer lies: an AI, with its bias, and a human, with their bias, working together and hopefully cancelling out each other's bias.

Conclusion

I think it's important that anyone seeking to use generative AI is aware of the inherent bias that may exist within such tools. That said, I think the narrative on AI bias is rather shallow and limited, focusing on pointing out the shortcomings of AI in relation to bias without considering the bias which exists in ourselves as humans. I think we need to get more nuanced in our discussions here and look towards how we might address bias in general, whether it be AI or human related.

Thoughts on a new academic year

As a new academic year begins, this being my 26th academic year (has it been that long??) I just thought I would share some thoughts and maybe predictions.

Artificial intelligence

I don't see the discussion of artificial intelligence in education going away, as there is such potential: the use of AI to support students, to help teachers and rebalance workload, and much more. It also makes for a good talking point for conferences or for developments. I have two problems though. One is that I think there will be a lot of talk, especially from vendors, without reliable evidence supporting the impact and benefit of their tools. As such I feel there will be a lot of misdirection of effort and resources when looking across schools in general. Two is that artificial intelligence is all well and good, but it needs the relevant access to devices, to infrastructure, to support and to trained and confident teachers. These digital divides need to be addressed before schools in general can seek to use AI and leverage its potential benefits.

Online Exams

The issue of online or digital exams feels partly related to the sudden growth in AI and the resulting potential for AI marking of student work, and therefore for AI-based marking of student exams. Again, I see this as another talking point for the year ahead, but again I am not sure we will see much real progress, possibly seeing less progress in this area than in AI. The issue is that exam boards are taking things very tentatively, so their first step will be "paper under glass" style exams which simply take the paper version of an exam and digitise it, rather than seeking to modify the exam or examination process to benefit from the new digital medium. For me the key benefit of online exams will be realised when they are adaptive in nature, so can be taken anywhere and at any time. This then means that schools wouldn't need access to hundreds of computers for their students to sit an English GCSE exam, as the students could sit the exam in batches over the day or over a number of days. This would help with the digital divides issue as it impacts online exams, as schools wouldn't need as many devices, but they would still need the infrastructure and the support to make digital exams work.

Mobile Phones and Social Media

Oh yes, and then there's this old chestnut! I suspect the phones and social media discussion will trundle on. Students are being given phones without any parental controls, and then schools are having to deal with this. And some schools are taking the prohibition approach, which is unlikely to succeed and may just deplete patience and resources. I continue to believe we should be seeking to manage student mobile phones in school, so we might restrict use in some areas and at some times but embrace and use them at other times. We need to spend time with students talking about social media and its risks and benefits, helping to shape the digital citizens which the world needs.

I also note here that social media is being blamed for the lack of focus and ease of distraction in students, and through association it is the fault of smartphones. The world isn't that simple, and having recently finished reading Stolen Focus by Johann Hari I am now more aware that other factors such as increasing levels of societal pressure to succeed, increased consumption of processed foods and our on-demand culture are all having an impact on our children. Yes, social media, and by extension smartphones, are playing their part, but they are not the root and sole cause of the issues in relation to attention which we are seeing in schools and more broadly with children.

Fake news and deepfakes

This links to AI and also to mobile phones and social media, in the increasing ease with which fake news content can be convincingly developed, including the use of images and video, and then shared online. As fake news becomes an increasing issue, on which I suspect the US elections will draw some focus, there will be an increasing need for schools to consider how they discuss and address this challenge with their students. More locally, within education and within schools, we will start to see increasing use of AI tools to create "deepfakes" by students and involving other fellow students, either "just having a laugh" or for the purposes of bullying. This will be very challenging, as the sharing of such content will quickly stretch beyond the perimeter of schools, spread through social media, messaging apps and the like, but where the victim and likely the perpetrators will be within the school.

Wellbeing

This one came to me last, but if I was re-writing this I would likely put it first. We talk about wellbeing a lot, but every year we look to see if the exam grades have gone up and are faced with increasing compliance requirements around safeguarding, attendance and many other areas. Improvements in results, or even the efforts to improve results, mean more work, which means more effort and more stress. More compliance hoops equally mean more effort and more work. So how can we address wellbeing if educators are constantly being asked to do more than they did previously? And exam results and compliance are just two possible examples of the "do more" culture which pervades society, possibly driven by the need for economic and other growth as something to aim for. Although growth and improvement are laudable things to seek, they cannot be continuous over time, not without deploying additional resources, both money and people. As such there needs to be a logical conclusion to the "do more" culture, and my preference would be for us to decide and manage this rather than for it to happen to us. AI can help with workload, for example giving more time for wellbeing, however my concern here is that this simply frees up some time to do more stuff, albeit stuff which might have an impact, but not positively on wellbeing.

Conclusion

The above are just five areas I see being cornerstones of educational discussion in the academic year ahead.   I suspect other things will arise such as equity of opportunity, although I note this links to pretty much all of the above.   There will also be other themes which arise but it will be interesting to see how these particular five themes develop during the course of 2024/25.

And so with that let me wish everyone a successful academic year.    Let the fun begin!