Should AI be held to higher standards than humans?

Darren White posted an interesting question on twitter the other day in relation to the standards we hold AI to.    Should AI be held to higher standards than humans? This is something I have been given some thought to due to both having an interest in human heuristics and bias, plus an interest in artificial intelligence. 

Discussions on AI

There is already a lot of discussion regarding issues and challenges related to AI including discussion of bias and inaccuracy or “hallucinations”.    I myself have been able to recreate these two issues reasonably easily within generative AI solutions, firstly asking an image generation solution to create a picture of a nurse in a hospital setting and then a doctor in a hospital setting.   In this case the images were all of white individuals with the nurses all female and the doctors all male.    The evidence of bias was clear to see.    And in a separate experiment with a tool to help with report writing, the developer forgot to provide any data in relation to the fictitious student for which a report was being created but the tool simply made the report content up.    These issues are therefore clear to see and it is easy to jump to a standpoint where bias needs to be removed and inaccuracies or hallucinations stopped.

A human view

One of the issues here is that I believe we need to take a cold hard look at ourselves, at human beings and how we might respond to prompts if such prompts were direct at us rather than an AI.   Would we fair so much better than and AI?    I have a lovely poster in my office in relation to the cognitive biases which impact on human decision making and there has been plenty written about this and heuristics, with Daniel Kahneman’s book, Thinking, fast and slow, being one of my favourites.   A key issue here is that we are often not aware of the internal or “fast” bias which impacts on us and therefore may assess our biased decisions as being absent of bias.     In terms of hallucinations, again we humans suffer the same issue often stating facts based on memory, and holding to these facts even when presented with contradictory evidence.   The availability and confirmation biases may be at play here.    Another challenge when comparing with AI is that our biases and hallucinations are not clear for us to see, albeit they may be clear to others, yet with AI bias and hallucinations, at least in the form of those raised as examples, it is clear for all to see.  

End point?

I would suggest that in both AI and in human intelligence our ideal would be to remove bias and inaccuracy.   I would also suggest although this is a laudable aim it is also impossible.    As such, rather than focussing on the end we need to focus on the journey and how we might reduce the bias and reduce the inaccuracies both in humans and in AI.    It may be that in reducing bias in humans this may benefit AI, however it may also be possible that things work the other way and discoveries to help reduce bias in AI may help with bias in humans.   I note that a lot of human thinking, especially our fast thinking, can be reduced to heuristics or “generalisations” or “rules of thumb”;  How is this much different to the quick processing of an generative AI solution?  Does generative AIs probabilistic nature not tend towards quick creation of generalisations but based on huge data sets?

The future

So far, I have avoided getting pulled into the future and artificial general intelligence and I mention it for completeness only.   This will likely arrive in the future and most who claim to be AI experts seem to agree with this however there is much disagreement as to the when.   As such our immediate challenge is that of the generative AI we have now and its advancement over the creation of an AI solution capable of more generally out thinking us across different domains;  That said I would suggest that in a number of ways generative AI can already out perform us across many domains.

Conclusion

So back to the question in hand and whether we should seek to hold AI up to higher standards?    We should seek to avoid outcomes which have a negative impact on humankind so bias and inaccuracy and also the other challenges in relation to intelligence, such as equality of access to education, etc, are all things we should seek to reduce.    This I think is a common aim and can be applied to humans and AI.   In terms of the accepted standard, I think it is currently difficult to hold AI up to a higher standard than we hold humanity given the solutions are created by humans, trained on human supplied data and used by humans.   It may be that in AI solutions you get a glimpse of how entrenched some of our human biases actually are.   That said I also think it might be easier to remove bias and inaccuracies in an AI solution as compared to doing the same with a human;  I doubt the AI will seek to hold onto its position or to counter argue a view point, at least not yet.

AI and assessment (Part 1)

I recently spoke at an AI event for secondary schools in which one of the topics I spoke on related to AI and its impact on Assessment.   As such I thought I would share some of my thoughts, with this being the first of two blogs on the first of the sessions I delivered..

Exams

Exams, in the form of terminal GCSE and A-Level exams still form a fairly large part of our focus in schools.  We might talk about curriculum content and learning but at the end of the day, for students in Years 10,11, lower 6 and upper 6 the key thing is preparing them for their terminal exams as the results from these exams will determine the options available to students in the next stage of their educational journey.   The issue though is that these terminal exams have changed little.   I provided a photo of an exam being taken by students in 1940 and a similar exam in recent terms and there is little difference, other than one photo being black and white and the other being colour, between the photos.   The intervening period has seen the invention of DNA sequencing, the mobile phone, the internet and social media, and more recently the public access to generative AI but in terms of education and terminal exams little has changed.

One of the big challenges in terms of exams is scalability.  Any new solution needs to be scalable to exams taken in schools across the world.  Paper and pencil exams, sat by students across the world at the same time accommodates for this.  If we found life on Mars and wanted them to do a GCSE, we would simply need to translate the papers into Martian, stick the exams along with paper and pencils on a rocket and fire them to Mars.   But just as it is the way we have done things and the most easily scalable solution doesn’t make paper and pencil exams the best solutions.   But what is the alternative?

I think we need to acknowledge that a technology solution has to be introduced at some point and the key point is the scalability based on schools with differing resources.   As such we need a solution which can be delivered in schools with only 1 or 2 IT labs, rather than enough PCs to accommodate 200 students being examined at once as is the case with paper based exams.  So we need a solution which allows for students to sit the exams in groups, but without compromising the academic integrity of the exams where student share the questions they were presented with.    The solution, in my view is that of adaptive testing as used for ALIS and MIDYIS testing by the CEM.   Here students complete the test online but are presented different questions which adapt to students performance as they progress.   This means the testing experience is adapted to the student, rather than being a one size fits all as with paper exams.    This helps with keeping students motivated and within what CEM describe as the “learning zone”.   It also means as students receive different questions they can sit the exam at different times which solves the logistical issue of access to school devices.   Taken a step further it might allow for students to complete their exams when they are ready rather than on a date and time set for all students irrespective of their readiness.

AI also raises the question of our current limited pathways though education, with students doing GCSES and then A-Levels, BTecs or T-Levels and then onto university.    I believe there are 60 GCSE options available however most schools will offer only a fraction of this.    So what’s the alternative?    Well CalTech may provide a possible solution;  They require students to achieve calculus as an entry requirement yet lots of US schools don’t offer calculus possibly due to lack of staff or other reasons.   CalTechs solution to this has been to allow students to evidence their mastery of calculus through completion of an online Khan Academy programme.   What if we were more accepting of the online platforms as evidence of learning and subject mastery?   There is also the question of the size of the courses;   GCSEs and A-Levels and BTec quals are all 2 years long but why couldn’t we recognise smaller qualifications and thereby support more flexibility and personalisation in learning programmes?   In working life we might complete a short online course to develop a skill or piece of knowledge on a “just-in-time” basis so why couldn’t this work for schools and formal education?  The Open University already does this through micro credentials so there is evidence as to how it might work.   I suspect the main challenges here are logistical in terms of managing a larger number of courses from an exam board level, plus agreeing the equality between courses;   Is introductory calculus the same as digital number systems for example?

Coursework

Coursework is also a staple part of the current education system and summative assessment.    Ever since Generative AI made its bit entrance in terms of public accessibility we have worried about the cheating of students in relation to homework and coursework.    I suspect the challenge runs deeper as a key part of coursework is its originality or the fact that it is the students own work but what does that look like in a world of generative AI.    If a student has special educational needs and struggles to get started so uses ChatGPT to help start, but then adjusts and modifies the work over a period of time based on their own learning and views, is this the students own work?   And what about the student who does the work independently but then before submitting asks ChatGPT for feedback and advice, before adjusting the work and submitting;   Again, is this the students own work?  

There is a significant challenge in relation to originality of work and independent of AI this challenge has been growing.   As the speed of new content generation, in the form of blogs, YouTube videos, TikTok, etc, has increased year on year, plus as world populations continue to increase it become all the more difficult to be individual.  Consider being original in a room of 2 people compared with a room of 1000 people;    The more people and the more content, the more difficult it is to create something original.   So what does it really mean for a piece of work to be truly original or a students own work?

The challenge of originally and students own work relates to our choice of coursework as a proxy for learning;   It isnt necessarily the best method of measuring learning but it is convenient and scalable allowing for easy standardisation and moderation to ensure equality across schools all over the world.   It is easy to look at ten pieces of work and ensure they have been marked fairly and in a similar fashion;  Having been a moderator myself this was part of my job visited schools and carrying out moderation of coursework in relation to IT qualifications.   If however generative AI means that submitted content is no longer suitable to show student learning, maybe we need to look at the process students go through in creating their coursework.    This however has its own challenges in terms of how we would record our assessment of process and also how we would standardise or moderate this across schools.

Questions

I don’t have a solutions to the concerns or challenges I have outlined, however the purpose of my session was to stimulate some though and to pose some questions to consider.    The key questions I posed during the first part of my session were:

  1. Do we need an annual series of terminal exams?
  2. Does there need to be [such] a limited number of routes through formal education?
  3. Why are courses 2+ years long?
  4. Should we assess the process rather than product [in relation to coursework]?
  5. How can we assess the process in an internationally scalable form?

These are all pretty broad questions however as we start to explore the impact of AI in education I think we need to look broadly to the future.    In terms of technology the future has a tendency to come upon us quickly due to quick technology advancement and change, while education tends to be slow to adapt and change.    The sooner we therefore seek to answer the broad questions or at least think about them the better.

Autumn term blues

We are now in the 2nd half of the autumn term and I cant believe where the time has gone.    We had the usual build up ahead of the start of the new academic year, followed by the unsurprisingly manic start of term.   The start of term in schools and colleges is normally manic as new students and staff join and as everyone tries to quickly get back up to speed following the summer break, trying to establish the positive habits which should underpin the year ahead.    For me, the first half of this years autumn term was made all the busier due a number of events which I had agreed to attend or contribute to, such as a couple of industry cyber security events and speaking at events in Leeds, London and Amsterdam.   Each of these events were really useful however the travel and preparation work related to the events add to the stress and pressure.   Its worthwhile, and I certainly take much from each of the events, the ANME/Elementary Technology AI and EduTech Europe events in particular, but it isnt half tiring.

It was therefore no surprise that I reached the half term feeling very drained and run down but having quite a bit to catch up on before the planned period of rest towards the end of half term.   And this is where sod-law kicks in.    Just as I get the time to regroup and to rest, illness shows its head.   Why is it that just when you get time to enjoy yourself and relax, that you end up ill?    Now I suspect part of the answer is the fact that, when busy, adrenaline carries you through and keeps you going however as soon as you see the light at the end of the tunnel, as soon as you take your foot off the gas and your body and mind relax a little, the bugs, the viruses and the general malaise set in.   And so it was that I spent a fair amount of the half term period working on, as us IT people need to do in school holiday periods, while feeling less than 100%.   When I did get a few days off to relax the time was largely spent in bed or crashed out in front of the TV with little energy and a persistent cough.

Before I knew it, the 2nd half of the term had begun and the opportunity to spend some proper time on wellbeing and mental health has passed me by.    So with the 2nd half of the term now fully back in the swing of things, it is once again time to put the foot to the floor and proceed towards Christmas (bah humbug 😉) .    At this point I still don’t quite feel 100% but I am definitely better than I was during half term and for now I hope I can get to Christmas, and pass into the festive holiday period without any further illness.   But only time will tell.

The challenge we all have is in accepting that life and work is not linear;  There will be periods where things are manic and busy, and where mental health and wellbeing will take 2nd or maybe 3rd place, however equally we need to seek a balance which means there will need to be times when mental health and wellbeing come first, even when this is at the expense of other things.    For me, the manic autumn term just means I need to ensure I put time aside for myself, either at Christmas or at some point in the spring of summer terms, putting myself first over other pressures.  

Onwards and upwards as they say, and also let me share an important message with all my colleagues in schools and colleges;   make sure to look after yourself as unless you are well, physically, mentally, cognitively, etc, you won’t be able to effectively help, look after, teach or otherwise support others.    Take care and good luck for what remains of the autumn term!

Cyber Awareness Month: Cyber threats

October is cyber awareness month and an important opportunity to discuss and highlight cyber security and cyber threats.   Now cyber security and particularly the development of a culture of positive cyber security practices is an ongoing  requirement, however cyber awareness month provides a valuable chance to highlight cyber security and ensure it is the subject of discussion.    Due to this I would briefly like to share some of my thoughts in relation to the main cyber threats as they current exist for schools and colleges.

Phishing, vishing  and other “ishing” attacks.

For me, phishing and similar attacks based on SMS, messaging services, social media, phone calls and even malicious QR codes continue to be one of the most common attacks aimed either at compromising a user account or at compromising a target machine through malware.   One of the big issues here is that we ae living in an increasingly busy world dealing with ever increasing numbers of emails, messages, etc.   And in this busyness it is “human to err”, to click a malicious link, to reply to a malicious email or provide user credentials to a convincing looking, but fake, login page.    Continued user awareness training can help in this area, making users more aware of the signs to look for in malicious messaging, but it can only go so far especially as people are becoming increasingly busy.   For me, the key is for users just to have a fraction more time to review messages before acting, giving their conscious brain just that bit more time to engage and identify the unusual features of a malicious email, message or call.    I am not talking about huge amounts of time, only fractions of a second.   That said this time needs to come from somewhere in a time bounded world so we are going to need to make some compromises to fine this time as otherwise we are only likely to see data breaches resulting from phishing and other “ishing” style attacks becoming both more common and more significant in their impact.

Third parties

We are increasingly using more and more third parties, including online tools, in our lives and in our schools, whether this is a cloud hosted MIS, a learning platform, quizzing app, website provider or a multitude of other solutions providers.   In each third party there is an additional risk.   And this risk is two-fold.    One part relates to an incident on this third party resulting in school data being breached, where the school as data controller, remains responsible.    The other part of this issue relates to the use of a third party to gain access to a schools systems, possibly through a business email compromise attack having gained access to a compromised email account within a third party, or it could involve using integration between the third parties solution and school systems.   Either way, I see third parties as the 2nd most significant risk which schools are exposed to.   Due diligence is key here in terms of ensuring appropriate checks are done on vendors in terms of their approach to security, etc, although I note these are often only superficial in nature in the information third parties may provide via their policies or via response to direct queries.   I suspect the other solution is simply least privilege and both limiting the access of third parties to school systems, plus in trying to limit the total number of third parties used.   Sadly this is often easier said than done.

Conclusion

Given the above as to the two main risks as I see them, and the acceptance that a cyber incident is a matter of a when rather than an “if” scenario, it therefore makes sense to play out the above scenarios as desktop exercise to consider how your school might respond.    Phishing can also be easily tested for through the use of a phishing test campaign, sending out a fake phishing email to see how users respond.   I would suggest in both of the above scenarios there isnt a huge amount schools can do to prevent an incident, although I will once again state the importance of doing the basics in terms of cyber such as using MFA, patching, least privilege, taking and testing backups and performing regular user awareness training.   So, if there is limited opportunities for preventative measures beyond the basis, then the key thing is to prepare for the most likely threat scenarios.   How would you respond to a compromised user account resulting in MIS data being exfiltrated for example or to a third party data solution suffering a data breach resulting in school data being leaked publicly?   Would police be involved?  What would you tell the press, parents and the wider community?   How would your school respond internally, including who would be involved in discussions around actions and who would have the authority needed to approve comms, etc, plus what roles would each person undertake?   And how might you deal with wellbeing and mental health during a high stress incident?   

It is better to consider these and other questions now, than waiting and having to answer them during an incident.    And maybe this is one aspect of cyber awareness month we neglect;  It isnt just about preventative measures and reducing the likelihood of an incident, it is also about acceptance that incidents will happen and therefore spending some time planning and preparing.

TEISS London 2023: Reflections

During September I managed to find myself in two industry level cyber/info security conferences, one of which I have already blogged about (See here).   This post focusses on the other event, being the TEISS London 2023 event which was a little more focussed on incident management rather than the previous event which was a little more generic.   So, what were my take-aways as relevant to education?

Incident Response

One of the key discussions across this particular event was in relation to the inevitable cyber incident and therefore the need to prepare.    Discussions arose around desktop exercises, the development of incident response playbooks and disaster recovery plans.    The key take-away for me was in the need to play through potential cyber incidents and to do this regularly.   We are not talking about once every few years, but as often as can be managed so that the relevant staff, both senior and IT technical, know how to respond when the inevitable issue arises.    It was also discussed, the need to carry out these desktop exercises with different groups of individuals in order to ensure that all are prepared.   Desktop exercising is definitely something I want to look towards repeating in the coming months, and building a process so that it doesn’t occur ad-hoc but more as part of  a regular process allowing for the review and improvement of the related processes with each test.

Concerning external factors

One of the presenters went into the risks associated with geopolitical issues, where issues in the geopolitical space often result in corresponding issues in the cyber arena.  From a schools point of view it is easy to wonder why this makes a difference;  Why would a nation state or similar focus on education?    I think the issue here is not so much an attacker focussing on education, but on the collateral damage which might impact education.   Now this collateral damage might be accidental however we also need to acknowledge the increasing use of cloud services;  This often means data and services hosted in various countries across the world so what is the potential risk where countries have disagreements and where some aggressive activity online results.   It is easy to say your school exists in Europe or the UK so this is unlikely however the presenter demonstrated some  aggressive cyber activity even within the UK and EU, so it therefore isnt unpredictable that this may happen again in the future.    For schools this means, as far as I am concerned, that we need to continue to do the basics plus prepare to manage an incident when it occurs.

Artificial Intelligence

AI once again factored in the discussion however at least one presenter suggested that where we are now is more akin to Machine Learning than AI.   I suspect this depends on your definition of both terms, with my definition having ML as a subset of AI.    The key message here was that the current instance of AI, generative AI, presents rather generic responses but quickly.   Its benefit, whether used for defence or attack, is its speed and ability to ingest huge amounts of data, however it is only in pairing with a human that it progresses beyond being “generic”.   In the future this may change, as we approach the “singularity” however for now and the near future AI is an assistant for us and for criminals, but doesn’t represent a significant innovative change in relation to cyber security;  good security with AI is little different to good security prior to generative AI.

Human Factors

The human factor and culture were a fair part of the discussion.    The cyber culture and “the way we do things around here” in relation to information security is key.   We need to build safe and secure practices into all we do and at all levels;  Easier to say than it is to do.    This also links to the fact that humans, and the wider user group which in schools would be students, staff, parents, visitors and contractors among others, continue to be involved in around 74% of breaches.   This means it is key that cyber security awareness training needs to hit all of these users and be regular rather than a once a year.    Additionally, if we assume we will suffer a cyber incident, how do we protect our IT staff and also those senior staff involved in incident response and management.   The stress levels will be very high, and as a result self-care may be lacking, but schools and other organisations have a duty of care for their staff, and during a cyber incident that duty of care may become all the mor important.   This is why, in my team anyway, I am introducing a role of “chief wellbeing officer” as part of our incident response plans.

Conclusion

The organisations at this particular event, similar to the previous cyber event, were generally large corporate entities yet for me the messaging may be all the more important for schools given we hold student data and student futures in our hands, and given the targeting of educational institutions.  How do we get more schools to attend these events?    I suspect events like these fall into the important but not urgent, where fixing a server issue or a device issue in a classroom is urgent and important, but then how do we ensure that school IT staff are prepared and preparing for cyber incidents?   Chicken or the egg issue maybe?   

Cyber incidents are inevitable and I have always said that “the smartest person in the room is the room” so if we can share with industry where I believe they have much more experience in this arena, then maybe we, as in schools, will be all the better for it.

Digital Citizenship

Its digital citizenship week this week so I thought I would share some thoughts. Now, I have discussed and raised the issue of the need for more time in schools to discuss digital citizenship.   Whether it is discussing the increasing need to be aware of cyber risks, or the increasing amount of data we are now sharing online or the increasing risk of our behaviours being influenced and manipulated by the tech tools we use, they all need discussion.    Schools and colleges are looking to prepare students for the uncertain, but clearly digital futures they face, but still the focus is on narrow coverage of “online safety” when the risks now extend way beyond the content being covered.

And all of this is before generative AI made its appearance and became so publicly available late in 2022.  Suddenly fake news is much easier to accomplish through generative AI tools that can easily modify content in terms of the video or audio, both being quick to achieve and also to achieve convincingly.    Suddenly the phishing emails which were often laden with spelling errors or design issues, can be fed through a generative AI solution such that the resultant output is convincing in its styling plus free from grammar and spelling errors.   In terms of influencing people through social media, generative AI allows for content creation to be automated with each piece of content being “unique” but with the common influencing message, far quicker than was possible previously.    We also have the issue, that as we all start to use more and more AI, such as the excellent generative AI tools available, we leak yet more data online, where the generative tools online are more powerful than ever in inferring yet further data.   At an event I attended recently it was suggested that if you fed your prompts from generative AI back into a generative AI solution and asked it to profile you it would do decent job of working out things like age, career, education, etc just based in the info you already put into generative AI tools.

So maybe post the free availability of ChatGPT and subsequently of so many other AI tools, or tools where generative AI such as ChatGPT is embedded, it becomes all the more important to discuss digital citizenship with our students.   And maybe generative AI, if it frees educators up from the more administrative and basic tasks of education, provides both the issue and the solution.  Maybe if generative AI and the AI solutions yet to come free us up from the mundane and the basic, maybe it will finally provide time and resources to cover digital citizenship at a time where it may be all the more important.

The path of the world is towards increasingly digital lives with the pace of digital technology advancement being quick.   Regulation and governance is slow by comparison leaving us with a need to fill the void.   I don’t have the answers for the future although I am positive as to the potential of technology to aid, enhance and even redefine our lives, however with this there is always a balance and therefore risks and challenges.   This is where digital citizenship in schools comes in, in providing opportunities for the risks and challenges, both current and potential future risks and challenges, to be discussed and explored.   We need to develop students who are aware and questioning of technology implications, rather than students who blindly adopt technology without consideration for the future.   I believe we have a long way to go to address this issue but every step, every additional discussion, every assembly, every lesson including reference to digital citizenship being an additional step in the right direction.

Image courtesy of Midjourney

EduTech Europe 2023

Its been a while since I have had to fly out to present at a conference, with the last time being almost 10 years ago flying to Kuwait from the UAE to present, but recently I found myself in Amsterdam presenting a cyber session at the EduTech Europe event.    I suppose this means I can claim to be an international speaker for all that might be worth!   In terms of the event itself I found it to be very useful indeed so I thought I would, as I have for other events, share my thoughts.

Education and Disruption

There was a fair contingent of UK EdTech experts and gurus at the event and it was great to catch up with many of them and to watch their various sessions.   This continues to be one of the big reasons for events like this, in the networking and opportunity to share thoughts and ideas, however I think EduTechEurope did particularly well at this as there seemed to be more time to allow for discussion.

Of particular note was some discussion with Gemma Gwilliam and Emma Darcy in relation to the education system as it currently exists.   Emma referred to “brave leadership” in her session, in response to a question from the floor relating to how the current curriculum doesn’t prepare students for the digital world which exists and which lies ahead.   This struck me as highlighting that those schools seeking to do the right thing for their students are often having to break away from the established education system.   In Emma’s case one aspect of this was re-imaging the school day and timetable to make time available for digital and the things that matter even when these are not within the curriculum or something the current education system seeks to develop or assess.   Over lunch on day two, Emma, Gemma and myself had a really interesting discussion as to how we as a group, along with some others, might seek to support the “breaking” of education through constructive disruption.    I left the event feeling energised and excited from the discussions and look forward to sharing the progress we as a group make over the coming months, and possibly ahead of EduTechEurope 2024.

Digital Strategy

Digital strategy in schools has been discussed often over the last 5 or 10 years so isnt something new.   The pandemic also brought the importance of digital solutions to the forefront further stimulating the discussion however it was refreshing to hear discussion of an often forgotten aspect in relation to wellbeing.   Technology allows us to do more and to be more efficient and quicker, but does this “doing more” have a negative impact on wellbeing and on staff workload?   The wellbeing aspect of digital strategy is something we need to explore much more, as is the challenge regarding the “additive” approach to education which has seen us forever seeking to get better, which is fair and enviable, but at the expense of increasing workload and challenges around wellbeing and mental health.

AI in education

Laura Knight delivered her usual high quality and thought provoking session, this time on AI in education.   She explored the benefits and challenges in relation to AI which is something I myself have explored in some of my recent presentations.   The most interesting part of her presentation for me though was her discussion on agility and education.    This is something I see as a key challenge given education has changed little over the last few centuries, albeit the whiteboard has been replaced by a projector or panel.   This is against a backdrop of rapid technological development.    We have therefore needed to re-evaluate our current education system for some time, including how we assess students, and how students progress through formal education.   It may be that AI will now prove as a potential catalyst to make this change happen.   It is also likely that in the first instance we will need the “brave leadership” and the positive disruption from a small number of schools to lead the way for schools and colleges in general.

Cyber Resilience

The panel session I was involved in looked at “cyber-proofing your school” and it was great to be on such a diverse panel chaired by Abid Patel.Some really good discussion with two particular notable points for me. One being the fact that schools dont and will not ever have the finances and other resources to cyber “proof” our schools so we can only focus on doing the basics and preparing for an incident. The second take away for me was the fact the session was the last session of day 1, with this often being the case for cyber security awareness sessions, given limited time and placed at the end of an inset day; We need to realise that the potential impact of a cyber incident means we need to move cyber up the pecking and priority order, treating it not as an IT issue but as an organisation wide issue.

Travel issues

Any event I attend cannot be without its travel issues and this event was no different.   Firstly, it was the short stay carpark, a nice 2 or 3 mins walk from the departure terminal, being closed leading to a longer and unplanned 15 to 20 mins walk to be met with a snaking long queue for security.  Panic ensued however thankfully the Bristol security team were efficient and quickly saw us all through.   Once in Amsterdam I couldn’t check into the hotel when I arrived leading to having to attend the conference a little dishevelled from travel and rushing around.

And on the homeward bound leg it was plane delays and an amusing moment at the gate as shown on my app, looking at a plane at the gate but unable to get to that plane, as the plane slowly backed away from the gate;  Myself and another passenger wondered if that was our plane and we had missed it but thankfully it wasn’t our plane, and our plane was still 1 hour from arriving.   And once back in the UK, exiting airport parking, my ticket wouldn’t work to open the barrier so I pressed for help.  The helpful voice took some details and then went quiet.  After about 10 minutes sat waiting I pressed for help once more and got the same helpful voice who for reasons unknown had forgotten about me;  The barrier then promptly raised and I exited the carpark, promptly turning down the wrong exit from a roundabout and nearly going back into the same carpark.   A little three point turn and I finally was heading in the correct direction.

Conclusion

It was a very worthwhile trip, catching up with so many great people and also watching and listening to a number of really useful and informative sessions.  It was also nice to listen to a broader range of speakers from across Europe rather than just UK only as is common in UK based events.   This made for a richer discussion including discussion of education within a variety of different national and regional, and stage, related contexts.   As always the network side of things was a key benefit of the event and I look forward to some follow up from some of the really interesting and exciting discussions and plans that were created.

AI risks and challenges continued

This is my 2nd post following on from my session speaking on AI in education at the Embracing AI event arranged by Elementary Technology and the ANME in Leeds last week.   Continuing from my previous post I once again look at the risks and challenges of AI in education rather than the benefits, although I continue to be very positive about the potential for AI in schools and colleges, and the need for all schools to begin exploring and experimenting.

Homogeneity

The discussion of AI is a broad one however at the moment the available generative AI solutions are still rather narrow in their abilities.   The availability of multi-modal generative AI solutions is a step forward but the solutions are still rather narrow, being largely focussed on a statistical analysis of the training data to arrive at the most probable response, with a little randomness thrown in for good measure.     As such, although the responses to a repeated prompt may be different, taken holistically they tend towards an average response and here in lies a challenge.   If the responses from generative AI tend towards an average response and we continue to make more and more use of generative AI, wont this result in content, as produced by humans using AI, regressing to the mean?   And what might this mean for human diversity and creativity?    To cite an example, I remember seeing on social media an email chain where an individual replied asking the sender to not use AI in future, to which the sender replied, I didn’t use AI, I’m neuro-diverse. What might increasing AI use mean for those who diverge from the average, and what does it even mean to be “average”?

Originality

The issue of originality is a big one for education.   The JCQ guidelines in relation to A-Level refer to the need for “All coursework submitted for assessment must be the candidate’s own work” but what does this mean in a world of generative AI?   If a student has difficulty working out how to get started and therefore makes use of a generative AI solution to get them started, is the resultant work still their own work?   What about a student who develops a piece of work but then, conscious of their SEN needs and difficulties with language processing, asks a generative AI solution to read over the content and correct any errors, or maybe even improve the readability of the piece of work;  Is this still the students own work?   Education in general will need to seek to address this challenge.   The fact is that we have used coursework assessment evidence as a proxy for evidence of learning for some time, however we may now need to rethink this given the availability of the many generative AI solutions which are now so easily accessible.    And before I move on I need to briefly mention AI and plagiarism detection tools;   They simply don’t work with any reliability, so in my view, shouldn’t be used.   I don’t think there is much more that needs said about such tools, other than that.

Over-reliance

We humans love convenience however as in most, if not all things, there is a balance to be had and for every advantage there is a risk or challenge.   As we come to use AI more and more often due to the benefits we may become over-reliant on it and therefore fail to consider the drawbacks.   Consider the conventional library based research;  When I was studying, pre-Google, you had to visit a library for resources and in doing so you quite often found new sources which you hadn’t considered, through accidentally picking out a book or through using the reference list in one book, leading to another book, and onwards.   The world of Google removed some of this as we could now conveniently get the right resources from our prompts.   Google would return lists of sources but how many of us went beyond the first page of responses?    Now step in generative AI which will not only provide references but can actually provide the answer to an assignment question.    But the drawback is Google (remember Google search uses AI) and now Generative AI may result in a reduction in broader reading and an increasingly reliance on the google search or generative AI response.   Possibly over time we might become less able, through over-use, to even identify when AI provides incorrect or incomplete information.   There is a key need to find an appropriate balance in our use of AI, balancing its convenience against our reliance.

Transparency and ethics

Another issue which will likely grow in relation to AI is that of transparency and of ethics.    In terms of transparency, do people need to know where an AI is in use and to what extent it is used.   Consider the earlier discussion of student coursework and it is clear that students should be stating where generative AI is used, but what about a voice based AI solution answering a helpline or school reception desk;   Does the caller need to know they are dealing with an AI rather than a human?   What about the AI in a learning management platform;  How can we explain the decisions made by the AI in relation to the learning path it provides a student?  And if we are unable to explain how the platform directs the students and therefore are unable to evidence whether it may be positively or negatively impacting the student, is it therefore ethical to use the platform?   The ethical question itself may become a significant one, focusing not on how we can use AI but on should we be using it for a given purpose.     The ethics of AI are likely to be a difficult issue to unpick given the general black-box nature of such solutions although some solutions providers are looking at ways to surface the inner workings of their AI solutions to provide more transparency and help answer the ethical question.   I however suspect that most vendors will be focussed on the how of using AI as this drives their financial bottom line.   The question of whether they should provide certain solutions or configure AI in certain ways will likely be confined to the future and the post mortem resulting from where thing go wrong.

Conclusion

As I said at the outset I am very positive about the potential for AI in education, and beyond, but I also believe we need to be aware and consider the possible risks so we can innovate and explore, but safely and responsibly.

   

AI risks and challenges

I once again had the opportunity to speak in relation to AI in education earlier in the week, this time at the Elementary Technology and ANME event in Leeds.   Now this time my presentation was very much focussed on the risks and challenges of AI in education rather than the benefits, leaving the benefits and some practical uses of AI to other presenters and to the workshop style sessions conducted in the afternoon.    This post marks the first post of two looking at the risks and challenges I discussed during my session.

Bias

The potential for Bias in AI models and in particular in the current raft of Generative AI solutions was the first of the challenges I discussed.   In order to illustrate the issue I made use of Midjourney asking it separately for a picture of a nurse in a hospital setting and then for a picture of a doctor, in both cases not stating the gender and allowing the AI to infer this.   Unsurprisingly the AI produced 4 images of a female nurse and 4 images of a male doctor easily demonstrating an obvious gender bias.   Now for me the bias here is obvious and therefore easily identified and corrected through an appropriate prompt asking for a mix of genders, but such bias are not always so identifiable.   What about the potential for bias in learning materials presented to a student via an AI enabled learning platform, or the choice of text returned to a student by a generative AI solution?   And if we cant identify the bias how are we to address it?    I will however note at this point we also have to consider human bias, as it is unfair to expect an AI solution to be without bias when we developed the solution, provided the training data, etc, and we are not without bias ourselves.

Data Privacy

Lots of individuals, including myself, are already providing data to AI solutions but do we truly know how this data will be used, who it might be shared with, what additional data might be inferred from it, etc, and we need to know this now as it is currently, but also the future intentions of those we provide data to.   The DfE makes clear that school personal data shouldn’t be provided to generative AI solutions however what if attempts are made to pseudonymize the data;  What level of pseudonymization is appropriate?   And then there is the issue of inferred data;  I recently heard the suggestion that, if we fed all of our AI prompts back into an AI solution and asked it to provide a profile for the user, it would do a reasonable job of the task possibly identifying age, work sector and more.    AI and generative AI offer a massive convenience, efficiency and speed gain however the trade off is giving more data away;  Is this a fair trade off and one which we are consciously accepting?

Hallucinations

The issue of AI presenting made up information was another one which I found easy to recreate.   I note this is often referred to a “hallucination” however I am not keen on the term as it anthropomorphises the current generative AI solutions we have when I still believe the solutions we currently have are still narrow in terms of their focus and therefore more akin to Machine Learning, a subset of the broader AI technologies.   To demonstrate this issue I used a solution we have been working on which helps teachers generate parental reports, putting a list of teacher provided strengths and areas for improvement into readable sentences which teachers can then review and update.    We simply failed to provide the AI with any strengths or areas for improvement.   The AI however still went on to produce a report however in the absence of any teacher provided strengths or areas for improvement, it simply made them up.    For me this highlights the fact that AI solutions cannot be considered as a replacement for humans, but instead are a tool or assistant to humans.

Cyber

The issue of cyber security or Information security and AI is quite a significant one from a variety of different perspectives.    First there is the potential use of AI in attacks against organisations including schools.   The existence of criminally focussed generative AI tools has already been reported in WormGPT and FraudGPT.    Generative AI makes it easy to quickly create believable emails or usable code, independent of whether the purpose of the code is benign or if it for a phishing email or malware.    Additionally there is the issue of AI as a new attack surface which cyber criminals might seek to leverage.   This might be through the use of prompt injection to manipulate the outputs from AI solutions, possibly providing fake links or validating organisations or posts which are malicious or fictious.   Attacks could also involve poisoning of the AI model itself, such that the models behaviour and responses are modified to suit the malicious ends of an attacker.   And these are only a couple of implications in relation to AI and cyber security.

Conclusion

I think it is important to acknowledge that my outlook on AI in general and in education is a largely positive one, however I think it is important that we are realistic and accept the existence of a balance; a balance between the benefits and the risks and challenges, where to make use of the benefits we need to be at least aware and consider the possible balancing drawbacks and risks. This post therefore is about making sure we are aware of the risks, with my next post digging into a few further risks and challenges.

As Darren White put it in his presentation at the Embracing AI event, “Be bold but be responsible”

Using AI: Preparing a conference presentation.

Last week I presented at a conference, speaking about AI in education, so what better way to create the presentation than to actually use AI tools.   So, I thought I would share some experiences of the process.

The main tool I made use of in preparing my presentation was Canva which I became aware of after seeing Darren White do a short demo of it at a meeting a couple of months ago.    Canva allowed me to get the ball rolling quickly and easily, using their Magic Create functionality to create the bare bones of my presentation including some nice graphics with the only requirement from my end being a single sentence as a prompt.

Now, the presentation needs to be something I was happy delivering, something that includes a bit of my identity, experience and outlook.   The Canva AI generated presentation, although it included a simple structure with some key points, it just wasn’t me.    But it did give me a good starting point including graphics after maybe 1 or 2 minutes of effort as opposed to the ½ hour it would likely have taken me to get to that point.

At this stage I set about moving slides around and adding new slides to build a structure for the session which felt a bit more like me and something I would present.  I note, I could have possibly refined my prompt and worked at it that way, however for me it was easier to work directly with the slides as I sought to align them with the thinking in my head, where sometimes it wasn’t the slides on the screen which were being reordered but the order in my head.    As I continued to work on this the presentation started to take shape.    Finding graphics and images for the slide was easy using Canva’s search tools, and from there it was easy to drop images straight into my presentation, and where the images werent quite right I could easily change them using the AI image editing tools in Canva.   I could easily remove elements of an image or change elements at will in order to get me the image I needed, which best suited the slide I was working on.

Additionally I made a little use of MidJourney and DALL-E2 to generate additional impacts, plus used ChatGPT for the development of additional text content and some of my script.   As with most technology usage, it was switching between different generative AI tools for different purposes and I suspect, I could have used even more apps if I felt appropriate;   I note I suspect the core of Canva, ChatGPT, MidJourney, DALL-E and maybe Bard should be good enough for most possible purposes.

Did the AI tools do the job for me?  

No, I was looking to create a presentation, where I would be presenting my thoughts and ideas.   Generative AI doesn’t (yet) have access to my thinking, where this thinking was constantly changing and evolving, with me refining my message as I built my presentation.    What generative AI does provide however is tools to make things easier, quicker and more efficient, so I could create the bare bones of a presentation in a couple of minutes rather than 30 minutes.   I could find images and quickly insert in moments rather than spending time searching via google or image tools, plus I could easily change images to suit my needs including changing their composition, all taking minutes rather than the hours this might have taken me in the past manipulating images in Photoshop.

Generative AI is a powerful tool to help me do the basics quickly allowing me to spend more time making the presentation I was creating a reflection of me, of a human being with experience, skills and a personal, albeit often changing, outlook on the world, on education and on technology.

Conclusion

Now I hope the presentation was well received but only the feedback will tell me that, although it did seem to go reasonably well.  I suspect through the use of generative AI tools I spent less time on the actual slide designs and more time on the actual content of the session and on what I was going to say.   Hopefully this made for a more engaging session.    I think the key takeaway as that AI, as it is now, doesn’t do things for you, it isnt close to replacing us humans, but it can make us more effective and efficient.  It makes me think back to that old quote about teachers and tech;   Technology wont replace teachers but teachers that use tech will replace those who do not.    In the world of generative AI the word “technology” can be replaced by what I believe to be one of the most disruptive technologies we have seen an decades;  AI.    The question therefore is how do we ensure the disruption is to the betterment of us as individuals, as groups and organisations, and society as a whole.  How do we use and work with AI, while being aware and conscious of any risks or drawbacks?