Cyber Awareness Month: Cyber threats

October is cyber awareness month and an important opportunity to discuss and highlight cyber security and cyber threats.   Now cyber security and particularly the development of a culture of positive cyber security practices is an ongoing  requirement, however cyber awareness month provides a valuable chance to highlight cyber security and ensure it is the subject of discussion.    Due to this I would briefly like to share some of my thoughts in relation to the main cyber threats as they current exist for schools and colleges.

Phishing, vishing  and other “ishing” attacks.

For me, phishing and similar attacks based on SMS, messaging services, social media, phone calls and even malicious QR codes continue to be one of the most common attacks aimed either at compromising a user account or at compromising a target machine through malware.   One of the big issues here is that we ae living in an increasingly busy world dealing with ever increasing numbers of emails, messages, etc.   And in this busyness it is “human to err”, to click a malicious link, to reply to a malicious email or provide user credentials to a convincing looking, but fake, login page.    Continued user awareness training can help in this area, making users more aware of the signs to look for in malicious messaging, but it can only go so far especially as people are becoming increasingly busy.   For me, the key is for users just to have a fraction more time to review messages before acting, giving their conscious brain just that bit more time to engage and identify the unusual features of a malicious email, message or call.    I am not talking about huge amounts of time, only fractions of a second.   That said this time needs to come from somewhere in a time bounded world so we are going to need to make some compromises to fine this time as otherwise we are only likely to see data breaches resulting from phishing and other “ishing” style attacks becoming both more common and more significant in their impact.

Third parties

We are increasingly using more and more third parties, including online tools, in our lives and in our schools, whether this is a cloud hosted MIS, a learning platform, quizzing app, website provider or a multitude of other solutions providers.   In each third party there is an additional risk.   And this risk is two-fold.    One part relates to an incident on this third party resulting in school data being breached, where the school as data controller, remains responsible.    The other part of this issue relates to the use of a third party to gain access to a schools systems, possibly through a business email compromise attack having gained access to a compromised email account within a third party, or it could involve using integration between the third parties solution and school systems.   Either way, I see third parties as the 2nd most significant risk which schools are exposed to.   Due diligence is key here in terms of ensuring appropriate checks are done on vendors in terms of their approach to security, etc, although I note these are often only superficial in nature in the information third parties may provide via their policies or via response to direct queries.   I suspect the other solution is simply least privilege and both limiting the access of third parties to school systems, plus in trying to limit the total number of third parties used.   Sadly this is often easier said than done.

Conclusion

Given the above as to the two main risks as I see them, and the acceptance that a cyber incident is a matter of a when rather than an “if” scenario, it therefore makes sense to play out the above scenarios as desktop exercise to consider how your school might respond.    Phishing can also be easily tested for through the use of a phishing test campaign, sending out a fake phishing email to see how users respond.   I would suggest in both of the above scenarios there isnt a huge amount schools can do to prevent an incident, although I will once again state the importance of doing the basics in terms of cyber such as using MFA, patching, least privilege, taking and testing backups and performing regular user awareness training.   So, if there is limited opportunities for preventative measures beyond the basis, then the key thing is to prepare for the most likely threat scenarios.   How would you respond to a compromised user account resulting in MIS data being exfiltrated for example or to a third party data solution suffering a data breach resulting in school data being leaked publicly?   Would police be involved?  What would you tell the press, parents and the wider community?   How would your school respond internally, including who would be involved in discussions around actions and who would have the authority needed to approve comms, etc, plus what roles would each person undertake?   And how might you deal with wellbeing and mental health during a high stress incident?   

It is better to consider these and other questions now, than waiting and having to answer them during an incident.    And maybe this is one aspect of cyber awareness month we neglect;  It isnt just about preventative measures and reducing the likelihood of an incident, it is also about acceptance that incidents will happen and therefore spending some time planning and preparing.

TEISS London 2023: Reflections

During September I managed to find myself in two industry level cyber/info security conferences, one of which I have already blogged about (See here).   This post focusses on the other event, being the TEISS London 2023 event which was a little more focussed on incident management rather than the previous event which was a little more generic.   So, what were my take-aways as relevant to education?

Incident Response

One of the key discussions across this particular event was in relation to the inevitable cyber incident and therefore the need to prepare.    Discussions arose around desktop exercises, the development of incident response playbooks and disaster recovery plans.    The key take-away for me was in the need to play through potential cyber incidents and to do this regularly.   We are not talking about once every few years, but as often as can be managed so that the relevant staff, both senior and IT technical, know how to respond when the inevitable issue arises.    It was also discussed, the need to carry out these desktop exercises with different groups of individuals in order to ensure that all are prepared.   Desktop exercising is definitely something I want to look towards repeating in the coming months, and building a process so that it doesn’t occur ad-hoc but more as part of  a regular process allowing for the review and improvement of the related processes with each test.

Concerning external factors

One of the presenters went into the risks associated with geopolitical issues, where issues in the geopolitical space often result in corresponding issues in the cyber arena.  From a schools point of view it is easy to wonder why this makes a difference;  Why would a nation state or similar focus on education?    I think the issue here is not so much an attacker focussing on education, but on the collateral damage which might impact education.   Now this collateral damage might be accidental however we also need to acknowledge the increasing use of cloud services;  This often means data and services hosted in various countries across the world so what is the potential risk where countries have disagreements and where some aggressive activity online results.   It is easy to say your school exists in Europe or the UK so this is unlikely however the presenter demonstrated some  aggressive cyber activity even within the UK and EU, so it therefore isnt unpredictable that this may happen again in the future.    For schools this means, as far as I am concerned, that we need to continue to do the basics plus prepare to manage an incident when it occurs.

Artificial Intelligence

AI once again factored in the discussion however at least one presenter suggested that where we are now is more akin to Machine Learning than AI.   I suspect this depends on your definition of both terms, with my definition having ML as a subset of AI.    The key message here was that the current instance of AI, generative AI, presents rather generic responses but quickly.   Its benefit, whether used for defence or attack, is its speed and ability to ingest huge amounts of data, however it is only in pairing with a human that it progresses beyond being “generic”.   In the future this may change, as we approach the “singularity” however for now and the near future AI is an assistant for us and for criminals, but doesn’t represent a significant innovative change in relation to cyber security;  good security with AI is little different to good security prior to generative AI.

Human Factors

The human factor and culture were a fair part of the discussion.    The cyber culture and “the way we do things around here” in relation to information security is key.   We need to build safe and secure practices into all we do and at all levels;  Easier to say than it is to do.    This also links to the fact that humans, and the wider user group which in schools would be students, staff, parents, visitors and contractors among others, continue to be involved in around 74% of breaches.   This means it is key that cyber security awareness training needs to hit all of these users and be regular rather than a once a year.    Additionally, if we assume we will suffer a cyber incident, how do we protect our IT staff and also those senior staff involved in incident response and management.   The stress levels will be very high, and as a result self-care may be lacking, but schools and other organisations have a duty of care for their staff, and during a cyber incident that duty of care may become all the mor important.   This is why, in my team anyway, I am introducing a role of “chief wellbeing officer” as part of our incident response plans.

Conclusion

The organisations at this particular event, similar to the previous cyber event, were generally large corporate entities yet for me the messaging may be all the more important for schools given we hold student data and student futures in our hands, and given the targeting of educational institutions.  How do we get more schools to attend these events?    I suspect events like these fall into the important but not urgent, where fixing a server issue or a device issue in a classroom is urgent and important, but then how do we ensure that school IT staff are prepared and preparing for cyber incidents?   Chicken or the egg issue maybe?   

Cyber incidents are inevitable and I have always said that “the smartest person in the room is the room” so if we can share with industry where I believe they have much more experience in this arena, then maybe we, as in schools, will be all the better for it.

Digital Citizenship

Its digital citizenship week this week so I thought I would share some thoughts. Now, I have discussed and raised the issue of the need for more time in schools to discuss digital citizenship.   Whether it is discussing the increasing need to be aware of cyber risks, or the increasing amount of data we are now sharing online or the increasing risk of our behaviours being influenced and manipulated by the tech tools we use, they all need discussion.    Schools and colleges are looking to prepare students for the uncertain, but clearly digital futures they face, but still the focus is on narrow coverage of “online safety” when the risks now extend way beyond the content being covered.

And all of this is before generative AI made its appearance and became so publicly available late in 2022.  Suddenly fake news is much easier to accomplish through generative AI tools that can easily modify content in terms of the video or audio, both being quick to achieve and also to achieve convincingly.    Suddenly the phishing emails which were often laden with spelling errors or design issues, can be fed through a generative AI solution such that the resultant output is convincing in its styling plus free from grammar and spelling errors.   In terms of influencing people through social media, generative AI allows for content creation to be automated with each piece of content being “unique” but with the common influencing message, far quicker than was possible previously.    We also have the issue, that as we all start to use more and more AI, such as the excellent generative AI tools available, we leak yet more data online, where the generative tools online are more powerful than ever in inferring yet further data.   At an event I attended recently it was suggested that if you fed your prompts from generative AI back into a generative AI solution and asked it to profile you it would do decent job of working out things like age, career, education, etc just based in the info you already put into generative AI tools.

So maybe post the free availability of ChatGPT and subsequently of so many other AI tools, or tools where generative AI such as ChatGPT is embedded, it becomes all the more important to discuss digital citizenship with our students.   And maybe generative AI, if it frees educators up from the more administrative and basic tasks of education, provides both the issue and the solution.  Maybe if generative AI and the AI solutions yet to come free us up from the mundane and the basic, maybe it will finally provide time and resources to cover digital citizenship at a time where it may be all the more important.

The path of the world is towards increasingly digital lives with the pace of digital technology advancement being quick.   Regulation and governance is slow by comparison leaving us with a need to fill the void.   I don’t have the answers for the future although I am positive as to the potential of technology to aid, enhance and even redefine our lives, however with this there is always a balance and therefore risks and challenges.   This is where digital citizenship in schools comes in, in providing opportunities for the risks and challenges, both current and potential future risks and challenges, to be discussed and explored.   We need to develop students who are aware and questioning of technology implications, rather than students who blindly adopt technology without consideration for the future.   I believe we have a long way to go to address this issue but every step, every additional discussion, every assembly, every lesson including reference to digital citizenship being an additional step in the right direction.

Image courtesy of Midjourney

EduTech Europe 2023

Its been a while since I have had to fly out to present at a conference, with the last time being almost 10 years ago flying to Kuwait from the UAE to present, but recently I found myself in Amsterdam presenting a cyber session at the EduTech Europe event.    I suppose this means I can claim to be an international speaker for all that might be worth!   In terms of the event itself I found it to be very useful indeed so I thought I would, as I have for other events, share my thoughts.

Education and Disruption

There was a fair contingent of UK EdTech experts and gurus at the event and it was great to catch up with many of them and to watch their various sessions.   This continues to be one of the big reasons for events like this, in the networking and opportunity to share thoughts and ideas, however I think EduTechEurope did particularly well at this as there seemed to be more time to allow for discussion.

Of particular note was some discussion with Gemma Gwilliam and Emma Darcy in relation to the education system as it currently exists.   Emma referred to “brave leadership” in her session, in response to a question from the floor relating to how the current curriculum doesn’t prepare students for the digital world which exists and which lies ahead.   This struck me as highlighting that those schools seeking to do the right thing for their students are often having to break away from the established education system.   In Emma’s case one aspect of this was re-imaging the school day and timetable to make time available for digital and the things that matter even when these are not within the curriculum or something the current education system seeks to develop or assess.   Over lunch on day two, Emma, Gemma and myself had a really interesting discussion as to how we as a group, along with some others, might seek to support the “breaking” of education through constructive disruption.    I left the event feeling energised and excited from the discussions and look forward to sharing the progress we as a group make over the coming months, and possibly ahead of EduTechEurope 2024.

Digital Strategy

Digital strategy in schools has been discussed often over the last 5 or 10 years so isnt something new.   The pandemic also brought the importance of digital solutions to the forefront further stimulating the discussion however it was refreshing to hear discussion of an often forgotten aspect in relation to wellbeing.   Technology allows us to do more and to be more efficient and quicker, but does this “doing more” have a negative impact on wellbeing and on staff workload?   The wellbeing aspect of digital strategy is something we need to explore much more, as is the challenge regarding the “additive” approach to education which has seen us forever seeking to get better, which is fair and enviable, but at the expense of increasing workload and challenges around wellbeing and mental health.

AI in education

Laura Knight delivered her usual high quality and thought provoking session, this time on AI in education.   She explored the benefits and challenges in relation to AI which is something I myself have explored in some of my recent presentations.   The most interesting part of her presentation for me though was her discussion on agility and education.    This is something I see as a key challenge given education has changed little over the last few centuries, albeit the whiteboard has been replaced by a projector or panel.   This is against a backdrop of rapid technological development.    We have therefore needed to re-evaluate our current education system for some time, including how we assess students, and how students progress through formal education.   It may be that AI will now prove as a potential catalyst to make this change happen.   It is also likely that in the first instance we will need the “brave leadership” and the positive disruption from a small number of schools to lead the way for schools and colleges in general.

Cyber Resilience

The panel session I was involved in looked at “cyber-proofing your school” and it was great to be on such a diverse panel chaired by Abid Patel.Some really good discussion with two particular notable points for me. One being the fact that schools dont and will not ever have the finances and other resources to cyber “proof” our schools so we can only focus on doing the basics and preparing for an incident. The second take away for me was the fact the session was the last session of day 1, with this often being the case for cyber security awareness sessions, given limited time and placed at the end of an inset day; We need to realise that the potential impact of a cyber incident means we need to move cyber up the pecking and priority order, treating it not as an IT issue but as an organisation wide issue.

Travel issues

Any event I attend cannot be without its travel issues and this event was no different.   Firstly, it was the short stay carpark, a nice 2 or 3 mins walk from the departure terminal, being closed leading to a longer and unplanned 15 to 20 mins walk to be met with a snaking long queue for security.  Panic ensued however thankfully the Bristol security team were efficient and quickly saw us all through.   Once in Amsterdam I couldn’t check into the hotel when I arrived leading to having to attend the conference a little dishevelled from travel and rushing around.

And on the homeward bound leg it was plane delays and an amusing moment at the gate as shown on my app, looking at a plane at the gate but unable to get to that plane, as the plane slowly backed away from the gate;  Myself and another passenger wondered if that was our plane and we had missed it but thankfully it wasn’t our plane, and our plane was still 1 hour from arriving.   And once back in the UK, exiting airport parking, my ticket wouldn’t work to open the barrier so I pressed for help.  The helpful voice took some details and then went quiet.  After about 10 minutes sat waiting I pressed for help once more and got the same helpful voice who for reasons unknown had forgotten about me;  The barrier then promptly raised and I exited the carpark, promptly turning down the wrong exit from a roundabout and nearly going back into the same carpark.   A little three point turn and I finally was heading in the correct direction.

Conclusion

It was a very worthwhile trip, catching up with so many great people and also watching and listening to a number of really useful and informative sessions.  It was also nice to listen to a broader range of speakers from across Europe rather than just UK only as is common in UK based events.   This made for a richer discussion including discussion of education within a variety of different national and regional, and stage, related contexts.   As always the network side of things was a key benefit of the event and I look forward to some follow up from some of the really interesting and exciting discussions and plans that were created.

AI risks and challenges continued

This is my 2nd post following on from my session speaking on AI in education at the Embracing AI event arranged by Elementary Technology and the ANME in Leeds last week.   Continuing from my previous post I once again look at the risks and challenges of AI in education rather than the benefits, although I continue to be very positive about the potential for AI in schools and colleges, and the need for all schools to begin exploring and experimenting.

Homogeneity

The discussion of AI is a broad one however at the moment the available generative AI solutions are still rather narrow in their abilities.   The availability of multi-modal generative AI solutions is a step forward but the solutions are still rather narrow, being largely focussed on a statistical analysis of the training data to arrive at the most probable response, with a little randomness thrown in for good measure.     As such, although the responses to a repeated prompt may be different, taken holistically they tend towards an average response and here in lies a challenge.   If the responses from generative AI tend towards an average response and we continue to make more and more use of generative AI, wont this result in content, as produced by humans using AI, regressing to the mean?   And what might this mean for human diversity and creativity?    To cite an example, I remember seeing on social media an email chain where an individual replied asking the sender to not use AI in future, to which the sender replied, I didn’t use AI, I’m neuro-diverse. What might increasing AI use mean for those who diverge from the average, and what does it even mean to be “average”?

Originality

The issue of originality is a big one for education.   The JCQ guidelines in relation to A-Level refer to the need for “All coursework submitted for assessment must be the candidate’s own work” but what does this mean in a world of generative AI?   If a student has difficulty working out how to get started and therefore makes use of a generative AI solution to get them started, is the resultant work still their own work?   What about a student who develops a piece of work but then, conscious of their SEN needs and difficulties with language processing, asks a generative AI solution to read over the content and correct any errors, or maybe even improve the readability of the piece of work;  Is this still the students own work?   Education in general will need to seek to address this challenge.   The fact is that we have used coursework assessment evidence as a proxy for evidence of learning for some time, however we may now need to rethink this given the availability of the many generative AI solutions which are now so easily accessible.    And before I move on I need to briefly mention AI and plagiarism detection tools;   They simply don’t work with any reliability, so in my view, shouldn’t be used.   I don’t think there is much more that needs said about such tools, other than that.

Over-reliance

We humans love convenience however as in most, if not all things, there is a balance to be had and for every advantage there is a risk or challenge.   As we come to use AI more and more often due to the benefits we may become over-reliant on it and therefore fail to consider the drawbacks.   Consider the conventional library based research;  When I was studying, pre-Google, you had to visit a library for resources and in doing so you quite often found new sources which you hadn’t considered, through accidentally picking out a book or through using the reference list in one book, leading to another book, and onwards.   The world of Google removed some of this as we could now conveniently get the right resources from our prompts.   Google would return lists of sources but how many of us went beyond the first page of responses?    Now step in generative AI which will not only provide references but can actually provide the answer to an assignment question.    But the drawback is Google (remember Google search uses AI) and now Generative AI may result in a reduction in broader reading and an increasingly reliance on the google search or generative AI response.   Possibly over time we might become less able, through over-use, to even identify when AI provides incorrect or incomplete information.   There is a key need to find an appropriate balance in our use of AI, balancing its convenience against our reliance.

Transparency and ethics

Another issue which will likely grow in relation to AI is that of transparency and of ethics.    In terms of transparency, do people need to know where an AI is in use and to what extent it is used.   Consider the earlier discussion of student coursework and it is clear that students should be stating where generative AI is used, but what about a voice based AI solution answering a helpline or school reception desk;   Does the caller need to know they are dealing with an AI rather than a human?   What about the AI in a learning management platform;  How can we explain the decisions made by the AI in relation to the learning path it provides a student?  And if we are unable to explain how the platform directs the students and therefore are unable to evidence whether it may be positively or negatively impacting the student, is it therefore ethical to use the platform?   The ethical question itself may become a significant one, focusing not on how we can use AI but on should we be using it for a given purpose.     The ethics of AI are likely to be a difficult issue to unpick given the general black-box nature of such solutions although some solutions providers are looking at ways to surface the inner workings of their AI solutions to provide more transparency and help answer the ethical question.   I however suspect that most vendors will be focussed on the how of using AI as this drives their financial bottom line.   The question of whether they should provide certain solutions or configure AI in certain ways will likely be confined to the future and the post mortem resulting from where thing go wrong.

Conclusion

As I said at the outset I am very positive about the potential for AI in education, and beyond, but I also believe we need to be aware and consider the possible risks so we can innovate and explore, but safely and responsibly.

   

AI risks and challenges

I once again had the opportunity to speak in relation to AI in education earlier in the week, this time at the Elementary Technology and ANME event in Leeds.   Now this time my presentation was very much focussed on the risks and challenges of AI in education rather than the benefits, leaving the benefits and some practical uses of AI to other presenters and to the workshop style sessions conducted in the afternoon.    This post marks the first post of two looking at the risks and challenges I discussed during my session.

Bias

The potential for Bias in AI models and in particular in the current raft of Generative AI solutions was the first of the challenges I discussed.   In order to illustrate the issue I made use of Midjourney asking it separately for a picture of a nurse in a hospital setting and then for a picture of a doctor, in both cases not stating the gender and allowing the AI to infer this.   Unsurprisingly the AI produced 4 images of a female nurse and 4 images of a male doctor easily demonstrating an obvious gender bias.   Now for me the bias here is obvious and therefore easily identified and corrected through an appropriate prompt asking for a mix of genders, but such bias are not always so identifiable.   What about the potential for bias in learning materials presented to a student via an AI enabled learning platform, or the choice of text returned to a student by a generative AI solution?   And if we cant identify the bias how are we to address it?    I will however note at this point we also have to consider human bias, as it is unfair to expect an AI solution to be without bias when we developed the solution, provided the training data, etc, and we are not without bias ourselves.

Data Privacy

Lots of individuals, including myself, are already providing data to AI solutions but do we truly know how this data will be used, who it might be shared with, what additional data might be inferred from it, etc, and we need to know this now as it is currently, but also the future intentions of those we provide data to.   The DfE makes clear that school personal data shouldn’t be provided to generative AI solutions however what if attempts are made to pseudonymize the data;  What level of pseudonymization is appropriate?   And then there is the issue of inferred data;  I recently heard the suggestion that, if we fed all of our AI prompts back into an AI solution and asked it to provide a profile for the user, it would do a reasonable job of the task possibly identifying age, work sector and more.    AI and generative AI offer a massive convenience, efficiency and speed gain however the trade off is giving more data away;  Is this a fair trade off and one which we are consciously accepting?

Hallucinations

The issue of AI presenting made up information was another one which I found easy to recreate.   I note this is often referred to a “hallucination” however I am not keen on the term as it anthropomorphises the current generative AI solutions we have when I still believe the solutions we currently have are still narrow in terms of their focus and therefore more akin to Machine Learning, a subset of the broader AI technologies.   To demonstrate this issue I used a solution we have been working on which helps teachers generate parental reports, putting a list of teacher provided strengths and areas for improvement into readable sentences which teachers can then review and update.    We simply failed to provide the AI with any strengths or areas for improvement.   The AI however still went on to produce a report however in the absence of any teacher provided strengths or areas for improvement, it simply made them up.    For me this highlights the fact that AI solutions cannot be considered as a replacement for humans, but instead are a tool or assistant to humans.

Cyber

The issue of cyber security or Information security and AI is quite a significant one from a variety of different perspectives.    First there is the potential use of AI in attacks against organisations including schools.   The existence of criminally focussed generative AI tools has already been reported in WormGPT and FraudGPT.    Generative AI makes it easy to quickly create believable emails or usable code, independent of whether the purpose of the code is benign or if it for a phishing email or malware.    Additionally there is the issue of AI as a new attack surface which cyber criminals might seek to leverage.   This might be through the use of prompt injection to manipulate the outputs from AI solutions, possibly providing fake links or validating organisations or posts which are malicious or fictious.   Attacks could also involve poisoning of the AI model itself, such that the models behaviour and responses are modified to suit the malicious ends of an attacker.   And these are only a couple of implications in relation to AI and cyber security.

Conclusion

I think it is important to acknowledge that my outlook on AI in general and in education is a largely positive one, however I think it is important that we are realistic and accept the existence of a balance; a balance between the benefits and the risks and challenges, where to make use of the benefits we need to be at least aware and consider the possible balancing drawbacks and risks. This post therefore is about making sure we are aware of the risks, with my next post digging into a few further risks and challenges.

As Darren White put it in his presentation at the Embracing AI event, “Be bold but be responsible”