What does the future of cyber look like for schools?

The question posed by this post is not an easy one to answer. On one hand, if I present an optimistic viewpoint, I may be seen as downplaying the issues and challenges which impact schools. On the other hand, if I am pessimistic, I may be seen as portraying a no-win scenario, a scenario so bleak that it doesn't really bear thinking about. So, I am going to do my best to thread the needle and strike a balance between unrealistic optimism and nihilistic pessimism.

Increasing technology use

Schools are only going to make use of more and more technology as we seek to do more with less. We seek efficiencies, we seek to solve a workload challenge, we seek to continually improve, and in all of this we will continue to adopt more and more technology. And as we use more technology, our technology footprint, our data footprint, the number of integrations and systems in use, and our overall risk related to technology use will only increase. I find it difficult to see any other option. The risk I faced when I was younger, using a standalone PC with no internet connection and a limited number of pieces of software, was far lower than it is today, where I use multiple laptops and desktops, a mobile phone, a home assistant, a smart TV and other devices, complete with far more applications. The direction of travel is undeniable.

Increasing ambient cyber risk

Additionally, the ambient risk of cyber incidents will only continue to grow, whether the result of nation states, either directly or more commonly indirectly, whether due to the script kiddies in our schools or, much more likely, the result of cyber criminal efforts to generate profit. I have attended industry cyber conferences in consecutive years and this has been the message for some time, and it is likely to remain so. Where technology use is increasing and the potential criminal gains are increasing with it, it should be unsurprising that criminals will seek to grow and develop their technology-focussed attacks, and therefore the general risk continues to grow. Regulation and legislation help little here, as technology operates across national borders, so laws and penalties for misuse simply see criminal enterprises moving their efforts, resources or even themselves to nations which are more accepting of their activities, or which turn a blind eye. This is also paired with the increasing focus on individual privacy in technology solutions, even where this privacy also shields criminals such as those engaged in sharing child sexual abuse material. Sadly, communications technology is either secure or it is not; it can't be secure for some but not for others.

Is it all doom and gloom?

So, what are the positives in this story? What balances out this negative picture? It would be easy, at this point, to see only the negative, to feel hopeless in the face of ever-growing risk and ever-growing compliance requirements. But we need to identify the benefits of the technology: the connectedness, the convenience, the benefits to creativity and problem solving, and more. Today's technology allows me to do far more than I was capable of with my standalone DX2 66MHz PC from years gone by. I can communicate further and faster, create content which is more detailed, complex and creative, solve problems quicker and much more.

Maybe this is the issue: when discussing cyber we focus too much on the negatives and take our eyes off the positives. This can be very depressing indeed. But technology supports, encourages and enables so much of what we can now do, and as with most things in life there is a balance to be struck. Sadly, the counterbalance in this case is the cyber risk that is created. Considering that balance, we could easily reduce the risk simply by using less technology, but is this something we are really going to do?

So, what can we “reasonably” do?

This is the crux of the matter in how we manage the risk, assuming that using less technology isn't an option. The answer, for me, is to do the basic cyber security tasks: patching, creating and testing backups, managing and limiting user permissions, managing and limiting the data you store and how long you retain it, and developing user awareness of the risks. There may be a need to prioritise here, as schools may not have the resources to patch every server and every device; however, rather than focussing on the ideal and on what we haven't done or cannot do, we need to focus on what we have done. Each additional device or server patched is one less vulnerable device and therefore a net reduction in the overall risk. Every step, no matter how small, is a positive step.
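As a small illustration of the "test your backups" point, a minimal sketch along these lines could be run on a schedule to flag a silently failing backup job. The folder path and acceptable age are purely illustrative assumptions; any real backup product will have its own reporting, so treat this only as a sketch of the idea.

```python
from datetime import datetime, timedelta
from pathlib import Path

# Illustrative assumptions: nightly backups land as files in this folder.
# Adjust the path and the acceptable age to match your own setup.
BACKUP_DIR = Path("/mnt/backups/mis")
MAX_AGE = timedelta(hours=26)  # a little slack beyond a nightly schedule

def newest_backup_age(directory: Path) -> timedelta | None:
    """Return the age of the most recent file in the backup folder, or None if empty."""
    if not directory.is_dir():
        return None
    files = [p for p in directory.glob("*") if p.is_file()]
    if not files:
        return None
    newest = max(files, key=lambda p: p.stat().st_mtime)
    return datetime.now() - datetime.fromtimestamp(newest.stat().st_mtime)

if __name__ == "__main__":
    age = newest_backup_age(BACKUP_DIR)
    if age is None:
        print("ALERT: no backup files found - investigate the backup job")
    elif age > MAX_AGE:
        print(f"ALERT: newest backup is {age} old - check the backup job")
    else:
        print(f"OK: newest backup is {age} old")
```

Even a small check like this, alongside an occasional test restore, is one of those little steps that reduces the overall risk.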

It is also important to acknowledge that, no matter what you do, you will still suffer a cyber incident at some point in the future, so you need to prepare. Key to this can be running a desktop exercise to check for assumptions or issues in your response plan and to build familiarity with the plan. This should not be an IT-only exercise, as a cyber event is not an IT-only event; it impacts the whole school. As such, stakeholders from across the school, leadership, teaching and IT, should all be involved in the exercise and contributing their thoughts and ideas. The desktop exercise is a useful tool and far less invasive than going around unplugging servers to see what people do!

Conclusion

So back to my initial question: what does the future of cyber look like for schools? I think we will continue to do more and more with technology tools, being more creative, efficient and interconnected, but this will sadly be balanced by an increasing cyber risk. But it is a balance, and I think that is my answer: the future of cyber for schools looks like maintaining a balance. In terms of managing this balance, it will continue to be about doing all we reasonably can with the resources we have, continually reviewing our cyber security posture and approach, and taking the little incremental steps that reduce, or at least manage, the risk.

It's neither a bleak nor an overly positive picture, but I think it is a realistic and pragmatic one!

Note: I avoided the overly simplistic picture of a person in a hoodie as my cyber criminal in this post; as was pointed out to me recently, this stereotypical view and lazy analogy is seldom helpful, including in our discussions of cyber security or cyber crime!

AI and report writing

Workload is a growing concern for teachers in schools and therefore it is important that we seek solutions, with one of these solutions potentially being the use of AI. One area where AI might help is in the writing of the reports sent to parents. These reports, which are often sent on a termly or even half-termly basis, can take significant time to write, even more so where a teacher has a large number of classes. Now, before I go any further, let's be clear that what I am talking about is the use of AI to help teachers write the reports, not the use of AI to fully write the reports. AI is good at some things such as consistency, objectivity and basic writing; however, it lacks the humanistic side of things, the relationships, perceived effort, motivation, etc., which a teacher brings to the mix. As with a lot of applications of AI, I think the best results come from AI combined with a human, maximising the strengths of each.

Feeding AI data

The key to AI report content is the data you provide along with the prompts directed at the Large Language Model. From a data point of view we might simply lift basic data already gathered and stored in the school's Management Information System (MIS). This might include a score for effort, for homework, for behaviour, etc., plus a target and a current grade, where this information is already gathered. In my school we have experimented with this; however, the results feel a little bland given the relatively limited number of permutations of the grades, plus the limited number of grade options. Achieving more "personal" and individual reports requires more data, but we need to balance this against the workload it might generate in terms of teachers having to gather and enter that data.

The approach used by www.teachmateai.com seems to provide a suggestion here, in that its report-generating solution asks teachers to input strengths and weaknesses. Here the number of permutations jumps significantly, as the options entered are limited only by a teacher's imagination as to what constitutes a strength or a weakness. Equally, the data entry overhead needn't be that significant. I think back to teaching BTEC qualifications some years ago and charting the achievement of the various grade descriptors so the students could see their progress and the areas they still needed to work on. A teacher could simply take this data, or other data regarding the themes and topics covered, and enter it as the strengths and weaknesses, along with a couple of more individual comments per student, and the resultant reports would appear reasonably personal to each student.
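To make this concrete, here is a minimal sketch of how a prompt might be assembled from that kind of teacher-entered data. The record structure, field names and wording are all hypothetical assumptions of mine, not how TeachMate or any other product actually works.

```python
# Minimal, hypothetical sketch: turning teacher-entered strengths, weaknesses
# and a short note into a report-writing prompt for a language model.
student = {
    "subject": "History",
    "effort": "good",
    "current_grade": "B",
    "target_grade": "A",
    "strengths": ["source analysis", "contributes well to class discussion"],
    "weaknesses": ["extended writing under timed conditions"],
    "teacher_note": "worked particularly hard on the coursework unit",
}

def build_report_prompt(record: dict) -> str:
    """Assemble anonymised, structured data into a single prompt string."""
    return (
        "Write a short, warm but professional school report paragraph for a "
        f"student in {record['subject']}. Refer to the student only as [NAME]. "
        f"Effort: {record['effort']}. Current grade: {record['current_grade']}; "
        f"target grade: {record['target_grade']}. "
        f"Strengths: {', '.join(record['strengths'])}. "
        f"Areas to develop: {', '.join(record['weaknesses'])}. "
        f"Teacher note: {record['teacher_note']}."
    )

print(build_report_prompt(student))
```

The draft returned by whichever model the school chooses would then be reviewed and edited by the class teacher before it goes anywhere near a parent.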

Data Protection

The DfE has identified the risk associated with the creators of AI solutions sucking up huge amounts of data, so data protection is something we need to consider in this process. The DfE's own Generative AI in education (March 2023) guidance, for example, states:

“Generative AI stores and learns from data inputted. To ensure privacy of individuals, personal and sensitive data should not be entered into generative AI tools. Any data entered should not be identifiable and should be considered released to the internet”

So how do we generate student reports without entering personal data? I think the key here is ensuring the data provided isn't linked to an identifiable individual. This aligns with GDPR, where personal data relates to an identifiable living individual. So if we anonymise the data, say by removing the name of the student before providing the data to an AI, then we have reduced the risk, given the actual student is not identifiable. We can then add the correct name back in when we receive the response, the report, from the AI. This, for me, feels like the best approach; alternatively, it could be argued that providing a first name only, where first names are often repeated, may also mean that students are not individually identifiable and hence the risk is mitigated. Either way, it is for schools to consider the risk and make their decision accordingly, making sure to document it.
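As a sketch of that anonymise-then-reinsert flow, something like the following keeps the name out of the prompt. Note that call_llm here is just a stand-in for whichever model or API a school actually uses, not a real service call.

```python
import re

PLACEHOLDER = "[NAME]"

def anonymise(text: str, student_name: str) -> str:
    """Swap the student's name for a neutral placeholder before anything is sent."""
    return re.sub(re.escape(student_name), PLACEHOLDER, text, flags=re.IGNORECASE)

def reinsert_name(report: str, student_name: str) -> str:
    """Put the real name back into the returned draft report."""
    return report.replace(PLACEHOLDER, student_name)

def call_llm(prompt: str) -> str:
    """Stand-in for the school's chosen model/API; returns a canned draft here."""
    return f"In History, {PLACEHOLDER} has shown good engagement this term."

name = "Sam Jones"
prompt = anonymise(f"Write a report for {name}, who shows good engagement in History.", name)
draft = call_llm(prompt)             # no student name leaves the school at this step
report = reinsert_name(draft, name)  # the real name is only added back locally
print(report)
```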

Example

I suppose the key question, where AI is helping with parental reports, is whether they read well enough to be acceptable to parents, so to that end I would like to provide an example based on data for a fictitious student:

Sam demonstrates a solid performance in his History class. In lessons, he displays reasonably good engagement, and consistently produces work of a satisfactory quality for his grade range. Sam is thorough in completing his tasks and has great ideas. However, he is reluctant to get involved in some activities, which limits the extent of his engagement.

Would this pass your school's standards? And remember, it would be expected that the above would be read and adjusted by the relevant class teacher before going out.

Conclusion

For me, the use of AI to help with parental report writing seems like an easy win. If it reduces the amount of time required by teachers to create reports, allowing them to focus on other things, while still providing an appropriate and informative report for parents, then this is a good thing.

AI and AI and AI

"Is AI a danger to education?" is a question I have recently explored, hopefully presenting a balanced viewpoint. This question, however, has an issue in that it asks about AI as if AI were a simple item such as a hammer or a screwdriver. The term AI covers a broad range of solutions, and as soon as you look at the breadth of solutions the question becomes difficult to answer, and in need of more explanation and context. In effect, the question is akin to asking if vehicles are bad for the environment without defining vehicles: is a bicycle, for example, bad for the environment?

[Narrow] AI

Although some may associate recent discussions of AI with ChatGPT and Bard, AI solutions have been around for a while, with most of us using some of them regularly. As I write this, my word processor highlights spelling and grammar errors and makes suggestions for corrections. The other day when using Amazon, I browsed through the list of "recommended for you" items which the platform had identified for me based on my browsing and previous purchases. I have used Google this morning to search for some content, and used Google Maps to identify the likely travel time for an event I am attending in the week ahead. Also, when I sat down at my computer this morning, I used biometrics to sign in, and used functionality in MS Teams to blur my background during a couple of calls. These are all examples of AI. Are we worried about these uses? No, not really, as we have been using them for a while now and they are part of normal life. I do, however, note that, as with most things, there are some risks and drawbacks, but I will leave that for a possible future post.

The examples I give above are all very narrow-focus AI solutions. The AI has been designed for a very specific purpose within a very narrow domain, such as correcting spelling and grammar, estimating travel time, or identifying the subject on a Teams call and blurring everything which isn't the subject. The benefits are therefore limited to the specific purpose of the AI, as are the drawbacks and risks. But it is still AI.

[Generative] AI

Large language model development equally isn't new. We might consider the ELIZA chatbot, dating back to 1966, as an early ancestor, or IBM's Watson from 2011. Either way, such systems have been around in one form or another for some time; however, ChatGPT, in my view, was a major step forward, both in its capabilities and in being freely available for use. The key difference between narrow AI and generative AI is that generative AI can be used for more general purposes. You could use ChatGPT to produce a summary of a piece of text, to translate a piece of text, to create some webpage HTML, to generate a marketing campaign and for many other purposes across different domains, with the only common factor being that it produces text output from text-based prompts. DALL-E and Midjourney do the same, taking text prompts but producing images, with similar solutions available for audio, video, programming code and much more.

Generative AI, as it is now, doesn't understand the outputs it produces. It doesn't understand the context of what it produces and, when it doesn't know the answer, it may simply make one up or present incorrect information. It has its drawbacks and it is still relatively narrow, in that it is limited to taking text-based prompts and responding based on the data it has been trained with. It may be considered more "intelligent" than the narrow-focus AI solutions mentioned above, but it is well short of human-level intelligence, although it will outperform human intelligence in some areas. It is more akin to dog-like intelligence in its limited ability to perform simple repeated actions on request: taking a prompt, wading through the materials it has been trained on, and providing an output, be this text, an image, a video, code, etc.

A [General] I

So far, we have looked at AI as it exists now, in narrow-focussed AI and generative AI; however, in the future we will likely have AI solutions which are closer to human intelligence and can be used more generally across domains and purposes. This conjures up images of Commander Data from Star Trek, R2-D2 from Star Wars, HAL from 2001 and the Terminator. In each case the AI solutions are portrayed as able to "think" to some extent, making their own decisions and controlling their own actions. The imagery alone highlights the perceived challenges in relation to Artificial General Intelligence (AGI) and the tendency to view it as good or potentially evil. How far into the future we will need to look for AGI is unclear, with some thinking the accelerating pace of AI means it is sooner than we would like, while others believe it is further off. My sense is that AGI is still some time away: we don't truly understand how our own human intelligence works, and therefore, if we assume AI solutions are largely modelled on us as humans, it is unlikely we can create an intelligence to match our own general intelligence. Others posit that as we create more complex AI solutions, these solutions will help in improving AI, which would then allow it to surpass human capabilities and even create super-intelligent AI solutions. Cue the Terminator and Skynet. Now again, I suspect that when we get to the generation of AGI, things will not be as simple as they seem, with all AGIs not being equal. I suspect the "general" may see some AGIs designed to operate generally within a given domain, such as health and medicine AGIs, or education AGIs, etc.

Conclusion

The term artificial intelligence covers a wide range of solutions, with my broad discussion of narrow AI, generative AI and AGI covering only three broad categories among others. It is therefore difficult to discuss AI in its totality, certainly not with much certainty. Maybe we need to be a little more careful in our discussions in defining the types of AI we are referring to, and this goes for my own writing as well, where I have equally been discussing AI in its most general form.

Despite this, my viewpoint remains the same: AI solutions are here to stay and, as discussed earlier, have actually been around for quite a while. We need to accept this and seek to make the best of the situation, considering carefully how and when to use AI, including generative AI, as well as considering the risks and drawbacks. As to AGI, and the eventual takeover of the world by our AI overlords, I suspect human intelligence will doom the world before this happens. I also suspect AI development for the foreseeable future will see AI solutions continue to be narrower and short of the near-human intelligence of AGI; as such, we definitely need to consider the implications, risks and dangers of using such AI solutions, but we also need to consider the positive potential.

AI: A threat to the education status quo?

My original blog post on AI was meant to be a single post; however, the more I scribbled thoughts down, the more I realised there was to consider. And so, this is the fourth in my series of posts on AI. Having looked at whether AI is a threat to education in post one, at some benefits of AI in post two, and then at some of the risks and challenges around AI in post three, this post will continue to explore some of the ways in which AI might be considered a threat to the formal education system as it currently exists across the world.

What are we assessing?

In the last post I started considering how AI challenges the current education system, looking at fears regarding the use of AI-based solutions, like ChatGPT, by students to "cheat". This concept of cheating is based on the current education system, where students submit work to teachers as their own, to be used by the teachers to assess and confirm understanding. So the use of AI to create work which the student presents as their own seems like cheating and dishonesty. But what if the student only uses the AI as a starting point, modifying and refining the content before submission; is this OK? What degree of refining is enough for the work to be considered as belonging to the student, and what degree is not enough and therefore represents cheating? When is AI a tool, fairly used by a student in proving their understanding and learning?

I think it is at this point we need to ask why we ask students to complete coursework. For me, it is a way to check their understanding and learning of taught content. It is one method but not the only one, although it is the method education has generally accepted as the current proxy for student understanding, whether it be GCSE coursework, A-Level work or a degree dissertation. The uncomfortable truth is that this easy and scalable method of assessment isn't as appropriate in an age of AI. I will admit, however, that I am not sure what the alternative is, where such an alternative needs to be fair and also scalable to students the world over. When thinking of its scalability I always think: what if life were found on Mars and we had to scale our GCSE coursework and exams to encompass these new lifeforms? It would simply be a case of translating the requirements, sticking them on a rocket and sending them to Mars. As I said, the current setup is very scalable.

And then there is the question: if my students can use the tools available to them, including AI, to reach an acceptable assessable outcome, is this not good enough? If the assessments we create make it easy for a student to achieve, simply through the use of AI, without having any understanding of the topic or domain they are being assessed on, then maybe we need to rethink the assessments we are setting in the first place.

Social Contact

Social contact is another area where there are various concerns around AI. In using AI for our studies, our work, and even, through virtual friends, for companionship, we may see ourselves interacting with human beings less and less, where social contact is a key part of what it means to be human. For education, if students find themselves learning through personalised AI, learning in their own time, what is the point of school? And if there is no school, with students learning where and when they like, where will students learn social skills and the skills needed to live with and interact with other humans? Will we be drawn ever further into our screens and our devices? Looking around at people on the train I am sitting on as I write this, I don't feel we are that far from this scenario already. So, what is the solution? For me, in education we need to make sure we achieve a balance between technology and humanity. If students are to do more learning via screens and personalised AI teachers, and may converse with their virtual AI friends, we also need to find opportunities for social interaction, for play, for fun, but also for arguments and debates; simply, more opportunities for socialisation. And maybe this is the future for schools and colleges: that these are the places for socialisation and developing social skills.

Conclusion

AI is here now and here to stay, and as a result of it we need to ask fundamental questions about education as it currently stands.   What are we trying to achieve?   Is the factory model of batches of students taught the same programme still appropriate?     How do we assess learning in a world of AI and actually what should we be assessing?

AI will keep progressing, and if we don't ask questions of our current educational system ourselves, AI will be the threat the Times article suggested it would be, as AI will force the questions upon us. And if education has changed little in over 100 years, I can only imagine how disruptive sudden forced changes may be. But if we are proactive, AI may also be an opportunity: an opportunity to challenge and reassess the current model of education and to find something more suited to the years ahead, years which will invariably involve more and more AI solutions.

Dangers of AI in education

I am now onto the third post in my AI series following the Times' "AI is clear and present danger to education" article. In post one I provided some general thoughts (see here), while in post two I focused on some of the potential positives associated with AI (see here); now I would like to give some thought to the potential negatives. I may not cover all the issues identified in the article, but I hope to address the key issues as I see them.

The need for guardrails around AI

One of the challenges with technology innovation is the speed with which it progresses. This speed, driven by companies' wish to innovate, is so quick that the potential implications often aren't fully explored and considered. Did we know about the potential for social media to be used to promote fake news or influence political viewpoints, for example? From a technology company's point of view the resultant consequences may be seen as collateral damage in the bid to innovate and progress, whereas others may see this as more a case of companies seeking profit at any cost. One look at the current situation with social media shows how we can end up with negative consequences which we may wish we could reverse. But sadly, once the genie is out of the bottle it is difficult or near impossible to put back, and it seems clear from social media that companies' ability and will to police their own actions is limited. We do, however, need to stop and remember the positives of social media, such as the ability to share information and news at a local level in real time, connectedness to friends and family irrespective of geographic limitations, leisure and entertainment value, and a number of other benefits.

So, with a negative focus, the concern here in relation to the need for AI "guardrails" sounds reasonably well founded; however, who will provide these guardrails, and if it is government, for example, won't this simply result in tech companies moving to those countries with fewer guardrails in place? Companies are unlikely to want to slow down by adhering to government guardrails where this may result in them ceding advantage to their competitors. And in a connected world it is all the more difficult to apply local restrictions, especially as it is often so easy for end users to simply bypass them. Also, if it is government, are governments necessarily up to date, skilled and impartial enough to make the right decisions? There is also the issue of the speed with which legislation and "guardrails" can be created, as the related political processes are slow, especially when compared with the advancement of technology, so by the time any laws are near to being passed, the issues they seek to address may already have evolved into something new. To be honest, the discussion of guardrails goes beyond education and applies to all sectors which AI will impact, which is likely to be most if not all sectors of business, public services, charities, etc.

Cheating

There has been lots of discussion of how students might make use of AI solutions to cheat, with risks to the validity of coursework being particularly notable. There is clearly a threat here if we continue to rely on students submitting coursework which they have developed on their own over a period of time. How do we know it is truly the student's own work? The only answer I can see is teacher professional judgement and questioning of their students, but this approach isn't scalable. How can we ensure that teachers across different schools and countries question students in the same way, and make the same efforts to confirm the origin of student work? The moderation and standardisation processes used by exam boards to check teacher marking is consistent across schools won't work here. We will also need to wrestle with the question of what it means for submitted work to be the student's "own" and "original" work. Every year students submit assessments, more and more gets written online, and now AI adds to the mix; with this growing wealth of text, images, etc., the risk of copying, whether purposeful or accidental, continues to increase. The recent court cases involving Ed Sheeran are, for me, an indication of this. When writing and creating were limited to the few, plagiarism was easy to deal with, but in a world where creativity is "democratised", as Dan Fitzpatrick has suggested will occur through the use of AI, things are not so simple.

Conclusion

The motives of tech companies generating AI solutions may not always be in the best interests of users. They are, after all, seeking to make money, and in the iterate-and-improve model there will be unintended consequences. Yet the involvement of government to moderate and manage this innovation isn't without its own consequences, including where some governments' own motives may be questionable.

In looking at education, the scalable coursework assessment model has worked for a long time; however, AI now casts it into question. But was its adoption about being the right way to measure student learning and understanding, or simply the easiest method of doing this reliably at scale?

Maybe the key reason for AI being a threat is the fact that, if we accept it is unavoidable, it requires us to question and critique the approaches we have relied on for years, for decades and even for centuries.