Digital Standards in schools: Consultation

It was with some interest that I read the DfE's consultation on making some of its digital standards compulsory by 2030. I think the digital standards are a positive step forward, providing guidance to help schools develop processes and procedures around technology use and to guide technology decision making; however, equally they aren't without some limitations.

You can see and respond to the consultation here.  

It was on a Teams call that I first heard of the consultation, which looks at making six of the digital standards compulsory. So, my first act was to try to guess which standards would be involved, with me going for leadership and governance, cyber security, filtering and monitoring, and broadband. These felt like the right ones, as technology can be expensive; even where the expense isn't in hardware and software, it is still expensive in training and staff development, especially where the wrong technology decisions are made. As such it seems only logical that leadership and governance would be covered: you need a direction, a strategy, before you look to make any other decisions. Next were cyber security and filtering and monitoring, as they are both areas widely discussed in relation to education and, like leadership and governance, are very much about leadership, processes, procedures, policies and risk management, all of which can be explored and examined at minimal cost. My next selection was broadband, as this is something which schools can easily assess and act on as soon as any existing contract is up.

At that point I was a little stuck for the remaining two standards, which, as I found out, would be Wi-Fi and switching. Now I totally get why these would be selected, as they are the basic infrastructure components of technology use. We can have plans for fancy AI software or plans related to the most advanced end point devices, but without reliable and robust infrastructure, the network switching and Wi-Fi provision, they are of little use. The challenge here, however, is one of cost, both in terms of the equipment and in terms of the resources to set it up and maintain it post-install. Now some money has been promised to support schools in this area, which I see as a positive step, however I don't think there is truly an appreciation of the state of IT infrastructure in schools across England, so any funding allocation can only really be a guess. Whether that guess stands up as enough is yet to be seen, although it is important to note that any investment will move things forward, so it is far better than nothing.

There is another challenge or concern I have, and it relates to funding. I have seen in the past where funding gets allocated to support technology in schools, however technology investment is not a "one and done". Once you invest, and once teachers and students start using technology in lessons and around school, you will need to continue to invest just to maintain the status quo, never mind to advance. This is because Wi-Fi access points and switches will need to be replaced when they reach end of life, as will end point devices and the other components which together make up the IT in a school. Using end-of-life equipment may introduce cyber security or reliability risks which in turn could impact on technology use in lessons and on students. It is notable that the DfE standards do refer to refresh cycles, so I wonder whether such refresh cycles will factor into future funding plans.

Another challenge I see in the standards is that they are trying to guide schools where schools exist in very different contexts. We have large Multi-Academy Trusts (MATs) with strong centralised IT functions, small individual primary schools or large secondary schools with more limited IT resources, and everything in between and more. It is therefore difficult for the standards to be uniformly applied to all, which would need to be the case if they are to be compulsory, rather than allowing them to be contextualised and interpreted where they are simply guidance. There is also the question of who actually will be checking that schools have complied; I don't think OFSTED would be able to check this, so who would?

Conclusion

I think some schools will have difficulty meeting the digital standards, especially if there is an absence of funding. That said, sometimes what matters is what is measured, and maybe requiring schools to adhere to the digital standards will propel technology up schools' lists of priorities.

I very much look forward to seeing the results of this consultation, although I suspect funding will be the key, particularly around the Wi-Fi and switching standards. If so, maybe the easy solution is simply to apply the other four standards initially, and maybe this could even be done before 2030?

AI and the digital divides

The digital divides are something I have been discussing for a while. They generally aren't anything new, albeit I always use the plural rather than the singular divide. This is due to my belief that it isn't a simple single divide but multiple inter-related divides, including access to hardware, high speed internet, support, and more. In the discussion of AI I have been worried about it adding another divide, but speaking recently at an EdExec Live event got me thinking a bit more broadly.

AI closing divides

Maybe AI might close divides rather than open them. If we consider teaching staff, maybe AI in the hands of teachers will result in teachers generally being able to be more creative and engaging with lesson content. So rather than only some students benefitting from creative teachers, those who are artistic, musically creative, etc, with the skills to turn this into lesson content, AI will put these capabilities into more teachers' hands. You can create something artistic without necessarily being artistic yourself, as long as you have the ideas and can outline them to generative AI. I think back to teaching during an OFSTED inspection many years ago, when I did a lesson on relative vs. absolute cell referencing in Excel using the game of battleships to get the concept across. I had the skills to make this engaging with video content and more, but I would suggest that at the time, some 20 years ago, I would have been in the minority. Fast forward to today and video and image content can easily be created using AI, putting the potential to create interesting, engaging content in the hands of more teachers than ever before.
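As an aside, for anyone unfamiliar with the concept from that lesson, here is a minimal, purely illustrative Python sketch of the idea: when a formula is copied, relative references shift with the destination cell while absolute references (the ones marked with $) stay pinned, a bit like a fixed square on a battleships grid. The function and its behaviour are my own simplification for illustration, not any spreadsheet's actual API.

```python
# Illustrative only: how a spreadsheet treats relative vs. absolute references
# when a formula is copied from one cell to another.

def shift_reference(ref: str, row_offset: int, col_offset: int) -> str:
    """Shift a single-cell reference like 'B2' or '$B$2' by the copy offset."""
    col_abs = ref.startswith("$")
    body = ref.lstrip("$")
    col_letter = body[0]
    row_abs = "$" in body[1:]
    row_number = int(body[1:].lstrip("$"))

    # Absolute parts (marked with $) stay fixed; relative parts move with the copy.
    new_col = col_letter if col_abs else chr(ord(col_letter) + col_offset)
    new_row = row_number if row_abs else row_number + row_offset
    return f"{'$' if col_abs else ''}{new_col}{'$' if row_abs else ''}{new_row}"

# Copying a formula one column right and one row down:
print(shift_reference("B2", 1, 1))    # -> C3   (relative: both parts shift)
print(shift_reference("$B$2", 1, 1))  # -> $B$2 (absolute: pinned in place)
```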

We also need to look at student work, such as coursework. Those students who struggle to get started, or who need support finessing and checking their work, suddenly have AI tools available to help. Those students taught in English where English is their second or maybe third language now have tools to translate content. Students with SEND also have AI tools which can help, and this help basically amounts to reducing or even removing the divides which previously existed. In one discussion after my session at the EdExec event we were discussing coursework and marking, with the suggestion that the gap between the best and the worst work will be narrowed through AI. This may lead to a need to refine marking boundaries, to refine expectations or even to refine the assessment methodologies as a whole, but whichever way you look at it, it is a reduction in some divides.

AI growing the divide

The likely big issue is one of socioeconomic divide and access to AI tools and the required devices, infrastructure and support. This will be uneven. But I wonder if it is for schools to solve socioeconomic issues which stretch way beyond schools, into access to health support, opportunities beyond school, positive family cultures and more. We do want to seek to address this, but I am not sure schools have it within their power.

What schools do have in their power is to address the divide which may grow between those students at schools engaging with AI and those at schools seeking to prohibit and ban AI use. If we simply accept that AI is here, has been for a while, and that we are all using it, and especially that students are using it, then maybe a ban doesn't make sense. Maybe we then find ourselves seeking to work with students and teach them about AI and about its ethical and safe use.

Elephant in the room

And as to the "cheating" narrative, is a pen and paper cheating over having to explain a concept in person? I would suggest that for an introvert a debate or discussion on a concept would put them at a disadvantage, and that providing pen and paper shapes thinking and the output: it encourages slower, linear thinking and a type of structure not quite as present in a discussion or debate. Taking this idea further, what about the students using a laptop or computer as part of their exam concessions; is this cheating? Isn't it just about reducing the divide between them and other students? So why is AI use cheating if it reduces divides? Maybe we need to start asking students about why and how they used AI, what the benefits were, etc. And definitely, let's not ask them to reference AI tools, as I don't see the point in this; they don't reference which search engine they used, yet this shaped the resources presented to them. AI is a tool, it is here, so let's get students using it, while teaching them about its use and getting them to use it safely and ethically. Yes, some students may try to use it to cheat, but let's treat them as the exception rather than the rule, and develop plans for how we deal with this. If we don't believe the work is the student's, that it represents what they have learned, then let's just ask them to present it or to explain it.

Conclusion

AI is a tool, it is here, and it has the potential to narrow some divides, as well as the potential to widen others. I doubt there will be a perfect solution, so we are going to need to navigate our way through, considering benefit and risk and making the best reasonable decisions possible. If we can narrow the key divides, where schools have the ability to address such divides, while avoiding widening others, then this is likely the best we can achieve. Maybe this will require us to think carefully about the scope of education and schools and what they can reasonably be expected to impact on, and to start there.

EdExec Live, Herts

I recently had the opportunity to contribute to the EdExec Live event in Hertfordshire. Now I have contributed to EdExec Live events in the past, but this is the first time I have done so in Hertfordshire. I need to admit, as is all too common for me, travel to the event came complete with travel disasters: I got easily to London and across London, but subsequent trains were cancelled and delayed, leading to an Uber and a total travel time of just over six hours. But enough of my usual travel woes.

I think the first thing of note is my belief that education, teaching and learning in schools, takes a village. It requires various people doing various roles. This includes teachers in the classroom, teaching SLT members, IT staff supporting the IT setup, as well as school business leaders and more. Now I am lucky, as a teacher of many years, to contribute to the teaching side of things, and as an ANME ambassador to contribute to the IT side of things, however the EdExec events allow me to contribute to the school business leader side of things. As I have said many times before, collaboration and sharing are so important, or as David Weinberger put it: "the smartest person in the room, is the room". As such it is so important that we share widely, including sharing beyond the silos associated with specific roles. I am therefore keen to share and be involved in discussion with education professionals across the various roles which work towards ensuring schools operate and students succeed.

The conference was opened by Stephen Morales from the Institute of School Business Leadership (ISBL), and so much of what he said aligned with my thinking. Firstly, he mentioned the implications and impact of geopolitics on education. This was something I had heard only a few weeks earlier at an information security conference, where it was clear the information security and cyber security of organisations, including schools, was being impacted by geopolitical issues. Stephen also mentioned the privilege divide, which refers to socioeconomic divides and in turn has a direct impact on technology divides. We clearly need to reduce divides where possible, building equity, however sometimes the easy "solutions" have unintended consequences in this complex world, so we need to make sure our decisions are measured and considered.

Stephen referred to the need for collaboration and also to the need to consider technology. Both of these are things I believe strongly in, and I believe there is a relationship between the two. Given how quickly tech changes and advances, we cannot stay up to date on our own, so the best solution we have continues to be collective action: sharing, discussing and using the wealth of experience, thought and skills of the education sector as a whole. He also referred to structures, processes, people and technology, and I think this is key, considering not just the technology but the people using it and the processes it is being used for. This immediately got me thinking about teaching and the TPACK model.

He also mentioned AI, which was the focus of the presentation I was giving immediately following his keynote. You can access my slides here. Some of my key points from the session were that AI is here now and students are definitely using it, as are many staff. We can't put that genie back in the bottle. As such we need to look at how we can harness AI, and that's not just generative AI, but includes the various other branches of AI. We need to look at its use in teaching, helping teachers prepare content and mark work; in learning, putting AI in the hands of students; and also in the administrative aspects of schools, both in the classroom and in the wider school. I made the point that this isn't without risk, which was apt when the next session I attended, led brilliantly by Laura Williams, was specifically about risk management. If we want to benefit from the potential of AI, we will need to deal with the risks. If we don't allow use of AI, if we ban it, we don't need to deal with the risks of AI usage, although there are risks resulting from this, from not teaching about and not allowing AI use. It's the balance issue I often talk about.

My session talked about the need for an AI strategy which aligns with the technology strategy, which in turn aligns with the school strategy. They are inter-related. I also mentioned the need for appropriate foundations: we can't look at AI without good infrastructure, devices, support and training. An AI strategy, and a tech strategy, as well as a school strategy, has to be built on solid foundations. Chasing the next shiny AI tool without the fundamentals in place just won't work.

In terms of risks, I mentioned bias and inaccuracies, however I also mentioned that humans are not short of these challenges either, albeit we don't always appreciate them. Data protection continues to be an issue, however data protection in the world of AI is often simply good data protection as it relates to any online or technology service. Obviously automated decision making needs a little more consideration, however consider how many of the online content platforms schools have been using for years, which recommend and direct students to learning content, aren't fully transparent as to how their algorithms, their AI, make decisions.

Thinking back to Stephen's presentation, he mentioned fears about AI replacing humans. For me, as for Stephen, it is about AI and humans working together, rather than one replacing the other.

The conference was yet another opportunity to share my thoughts and to engage with others as to their thoughts, and some of the discussions I had over lunch were very interesting indeed.    Schools are clearly at different points, and with different contexts, and this for me is fine, however if we wish to move forward I continue to believe in the need to work collaboratively and to share.    I came away from the event with new thoughts and ideas, and I hope those who attended my session came away the same.

TEISS, Infosec summit

Last week saw me attend the TEISS European Information Security Summit down in London. This is one of my annual journeys outside of the education bubble to look at cyber security, resilience and health in the broader industry and enterprise context. I feel it is always important to seek diversity and to avoid the issues associated with existing purely within a silo, so stepping outside of my day-to-day on a regular-ish basis is a must.

More of the same, but greater volumes and speed.

If I were to summarise one of my main takeaways from the event, it would be that a lot of what I heard was similar to what I had heard a year before. Cybercrime continues to grow in terms of both the threat and its potential impact. The specific threats, such as ransomware or social engineering, haven't really changed, but the frequency and speed of attacks have increased. One particular slide looked at nation state actors, showing how some were now down to a breakout time, from compromise to exfiltration, of under six minutes. Now it isn't likely that schools will need to face nation state actors, albeit we could end up as collateral damage, however this increase in speed for nation state actors is likely mirrored for other threat actors, including those schools may actually face. Related to this, one presenter showed screenshots of AI-powered cybercrime tools which are now available, highlighting that AI, and in particular Large Language Models, not only have the potential to increase the productivity and efficiency of users, they also have the potential to increase the productivity and efficiency of criminals. I was aware of FraudGPT and WormGPT so this wasn't new to me, however the subsequent slide showed an automation and orchestration platform which criminals could use. The combination of AI-powered creation tools alongside automation tools gives me concern, as it would clearly give criminals the ability to broadly launch convincing attacks where any compromise can be quickly leveraged before defenders have an opportunity to react. Think Power Automate for criminals: lots more, better phishing emails, where user errors are quickly capitalised on to deliver malware, extract data or propagate further attacks.

Geo-political instability

Discussion of geo-political instability and its impact on information security was very interesting, especially considering the room was full of cyber security professionals charged with protecting companies and data, including companies responsible for critical national infrastructure. From a school point of view this might seem outside of our wheelhouse, however on reflection I wonder about our need to educate students in relation to this. We have already seen that modern warfare now involves a cyber element, with the cyber element often preceding any physical engagement. Do students need to be aware of the implications of globally connected digital services in a world of increasing conflict along national and geographic borders? How might these issues directly impact us, and what about where we are indirectly impacted or where the impact is subtle manipulation via social media? I suspect there is a whole post possible on this alone.

User awareness and training

I spent a significant part of the conference watching sessions within the Culture and Education stream. There was some good discussion in relation to culture and the testing of cyber resilience, particularly the use of phishing awareness testing. These tests are very good at giving us a snapshot, or even a longitudinal view, of our general cyber resilience, however they aren't as useful at an individual user level. Presenting a staff member or student with some additional training material to work through after they have fallen for a phishing test doesn't find them at their best in terms of their potential to learn. One presenter offered an alternative view, suggesting that all users mean to do the right thing, so we should be asking what it is that makes them do the wrong thing, rather than focusing on how we change individuals' behaviour. For me this very often comes down to being time poor, and therefore being in a rush or suffering workload issues, so I am not sure quite what we can do about this. In my view the world and our roles only see us adding more tasks and activities, and very seldom do we take things away, therefore it is no wonder that we are time poor, and no wonder that in our hurry we fall for social engineering and for phishing emails. That said, it is definitely worth the conversation as to what the barriers to good cyber behaviours are, and then looking to see if there is any way to address them. I suspect we won't solve the issue, but I bet there will be some possible quick wins.

Recovery over prevention

One presenter made a very interesting observation that we continue to spend too much time focussed on prevention over looking at how we might respond to and recover from an incident. I can immediately see why we might focus on prevention: if a cyber incident doesn't happen, then things are all good. The reality, however, is that cyber incidents are almost guaranteed. And if we accept that an incident is definitely going to happen at some point in the future, then we are better spending a little less time focussed on prevention and a little more on considering what we will do when an incident does happen. This can easily be done through desktop exercises, and doing so is always preferable to having to work it out when the world is on fire in the midst of a real cyber incident. And to that end, I actually delivered a little exercise only the other day.

People, Processes and Technology

One of the biggest takeaways from the event was the mention of People, Processes and Technology (PPT for short, and not the Microsoft app). Sadly, all too often we focus on technology. How can we technically keep data secure? How can IT deliver training to those clicking a phishing link? What we need to do more of is consider the people involved and their impact, as well as the processes. If we consider people, processes and technology, we will likely have the best opportunity of keeping things secure and safe. And I note that considering people, processes and technology isn't just an infosec thing; it can equally be applied to school technology strategy, to use of technology in classrooms, and much more.

I suspect as we continue to make use of more technology and as technology further pervades every aspect of our lives, we need to increasingly seek to look to the human contribution and to human behaviour, rather than getting so focussed on the tech.

AI: Time to give up pen and paper?

I have been reading The Experience Machine by Andy Clark off and on for quite a few months; the other day, on a trip down to London for an InfoSec event, I once again had an opportunity to do some reading on the train. It wasn't long before I was reading Clark's thoughts on the extended mind, and it got me thinking about the current discussion in relation to AI use in schools, and in particular its use by students for "cheating".

Clark talks about how humans have sought to extend their capabilities through the use of tools, including both basic tools like the pencil and technological tools like devices and apps. He makes the point that rather than just being things which are used, tools fundamentally change our thinking processes, our minds. We have developed as a species through our ability to use tools and to adjust our thinking processes around them, in order to do more than we could before.

Taking this into the world of education, I have repeatedly talked about the JCQ guidance in relation to Non-Examined Assessments (NEAs), where it talks about making sure the work is the student's own. Well, if we take Clark's comments, then the output produced by a student using the tools of pen and paper was shaped not just by the student but by the pen and paper they used. The pen and paper shaped thinking processes, ordering and more, influencing what the student produced. Maybe the sheet of paper will influence how much the student produces? Maybe the difficulty of erasing content written in pen will influence the student's decision making as to whether to change or remove sections they have written. So is it still the student's own work?

Consider a different tool, this time a laptop in the hands of a student with exam concessions which allow them to type rather than write. Again, I would agree with Clark that the tool, rather than being just a tool, changes the thinking processes. With a laptop a student can more easily shift and reform their thoughts and ideas, moving paragraphs around and erasing or adding content as needed. This means the processes related to ordering content, which might be needed when using pen and paper, are no longer as important. A student with a laptop might be more willing to take risks and explore their writing, knowing they can easily change, add or edit, whereas a student with pen and paper may be a little more risk averse, and therefore more creatively limited.

So now let me take a leap, and I suspect some will see it as a leap too far. What if the tool, rather than pen and paper, is actually a generative AI solution? The interactions with the AI, assuming the student has been taught to use AI and has developed the appropriate skills, will shape the student's thinking processes. Maybe the broad training data of the AI will result in the student considering aspects of the topic they may not otherwise have explored. Maybe their language will change, making greater use of more academic language as a result of the academic content which makes up the AI's training data. Maybe their language will be a bit more flowery and expressive than they might write without an AI tool. As with the laptop, AI may make the student even more creative and less risk averse, knowing they can easily edit, get feedback and make iterative improvements. Is this any less the student's own work?

I need to be clear here that I am not suggesting we just jump on the AI bandwagon without thinking. We definitely need to consider the risks and challenges and seek to find a path towards the ethical, responsible and safe use of AI in schools. But we also need to acknowledge that we now use many tools which we would not give up. We would not give up the pen and pencil, the calculator, email and much more, and each of these is more than just a tool for use. As we have become accustomed to using them, they have changed how we as humans think and operate. These tools have changed how our minds operate. AI will do the same, and we need to think about it, but if our reason for not using AI is that it will change us, that it is cheating, or that it produces things which are not our own work or not truly representative of our real selves, then does this mean we need to give up all other tools, including pen and paper and the written word?

Apple, governments, privacy and public good

Apple recently announced they are no longer providing Advanced Data Protection (ADP) for UK-based customers, in response to a request by the UK government. ADP basically amounts to end-to-end encryption, meaning only the user themselves can decrypt and access their data. The press is largely carrying headlines focused on the negative impact of this decision on user privacy, either deriding Apple for reversing their long-established position in relation to the privacy of user data, or deriding the UK government for pushing Apple into this position. As always, reporting tends to be very binary, but the reality is things are a little more nuanced than that, so I thought I would share my thoughts.

Removing ADP

So, what does this removal amount to? Basically, in my reading of it, it amounts to the removal of end-to-end encryption of your data at rest. Your data continues to be encrypted in transit, as it traverses the air via 4G/5G or Wi-Fi and then the internet to its final destination, Apple's servers. So a criminal, or another unscrupulous threat actor, intercepting data in transit will only get your data in its encrypted form and therefore be unable to access it in its raw form. The change comes at the point the data is stored on Apple's servers. Here, without ADP, the data is protected with keys which Apple holds rather than keys only you control, allowing Apple to access the data, to share it with law enforcement or other government entities, or for criminals to access it should they gain access to Apple's servers and keys.
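To make the distinction concrete, here is a minimal, purely illustrative Python sketch (using the cryptography package) of the difference between end-to-end encryption, where only the user holds the key, and provider-held-key encryption at rest. It is a conceptual analogy of who can read the stored data, not a description of Apple's actual implementation.

```python
# Conceptual sketch only: who holds the key determines who can read data at rest.
from cryptography.fernet import Fernet

photo = b"holiday-photo-bytes"

# End-to-end encryption (ADP-style): the key never leaves the user's devices.
user_key = Fernet.generate_key()
stored_blob = Fernet(user_key).encrypt(photo)   # what the provider stores
# The provider, or anyone breaching it, sees only ciphertext; without user_key
# the blob cannot be decrypted.

# Standard protection (ADP removed): data is still encrypted at rest, but the
# provider holds the key, so the provider can decrypt it on request.
provider_key = Fernet.generate_key()
stored_blob_2 = Fernet(provider_key).encrypt(photo)
recovered = Fernet(provider_key).decrypt(stored_blob_2)  # provider can do this
assert recovered == photo
```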

So what does this mean for privacy?

The fact that the data is no longer end-to-end encrypted at rest amounts to a reduction in privacy and an increase in risk for individuals. This is for several reasons. Firstly, an unscrupulous Apple employee could access your data, or an Apple employee might be blackmailed or socially engineered into giving away data. As Apple hold the relevant encryption keys to decrypt your data, it may be that a criminal gains access to these and is therefore also able to decrypt your data, having intercepted it in its encrypted form in transit. And there is also the issue of unscrupulous governments using the same methods as the UK government to force Apple to remove end-to-end encryption and then demanding access to data in order to target dissidents or those who are vocal about the government, all under the guise of national defence or anti-terrorism. Basically, your data without ADP is not as secure and private as it would be with ADP.

Why would anyone want to reduce privacy?

This all leads to the question of why the UK government would push Apple towards this decision. The answer is largely one of national security and public good. Privacy is a great thing, however its benefits are felt by all, and that includes terrorists, criminals, users sharing child sexual abuse material (CSAM), etc. With end-to-end encryption there would be no method for police or security services to investigate content, as they simply wouldn't be able to access it; they would need to arrest the criminal end user and get them to unlock their device in order to access content. This would limit the potential for investigations to be carried out quietly in the background, which might also limit the potential for preventative measures as opposed to reactive measures. And I note that when things do go wrong, the press is quick to identify when people have been on watch lists, etc, but what use is a watch list if you have no way to see what users are actually doing? Hindsight is 20/20, but with ADP enabled, foresight would be encrypted.

Balance

The challenge here is we are trying to balance the risks to individual privacy, as experienced by all users in the UK in this instance, with the need to identify those who may seek to cause harm, distress or even death.    I don’t believe there is a perfect solution sadly.    It is about risk-based decision making.   

My belief is that the net impact of the removal of ADP is negative. It impacts and increases risk for all users, while those who the UK government may seek to monitor or discover will simply shift to using non-Apple services and devices, meaning the gain from the removal of end-to-end encryption will be minor, if any gain exists at all. Additionally, the fact Apple have acceded to the request of the UK government will likely mean other governments request the same, although for some the motivation may be more related to their own aims than anything related to public good or safety.

Conclusion

There is, in my view, an increasing level of friction between public good and personal privacy, with this particular issue related to Apple's ADP service being the most recent and public example. We sadly cannot have privacy only for some, or only at certain times. It's privacy for all or for no-one, and where we opt for privacy for all we need to accept this will include those who seek to use privacy to cover illegal, immoral or unethical activities. This news story also highlights the challenges related to national legislation of international companies. In both cases, I think these are issues we should be discussing with our students as part of digital citizenship programmes, as these issues are only likely to grow in frequency.

Sadly, the press pick the headline which is good for getting readership rather than conveying the more nuanced nature of the situation. Maybe this also highlights the need for critical thinking skills too, so we can see through the black and white headlines into the various shades of grey which are more representative of the real world.

AI and Coursework

Coursework continues to be a significant part of qualifications, whether these be GCSEs, A-Levels, or vocational qualifications like BTecs. In BTecs, coursework is the main assessment methodology, and this hasn't changed much since I had a hand in writing some BTec units and acting as a standards verifier. The world around these qualifications, though, has changed, particularly with the availability of generative AI, so how do schools manage the use of AI by students, the requirements of examining bodies and the ethical need to ensure fairness in marking and assessment?

Firstly, let's just accept that students are using AI. This is a statement which I myself have made and which I have heard others make. The challenge is that we are often referring to ChatGPT, Gemini, Claude and the like, and to things post November 2022. The reality is that students were using AI before that. They were using spellcheckers, they were using grammar checkers and they were using Google for searches. Each of these involves AI. AI isn't new, so let's dispense with the concern regarding students using AI to cheat.

A student's "own" work

So, when looking at coursework or NEAs (non-examination assessments), JCQ states that work "an individual candidate submits for assessment is their own". At face value this makes sense, but what constitutes the student's "own" work? This blog piece, for example, has seen AI highlight spelling errors which I have since corrected, plus I have had alternative sentence structures and grammatical changes recommended, with AI behind these recommendations. With these changes, is it still my own work? In this case I am writing directly from my thoughts rather than to a structure, but if I had asked AI for some help on the structure of the blog piece before writing, would it still be mine? Having completed it, I posted it on my site, but I could have fed it into AI for feedback and suggested improvements; would the resultant blog post still be mine? And how is this use of GenAI different from using the spellchecker, grammar checker and editor built into Word? In all cases it results in a piece of work which wasn't what I originally typed, but is likely better.

Referencing: Why bother?

JCQ mentions that candidates must not "use the internet without acknowledgment or attribution". Again, on face value this seems fair, but what about the spellchecker and grammar checker? In all my years I have never seen anyone reference Microsoft's or Google's spelling and grammar checkers, yet I am pretty sure they have almost always been used. So why might Grammarly or ChatGPT, or even the Editor in MS Word, be different?

And if we accept that students are using spellcheckers, grammar checkers and almost certainly generative AI tools, surely they just end up noting that they are using them, which doesn't seem to help from an assessor's point of view. With a traditional reference to a book an assessor could at least go and look it up, but when a student uses generative AI, exactly how do I cross-reference this? And if I can't, what is the value in the reference, especially if almost every student basically states they made some use of AI, including generative AI?

Coursework: A proxy for learning

The challenge here is that we are using coursework as a proxy for testing a student's learning, their knowledge and understanding. It used to be that a piece of coursework was a good way to do this; then we got Google. We then needed to check for unusual language, etc, and use Google itself to try and prove where students had plagiarised. More recently we have generative AI and things are a bit more difficult still. We can no longer use Google to check the document for plagiarism, and don't get me started on AI detection solutions, as they simply don't work.

Maybe therefore we need to go back to basics and, if in doubt, speak to the student. If we are unsure of the proxy, of coursework, then we need to find another way to cross-check or to assess. This could be a viva, asking students to explain what they meant in sections of their coursework, or asking them to provide examples, or we could ask them to present rather than write their coursework. In each case we get to assess the student's confidence, body language, fluency, etc, in relation to the topic being assessed, rather than just what they have written down. So maybe, rather than seeking to block or detect AI use, we need to accept that we need to find new ways to assess.

A way forward?

A key starting point, in my view, with students is that of education. Students need to know what AI is and understand what is acceptable in terms of AI use. They need to understand the difference between using AI tools as an aid, such as spellcheckers, grammar checkers and even generative AI, versus using them to do the work. It might be fair to get help in eliminating spelling errors from my work. It might also be fair to get help in better structuring my thoughts or my written words. But it isn't fair if the AI writes the piece of work for me and I just present it as my own, with no real effort on my part and no real sense of my views in what is produced. I suppose it's a bit like discussing the work with a friend: if we discuss the work and this leads to a better result produced by me, then this is good, but if my friend does the work for me, then it isn't. Things are a little more nuanced than that sadly, so how much help is acceptable?

The challenge with the above is that some students will use AI correctly and some will, for various reasons, use it incorrectly or even dishonestly. How will we tell? I suspect some of this is down to professional judgement and knowing our students, and some is down to audit tools such as version history. That said, I think the easiest way for us to tell is to get to the root of learning and ask the students to explain what they have submitted, or at least part of it. If it's a good piece of work and they can explain it, then clearly they have learned the content and the work is representative. If it's a good piece of work and they can't explain it, then it isn't, and they shouldn't get credit.

12 Years of Blogging

It was 12 years ago yesterday that I posted my first ever blog post (see here). I am not sure where the time has gone, but it has seen me move from the UAE, working with schools mainly in Abu Dhabi and Dubai, to working in Somerset at Millfield, while also working with the Digital Futures Group and the Association of Network Managers in Education (ANME) to try to support schools across the UK and beyond.

My first post involved me sat on the bed in the evening of the 12th of February 2013, posting my first thoughts. I am now 550 posts further on, and my blog has afforded me the opportunity to share my thoughts, but it has also forced me to structure my thoughts in order to write them down and has allowed me to keep a permanent record of how my thinking has progressed and changed over the intervening 12 years. I think sometimes we aren't as conscious as we should be of how our own views and beliefs change and develop over time and with age and experience.

It has also been great to meet and connect with people who have actually read some of my posts. This includes meeting online, with discussion via social media, but also meeting in person at events such as BETT or the Schools and Academies Show (SAAS). I continue to believe that networking and sharing are important, and if we take into consideration the pace of technological change and the potential, or even requirement, for the use of technology in schools, they become all the more important. I keep coming back to the David Weinberger quote, "the smartest person in the room, is the room", so I can but hope my posts continue to contribute to the room of global educators sharing online.

Here's to continuing to post and to share, and to the year ahead. And for those thinking about creating a blog or posting or sharing, my advice is simple: just do it!

BETT 2025: Cyber resilience and schools

On the Friday afternoon of BETT 2025 I had the opportunity to deliver a session on cyber security for education, called "cyber resilience and schools: let's get pragmatic". Now I will admit I was a bit worried, it being a day three afternoon session, as to whether anyone would turn up; however the session was very well attended, which was great. One thing I will note, though, is that when I asked about the roles of the various people in the audience, around 95% of them were from technical IT roles. I get why this would be the case, however I worry that this is symptomatic of cyber incidents still being seen as an "IT" issue rather than a school-wide issue. When an incident happens, although IT will be the people working hard to resolve it, it will be the whole school which is impacted, including in relation to administrative tasks like registration and parental contact, teaching and learning, pastoral and wellbeing support and much more. Cyber resilience, or cyber security if you prefer that term, needs to be seen as a school-wide issue, so my thanks and applause go to the small number of school leaders who attended my session, and I hope they found it useful.

My presentation broke down into four main areas: the current context of schools and cyber security, the need for risk assessment, the need for incident preparation, and the basics which schools need to be doing to limit risk, including reducing the likelihood and impact of an incident.

In relation to the context, it is pretty easy to see the impact and risk in relation to cyber and schools, with one school being forced to remain shut at the start of the first week of BETT due to a cyber incident. The ICO also acknowledged that reported incidents in 2023 had grown 55% over those in 2022. If putting a cost figure to things, cyber crime worldwide is estimated to reach $10.5 trillion this year. So cyber crime will definitely continue, and will continue to hit schools. One key challenge for schools, though, is the limited budget available, both financially and in terms of staff resource, to tackle cyber risks and cyber resilience. This highlights the challenge for schools, however I noted a discussion at an industry event which asked whether doubling cyber-related budgetary spend might halve the risk; the common consensus was probably not. So cries for more money, although money would help, would not solve the challenge.

It is therefore about risk management and balance. Schools can be more secure, but doing so might impact on flexibility, and therefore on the educational experience of students. We need to risk assess, identifying our risks, their likelihood and impact, plus the mitigations we could put, or have put, in place, complete with any implications of such mitigations. Once we know our risks we can plan accordingly in terms of mitigation or incident planning.

My next main point was the need to accept that cyber incidents are a "when" rather than an "if", and based on this we need to prepare ourselves. For me this is where desktop exercises are useful: actually working through an example incident with colleagues to identify what needs to be done, by whom and when, plus to identify any assumptions which may have been made in terms of how an incident would be responded to. Now this was one of the exercises from my session, however the key value is in conducting such exercises in your own school, with a cross-section of your own staff, so that the exercise can be tailored to the specific needs and context of the school. It is all about thinking through the processes in the safe environment of a desktop exercise rather than in the heat of battle in the event of a real-life incident.

The last section of my presentation, which may feel a little backwards having looked at risk management and incident planning first, was that of how we might pragmatically delay an incident occurring or limit its impact. As I mentioned earlier, we don't have the resources of enterprise organisations, so we can't simply throw money or resources at the problem. For me this means we need to do the basics in terms of cyber resilience. This refers to enforcing MFA, patching as many servers as we can, providing users only with the access they truly need, etc. It is these basics that will reduce the risk level for our schools and colleges, and hopefully see criminals moving along to the next school or organisation in the hope of an easier target. And generally the basic steps don't cost the earth, other than some time to undertake them.
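As a small illustration of how low-cost these basics can be, here is a hedged Python sketch that audits a hypothetical CSV export of user accounts, flagging missing MFA, admin rights worth reviewing for least privilege, and dormant accounts. The file name and column names are my own assumptions for illustration, not the output of any particular directory or management product.

```python
# Illustrative sketch: flag accounts missing the cyber "basics" from a
# hypothetical CSV export with columns 'username', 'mfa_enabled', 'is_admin'
# and 'last_login_days' -- the format is assumed for this example only.
import csv

def audit_accounts(path: str) -> list[str]:
    findings = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            user = row["username"]
            if row["mfa_enabled"].strip().lower() != "true":
                findings.append(f"{user}: MFA not enabled")
            if row["is_admin"].strip().lower() == "true":
                findings.append(f"{user}: admin rights - check least privilege")
            if int(row["last_login_days"]) > 90:
                findings.append(f"{user}: dormant account - consider disabling")
    return findings

if __name__ == "__main__":
    for finding in audit_accounts("accounts_export.csv"):
        print(finding)
```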

Conclusion

My summation for the session was very much about the need for cyber resilience to be seen as a school-wide issue and therefore for it to be discussed at the highest levels, including governors/trustees and senior leadership. They need to have a sense of the risks being faced and to guide the approach to addressing those risks. They may not know the technical side, however they set the risk appetite and therefore guide the spending of resources, including IT staffing, plus the balance between security and flexibility, which includes flexibility in the classroom. They should also be central to considering the "what if" scenario and how the school might respond to cyber incidents such as data breaches, ransomware, etc. It is better to prepare than to have to work out what you are going to do while in the midst of a cyber crisis. And lastly, the basics: we simply need to do these, as they are the most cost-effective way to delay a cyber incident or limit its impact.

Cyber crime isn’t going away, so we need to plan and prepare, and not just the IT staff. 

Now if you wish to review my slides or the resources, which included some cyber incident cards for a risk assessment exercise, then you can access them here via Google Drive.

BETT 2025: reflections part 2

Continuing my reflections on the BETT conference from my previous post, I found Sir Stephen Fry's discussion with Dr Anne-Marie Imafidon to be quite interesting, both in exploring "science reality" and how some things from science fiction have come to pass, and in looking at how Artificial Intelligence (AI) has actually been around for some time. In terms of science reality, I did a presentation last year where I referenced an episode of Star Trek: The Next Generation in which Captain Picard appears to be using a device very much like an iPad or other tablet. It is notable that the episode aired in the 1980s and focused on events in the 24th century, when in fact the iPad made its appearance in 2010. For me this highlights that science fiction sometimes presents us with novel and interesting ideas that people then seek to make happen. It also highlights that we are pretty crap at predicting the "when" of things with any real accuracy.

In terms of the longevity of AI, the concept has been discussed since the 1950s, with periods of progress and then periods of quiet, one particular quiet period being known as the AI winter. The reality is that the current progress of AI, as discussed by Sir Stephen and Dr Imafidon, is likely down to the juncture between increasing computing power and increasing "oil fields of data". I found the point regarding how we "sleepwalked into the internet age" interesting, highlighting that we cannot do the same with AI; but did we truly know what the impact of the internet was going to be, so can we truly know what the impact of AI might be? I also found the discussion of how social media focussed on "maximising engagement" to hit the nail on the head, especially when this was expanded to include maximising bias, hatred and other negatives. The term socio-technical skills, as something we should possibly seek to develop, was a new one on me, but I can see the point.

The discussion then progressed to education and assessment, categorising the implications of ChatGPT for cheating as a minor issue brought about by the education system we currently have. This aligns with some of my views on the need to reform education. Education is not about tests or coursework, it is about learning. It isn't about grades. I found the comment regarding our current system "testing for ignorance" and then pushing it to be a particularly telling and critical assessment of the world we consider to be education. In the roundtable on assessment I took part in, one of our discussions was about how coursework and exams are simply scalable for use across different schools, regions and countries, so we use them due to this scalability rather than because they are the best thing for education, for learning, or for our students.

As the discussion moved back towards AI there was an interesting point on AI development, in terms of how we often describe AI as currently being the worst it will ever be, and that it is constantly improving. This is fair to a point, but Sir Stephen referred to the internet as "filling with slop" and "contaminated", and if we assume that AI continues to draw its training data from that internet then it too may become contaminated, so it may be possible to suggest that AI might be at its best now and only get worse as it becomes more contaminated by its own "slop". And who controls AI and its development? It was suggested that the three worst options might be the three groups most likely to lead the way on AI development: countries, corporations and criminals. In all three cases I can see the outcomes being far from positive, and we can already see the internet being used for political and national ends, for pure commercialisation, consumerisation or profit, or for crime.

I could likely write a whole series of blogs based on the session by Sir Stephen and Dr Imafidon, however rather than doing that I just want to share how they finished the discussion: on the need to find the "sweet spot", the need to find a balance between pessimism and optimism. This aligns very much with my view of balance, in that most good things will have some balancing drawbacks or challenges. We need to try and find our way to the best middle ground, the "sweet spot".

The next session I watched before hitting the BETT conference floor was titled "Education in the AI era". Again, I could write a lot about what was said, as I found it to be very interesting indeed, but I am going to avoid doing that. One key comment mentioned 30% of teachers not using AI; my sense is that this figure is lower than the reality. The data came from TeacherTapp, which I think is great, but I also think that the subset of teachers using TeacherTapp are likely to be those who are a little more tech savvy and therefore more likely to use AI, and that a greater proportion of those who don't use AI will also not be using TeacherTapp. The bigger and possibly more important question is why some teachers who know of AI aren't using it. Is it that they don't know they are using AI, but are, that they don't have access, lack training, lack confidence, or something else? In terms of access, this session also mentioned access to technology and affordance, which to me links to the concept of digital divides.

I also liked the discussion on banning and blocking AI, where it was compared to knives in food tech: we know knives can be dangerous, yet we don't ban them, so why would we ban AI in some or all subjects? Now I know this is a very simplistic and flawed analogy, and that it was likely used for effect rather than accuracy, but I think the point is valid; how often has prohibition of anything ever been beneficial or effective? It just tends to make people do it more, but do it in secret.

This session finished on the big question, which had also been raised the previous night at the Edufuturists event: what is the purpose of education? In terms of what we measure, tests, coursework, grades, are these what truly matters? And if not, what does matter, and how might we measure it, assuming we need to?

Those are some pretty deep questions to end this post on, but that's where I found myself, and it was still only the morning of day one of BETT. The afternoon would see me getting around the event and doing the networking side of things, which for me is one of the main benefits of BETT, but the sessions from the morning, and some of the other sessions I attended across the conference, were also very beneficial in stimulating thoughts and ideas, and in some places in confirming or challenging some of my thinking. Next BETT post to follow soon…