Digital Standards in schools: Consultation

It was with some interest that I read the DfE’s consultation on making some of their digital standards compulsory by 2030. I think the digital standards are a positive step forward, providing guidance to help schools develop processes and procedures around technology use, and to guide technology decision making; equally, however, they aren’t without their limitations.

You can see and respond to the consultation here.  

It was on a Teams call that I first heard of the consultation, which looks at making six of the digital standards compulsory. So my first act was to try and guess which standards would be involved; I went for Leadership and Governance, Cyber Security, Filtering and Monitoring, and Broadband. These felt like the right ones, as technology can be expensive: even setting aside hardware and software, it is still expensive in training and staff development, especially where the wrong technology decisions are made. As such it seems only logical that leadership and governance would be covered; you need a direction, a strategy, before you make any other decisions. Next were cyber security and filtering and monitoring, as both are widely discussed in relation to education and, like leadership and governance, are very much about processes, procedures, policies and risk management, all of which can be explored and examined at minimal cost. My final selection was broadband, as this is something schools can easily assess and act on as soon as any existing contract is up.

At that point I was a little stuck for the remaining two standards, which, as I found out, would be Wi-Fi and Switching. Now I totally get why these were selected, as they are the basic infrastructure components of technology use. We can have plans for fancy AI software or for the most advanced end point devices, but without reliable and robust infrastructure, the network switching and Wi-Fi provision, they are of little use. The challenge here, however, is one of cost, both in terms of the equipment and the resources to set it up and maintain it post-install. Now some money has been promised to support schools in this area, which I see as a positive step, however I don’t think there is truly an appreciation of the state of IT infrastructure in schools across England, so any funding allocation can only really be a guess. Whether that guess turns out to be enough remains to be seen, although it is important to note that any investment will move things forward, so it is way better than nothing.

There is another challenge, or concern, I have, and it relates to funding. I have seen in the past where funding gets allocated to support technology in schools, however technology investment is not a “one and done”. Once you invest, and once teachers and students start using technology in lessons and around school, you will need to continue to invest just to maintain the status quo, never mind to advance. This is because Wi-Fi access points and switches will need to be replaced when they reach end of life, as will end point devices and the other components which together make up the IT in a school. Using end of life equipment may introduce cyber security or reliability risks, which in turn could impact technology use in lessons and therefore students. Interestingly, the DfE standards do refer to refresh cycles, so I wonder whether future funding plans will factor these in.
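To make the “not one and done” point concrete, here is a minimal back-of-the-envelope sketch of a steady-state refresh budget, spreading each item’s replacement cost over its refresh cycle. The device counts, unit costs and cycle lengths below are purely illustrative assumptions on my part, not DfE figures:

```python
# Rough annualised refresh cost: replacement cost spread over each
# item's refresh cycle. All figures are illustrative guesses, not DfE numbers.
estate = [
    # (item, quantity, unit cost in GBP, refresh cycle in years)
    ("Wi-Fi access point", 60, 450, 5),
    ("Network switch", 15, 1200, 7),
    ("Staff/student laptop", 400, 350, 4),
]

total = 0.0
for item, qty, unit_cost, cycle_years in estate:
    annual = qty * unit_cost / cycle_years  # yearly cost just to stand still
    total += annual
    print(f"{item}: ~£{annual:,.0f} per year")

print(f"Estimated annual refresh budget to maintain the status quo: ~£{total:,.0f}")
```

Even with these modest, made-up numbers the annual figure is significant, which is exactly why one-off funding pots only go so far.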

Another challenge I see in the standards is that they are trying to guide schools which exist in very different contexts. We have large Multi-Academy Trusts (MATs) with strong centralised IT functions, small individual primary schools or large secondary schools with more limited IT resources, and everything in between and more. It is therefore difficult for the standards to be uniformly applied to all, which would need to be the case if they are to be compulsory, rather than contextualised and interpreted as they can be while they are simply guidance. There is also the question of who actually will be checking that schools have complied; I don’t think OFSTED would be able to check this, so who would?

Conclusion

I think some schools will have difficulty meeting the digital standards, especially if there is an absence of funding. That said, sometimes what is measured is what matters, and maybe requiring schools to adhere to the digital standards will propel technology up schools’ lists of priorities.

I very much look forward to seeing the results of this consultation, although I suspect funding will be key, particularly around the Wi-Fi and switching standards. If so, maybe the easy solution is simply to apply the other four standards initially, and maybe this could even be done before 2030?

AI and the digital divides

The digital divides are something I have been discussing for a while. They aren’t anything new, albeit I always use the plural rather than the singular “divide”. This is due to my belief that it isn’t a simple single divide but multiple inter-related divides, including access to hardware, high speed internet, support, and more. In the discussion of AI I have been worried about it adding another divide, but speaking recently at an EdExec Live event got me thinking a bit more broadly.

AI closing divides

Maybe AI might close divides rather than open them. If we consider teaching staff, maybe AI in the hands of teachers will mean teachers generally are able to be more creative and engaging with lesson content. So rather than only some students benefitting from creative teachers, those who are artistic or musically creative and have the skills to turn this into lesson content, AI will put these capabilities into more teachers’ hands. You can create something artistic without necessarily being artistic yourself, as long as you have the ideas and can outline them to generative AI. I think back to teaching during an OFSTED inspection many years ago, where I did a lesson on relative vs. absolute cell referencing in Excel using the game of battleships to get the concept across. I had the skills to make this engaging with video content and more, but I would suggest that at the time, maybe some 20 years ago, I was in the minority. Fast forward to today and video and image content can easily be created using AI, putting the potential to create interesting, engaging content into the hands of more teachers than ever before.

We also need to look at student work, such as coursework. Those students who struggle to get started, or who need support finessing and checking their work, suddenly have AI tools available to help. Students taught in English where it is their second or maybe third language now have tools to translate content. Students with SEND also have AI tools which can help, and this help basically amounts to reducing, or even removing, divides which previously existed. In one discussion after my session at the EdExec event we talked about coursework and marking, with the suggestion that the gap between the best and the worst work will be narrowed through AI. This may lead to a need to refine marking boundaries, to refine expectations, or even to refine the assessment methodologies as a whole, but whichever way you look at it, it is a reduction in some divides.

AI growing the divide

The likely big issue is one of socioeconomic divide and access to AI tools and the required devices, infrastructure and support. This will be uneven. But I wonder if it is for schools to solve socioeconomic issues which stretch way beyond schools, into access to health support, opportunities beyond school, positive family cultures and more. We do want to address this, but I am not sure schools have it within their power.

What schools do have in their power is to address the divide which may grow between students at schools engaging with AI and those at schools seeking to prohibit and ban AI use. If we simply accept that AI is here, has been for a while, and that we are all using it, and especially that students are using it, then maybe a ban doesn’t make sense. Maybe we then find ourselves seeking to work with students and to teach them about AI and its ethical and safe use.

Elephant in the room

And as to the “cheating” narrative: is pen and paper cheating, compared with having to explain a concept in person? I would suggest that for an introvert, a debate or discussion of a concept would put them at a disadvantage, whereas providing pen and paper shapes thinking and the output; it encourages slower, linear thinking and a type of structure not quite as present in a discussion or debate. Taking this idea further, what about students using a laptop or computer as part of their exam concessions; is this cheating? Isn’t it just about reducing the divide between them and other students? So why is AI use cheating if it reduces divides? Maybe we need to start asking students why and how they used AI, what the benefits were, and so on. And definitely, let’s not ask them to reference AI tools, as I don’t see the point in this; they don’t reference which search engine they used, yet this shaped the resources presented to them. AI is a tool, it is here, so let’s get students using it, while teaching them about its use and getting them to use it safely and ethically. Yes, some students may try to use it to cheat, but let’s treat them as the exception rather than the rule, and develop plans for how we deal with this. If we don’t believe the work is the student’s, that it represents what they have learned, then let’s just ask them to present it or explain it.

Conclusion

AI is a tool, it is here, and it has the potential to narrow some divides, as well as the potential to widen others. I doubt there will be a perfect solution, so we are going to need to navigate our way through, considering benefit and risk and making the best reasonable decisions possible. If we can narrow the key divides where schools have the ability to address them, while avoiding widening others, then this is likely the best we can achieve. Maybe this will require us to think carefully about the scope of education and schools, and what they can reasonably be expected to impact, and start there.

EdExec Live, Herts

I recently had the opportunity to contribute to the EdExec Live event in Hertfordshire. Now, I have contributed to EdExec Live events in the past, but this is the first time I have done so in Hertfordshire. I need to admit, as is all too common for me, travel to the event came complete with travel disasters: I got easily to London and across London, but subsequent trains were cancelled or delayed, leading to an Uber and a total travel time of just over six hours. But enough of my usual travel woes.

I think the first thing of note is my belief that education, teaching and learning in schools, takes a village. It requires various people doing various roles: teachers in the classroom, teaching SLT members, IT staff supporting the IT setup, as well as school business leaders and more. Now I am lucky, as a teacher of many years, to contribute to the teaching side of things, and as an ANME ambassador to contribute to the IT side of things; the EdExec events allow me to contribute to the school business leader side of things. As I have said many times before, collaboration and sharing is so important, or as David Weinberger put it: “the smartest person in the room is the room”. As such it is so important that we share widely, including beyond the silos associated with specific roles. I am therefore keen to share and be involved in discussion with education professionals across the various roles which work towards ensuring schools operate and students succeed.

The conference was opened by Stephen Morales from the Institute of School Business Leadership (ISBL), and so much of what he said aligned with my own thinking. Firstly, he mentioned the implications and impact of geopolitics on education. This was something I had heard only a few weeks earlier at an information security conference, where it was clear that the information security and cyber security of organisations, including schools, were being impacted by geopolitical issues. Stephen also mentioned the privilege divide, which refers to socioeconomic divides and in turn has a direct impact on technology divides. We clearly need to reduce divides where possible, building equity, however sometimes the easy “solutions” have unintended consequences in this complex world, so we need to make sure our decisions are measured and considered.

Stephen referred to the need for collaboration and also to the need to consider technology. Both are things I believe in strongly, and I believe there is a relationship between the two. Given how quickly tech changes and advances, we cannot stay up to date on our own, so the best solution we have continues to be collective action: sharing, discussing, and using the wealth of experience, thought and skills of the education sector as a whole. He also referred to structures, processes, people and technology, and I think this is key, considering not just the technology but the people using it and the processes it is being used for. This immediately got me thinking about teaching and the TPACK model.

He also mentioned AI, which was the focus of the presentation I gave immediately following his keynote. You can access my slides here. Some of my key points from the session were that AI is here now and students are definitely using it, as are many staff. We can’t put that genie back in the bottle. As such we need to look at how we can harness AI, and that’s not just generative AI, but the various other branches of AI too. We need to look at its use in teaching, helping teachers prepare content and mark work; in learning, putting AI in the hands of students; and in the administrative aspects of schools, both in the classroom and in the wider school. I made the point that this isn’t without risk, which was apt given the next session I attended, led brilliantly by Laura Williams, was specifically about risk management. If we want to benefit from the potential of AI, we will need to deal with the risks. If we don’t allow the use of AI, if we ban it, we don’t need to deal with the risks of AI usage, although there are risks resulting from that too, from not teaching about and not allowing AI use. It’s the balance issue I often talk about.

My session talked about the need for an AI strategy which aligns with the technology strategy, which in turn aligns with the school strategy. They are inter-related. I also mentioned the need for appropriate foundations: we can’t look at AI without good infrastructure, devices, support and training. An AI strategy, and a tech strategy, as well as a school strategy, has to be built on solid foundations. Chasing the next shiny AI tool without the fundamentals in place just won’t work.

In terms of risks, I mentioned bias and inaccuracies, however I also noted that humans are not short of these challenges either, albeit we don’t always appreciate them. Data protection continues to be an issue, however data protection in the world of AI is often simply good data protection as applied to any online or technology service. Obviously automated decision making needs a little more consideration, however consider how many of the online content platforms schools have been using for years, which recommend and direct students to learning content, are not fully transparent as to how their algorithms, their AI, make decisions.

Thinking back to Stephen’s presentation, he mentioned fears as to AI replacing humans. For me, as for Stephen, it is about AI and humans working together, rather than one replacing the other.

The conference was yet another opportunity to share my thoughts and to engage with others as to theirs, and some of the discussions I had over lunch were very interesting indeed. Schools are clearly at different points, and in different contexts, and for me this is fine; however if we wish to move forward I continue to believe in the need to work collaboratively and to share. I came away from the event with new thoughts and ideas, and I hope those who attended my session came away the same.

TEISS, Infosec summit

Last week saw me attend the TEISS European Information Security summit down in London. This is one of my annual journeys outside the education bubble to look at cyber security, resilience and health in the broader industry and enterprise context. I feel it is always important to seek diversity and to avoid the issues associated with existing purely within a silo, so stepping outside my day to day on a regular-ish basis is a must.

More of the same, but greater volumes and speed

If I were to summarise one of my main takeaways from the event, it would be that a lot of what I heard was similar to what I had heard a year before. Cybercrime continues to grow, in terms of both the threat and its potential impact. The specific threats, such as ransomware or social engineering, haven’t really changed, but the frequency and speed of attacks have increased. One particular slide looked at nation state actors, showing how some were now down to a breakout time, from compromise to exfiltration, of under six minutes. Now it isn’t likely that schools will have to face nation state actors, albeit we could end up as collateral damage, however this increase in speed for nation state actors is likely mirrored by other threat actors, including those schools may actually face. Related to this, one presenter showed screenshots of the AI powered cybercrime tools now available, highlighting that AI, and in particular Large Language Models, not only has the potential to increase the productivity and efficiency of users, it also has the potential to increase the productivity and efficiency of criminals. I was aware of FraudGPT and WormGPT, so this wasn’t new to me, however the subsequent slide showed an automation and orchestration platform which criminals could use. The combination of AI powered creation tools alongside automation tools gives me concern, as it would clearly give criminals the ability to broadly launch convincing attacks where any compromise can be quickly leveraged before defenders have an opportunity to react. Think Power Automate for criminals: lots more, and better, phishing emails, where user errors are quickly capitalised on to deliver malware, extract data or propagate further attacks.

Geo-political instability

Discussion of geo-political instability and its impact on information security was very interesting, especially considering the room was full of cyber security professionals charged with protecting companies and data, including companies responsible for critical national infrastructure. From a school point of view this might seem outside of our wheelhouse, however on reflection I wonder about our need to educate students in relation to it. We have already seen that modern warfare now involves a cyber element, with the cyber element often preceding any physical engagement. Do students need to be aware of the implications of globally connected digital services in a world of increasing conflict along national and geographic borders? How might these issues directly impact us, but also what about where we are indirectly impacted, or where the impact is subtle manipulation via social media? I suspect there is a whole post possible on this alone.

User awareness and training

I spent a significant part of the conference watching sessions within the Culture and Education stream. There was some good discussion in relation to culture and the testing of cyber resilience, particularly the use of phishing awareness testing. These tests are very good at giving us a snapshot, or even a longitudinal view, of our general cyber resilience, however they aren’t as useful at an individual user level. Presenting a staff member or student with additional training material to complete after they have fallen for a phishing test doesn’t find them at their best in terms of their potential to learn. One presenter offered an alternative view, suggesting that all users mean to do the right thing, so we should be asking what it is that makes them do the wrong thing, rather than focusing on how we change individuals’ behaviour. For me this very often comes down to being time poor, and therefore being in a rush or suffering workload issues, so I am not sure quite what we can do about it. In my view, the world and our roles only ever see us adding more tasks and activities, and very seldom do we take things away, so it is no wonder that we are time poor, and no wonder that in our hurry we fall for social engineering and phishing emails. That said, it is definitely worth the conversation as to what the barriers to good cyber behaviours are, and then looking to see if there is any way to address them. I suspect we won’t solve the issue, but I bet there will be some possible quick wins.
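As an aside on the snapshot versus longitudinal point, the organisation-level value of phishing tests is easy to see in the numbers. Here is a minimal sketch, using invented campaign figures (any real data would come from whichever phishing simulation platform a school uses):

```python
# Illustrative phishing simulation results per campaign:
# (campaign month, emails sent, links clicked). Figures are invented.
campaigns = [
    ("2024-09", 500, 85),
    ("2025-01", 500, 60),
    ("2025-05", 500, 42),
]

rates = []
for month, sent, clicks in campaigns:
    rate = clicks / sent * 100
    rates.append(rate)
    print(f"{month}: click rate {rate:.1f}%  (the snapshot view)")

# The longitudinal view: the direction of travel across campaigns.
trend = rates[-1] - rates[0]
print(f"Change since the first campaign: {trend:+.1f} percentage points")
```

The aggregate trend tells you something useful about the organisation, but notice it says nothing about why any individual clicked, which is the gap the presenter was pointing at.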

Recovery over prevention

One presenter made a very interesting observation: we continue to spend too much time focussed on prevention, rather than looking at how we might respond to and recover from an incident. I can immediately see why we might focus on prevention, as if a cyber incident doesn’t happen then all is good. The reality, however, is that cyber incidents are almost guaranteed. And if we accept that an incident is definitely going to happen at some point, then we are better spending a little less time on prevention and a little more considering what we will do when an incident does happen. This can easily be done through desktop exercises, and doing so is always preferable to having to work it out when the world is on fire in the midst of a real cyber incident. To that end, I actually delivered a little exercise only the other day.

People, Processes and Technology

One of the biggest takeaways from the event was the mention of People, Processes and Technology (PPT for short, and not the Microsoft app). Sadly, all too often we focus on the technology. How can we technically keep data secure? How can IT deliver training to those clicking a phishing link? What we need to do more of is consider the people involved and their impact, as well as the processes. If we consider people, processes and technology together, we will likely have the best opportunity of keeping things secure and safe. And I note that considering people, processes and technology isn’t just an infosec thing; it can equally be applied to school technology strategy, to the use of technology in classrooms, and much more.

I suspect that as we continue to make use of more technology, and as technology further pervades every aspect of our lives, we will increasingly need to look to the human contribution and to human behaviour, rather than getting so focussed on the tech.

AI: Time to give up pen and paper?

I have been reading The Experience Machine by Andy Clark off and on for quite a few months, however the other day, on a train down to London for an InfoSec event, I once again had an opportunity to do some reading. It wasn’t long before I was reading Clark’s thoughts on the extended mind, and it got me thinking about the current discussion in relation to AI use in schools, and in particular its use by students for “cheating”.

Clark talks about how humans have sought to extend their capabilities through the use of tools, from basic tools like the pencil through to technological tools like devices and apps. He makes the point that rather than just being something which is used, the use of tools results in fundamental changes to our thinking processes, to our minds. We have developed as a species through our ability to use tools and to adjust our thinking processes around them, in order to do more than we could before.

Taking this into the world of education, I have repeatedly talked about the JCQ guidance in relation to Non-Examined Assessments (NEAs), where it talks about making sure the work is the student’s own. Well, if we take Clark’s comments, then the output produced by a student using the tools of pen and paper was shaped not just by the student but by the pen and paper they used. The pen and paper shaped thinking processes, ordering and more, influencing what the student produced. Maybe the sheet of paper influences how much the student produces? Maybe the difficulty of erasing content written in pen influences the student’s decisions as to whether to change or remove sections they have written. So is it still the student’s own work?

Consider a different tool, this time a laptop in the hands of a student with exam concessions which allow them to type rather than write. Again, I would agree with Clark that the tool, rather than being just a tool, changes the thinking processes. With a laptop a student can more easily shift and reform their thoughts and ideas, moving paragraphs around and erasing or adding content as needed. This means the processes related to ordering content, which might be needed when using pen and paper, are no longer as important. A student with a laptop might be more willing to take risks and explore their writing, knowing they can easily change, add or edit, whereas a student with pen and paper may be a little more risk averse, and therefore more creatively limited.

So now let me take a leap, and I suspect some will see it as a leap too far. What if the tool, rather than pen and paper, is actually a generative AI solution? The interactions with the AI, assuming the student has been taught to use AI and has developed the appropriate skills, will shape the student’s thinking processes. Maybe the broad training data of the AI will result in the student considering aspects of the topic they may not otherwise have explored. Maybe their language will change, making greater use of academic language as a result of the academic content which makes up the AI’s training data. Maybe their language will be a bit more flowery and expressive than it would be without an AI tool. As with the laptop, AI may make the student more creative and less risk averse, knowing they can easily edit, get feedback and make iterative improvements. Is this any less the student’s own work?

I need to be clear here that I am not suggesting we just jump on the AI bandwagon without thinking. We definitely need to consider the risks and challenges and to seek a path towards the ethical, responsible and safe use of AI in schools. But we also need to acknowledge that we now use many tools which we would not give up. We would not give up the pen and pencil, the calculator, email and much more, and each of these is more than just a tool. As we have become accustomed to using them, they have changed how we as humans think and operate; they have changed how our minds work. AI will do the same, and we need to think about it, but if our reason for not using AI is that it will change us, that it is cheating, or that it produces things which are not our own work or not truly representative of the real us, then does this mean we need to give up all other tools, including pen and paper and the written word?