Apple, governments, privacy and public good

Apple recently announced that they are no longer providing Advanced Data Protection (ADP) for UK-based customers, in response to a request by the UK government.  ADP essentially amounts to end-to-end encryption, meaning only the users themselves can decrypt and access their data.  The press is largely carrying headlines focused on the negative impact of this decision on user privacy, either deriding Apple for reversing their long-established position on the privacy of user data or deriding the UK government for pushing Apple into this position.  And, as always, reporting tends to be very binary, but the reality is a little more nuanced than that, so I thought I would share my thoughts.

Removing ADP

So, what does this removal amount to?  Basically, in my reading of it, it amounts to the removal of end-to-end encryption of your data at rest.  Your data continues to be encrypted in transit: as it traverses the air via 4G/5G or Wi-Fi, and as it traverses the internet to its final destination, namely Apple's servers.  So a criminal, or another unscrupulous threat actor, intercepting data in transit will only get your data in its encrypted form and will therefore be unable to access it in its raw form.  The change comes at the point the data is stored on Apple's servers.  Here, without ADP, the data is still encrypted at rest, but Apple holds the decryption keys rather than the user.  This allows Apple to access the data, or to share it with law enforcement or other government entities, or for criminals to access it should they gain access to Apple's servers and keys.
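To make the distinction concrete, here is a toy Python sketch of key custody.  A trivial XOR transform stands in for a real cipher (this is deliberately not real cryptography, and the data and key names are purely illustrative); the point is simply who holds the key, and therefore who can decrypt.

```python
# Toy illustration of key custody under ADP vs standard protection.
# NOT real cryptography: a repeating-key XOR stands in for AES purely
# to show WHO can decrypt, not HOW encryption actually works.

import secrets

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR each byte with the repeating key; stands in for a real cipher.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

toy_decrypt = toy_encrypt  # XOR is its own inverse

# With ADP (end-to-end): only the user's device holds the key.
user_key = secrets.token_bytes(32)
ciphertext = toy_encrypt(user_key, b"private photo metadata")
# Apple stores only `ciphertext`; without user_key it cannot decrypt.

# Without ADP: data is still encrypted at rest, but Apple holds the
# key, so Apple (or anyone who obtains the key) can decrypt server-side.
apple_key = secrets.token_bytes(32)
server_copy = toy_encrypt(apple_key, b"private photo metadata")
recovered = toy_decrypt(apple_key, server_copy)
```

In both cases an interceptor sees only ciphertext; the difference is whether the key sits solely with the user or also with the provider.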

So what does this mean for privacy?

The fact that the data can now be decrypted at rest amounts to a reduction in privacy and an increase in risk for individuals, for several reasons.  Firstly, an unscrupulous Apple employee could access your data, or an Apple employee might be blackmailed or socially engineered into giving data away.  As Apple holds the relevant encryption keys to decrypt your data, a criminal could gain access to these and so be able to decrypt your data, having intercepted it in its encrypted form in transit.  And there is also the issue of unscrupulous governments using the same methods as the UK government to force Apple to remove end-to-end encryption and then demanding access to data in order to target dissidents or those who are critical of the government, all under the guise of national defence or anti-terrorism.  Basically, your data without ADP is not as secure and private as it would be with ADP.

Why would anyone want to reduce privacy?

This all leads to the question of why the UK government would push Apple towards this decision.  The answer is largely one of national security and public good.  Privacy is a great thing; however, its benefits are felt by all, and that includes terrorists, criminals, users sharing child sexual abuse material (CSAM), etc.  With end-to-end encryption there would be no method for police or security services to investigate content, as they simply wouldn't be able to access it.  They would need to arrest the criminal end user and get them to unlock their device in order to access the content.  This would limit the potential for investigations to be carried out quietly in the background, which might also limit the potential for preventative rather than reactive measures.  And I note, when things do go wrong, the press is quick to point out when people have been on watch lists, but what use is a watch list if you have no way to see what those users are actually doing?  Hindsight is 20/20, but with ADP enabled, foresight would be encrypted.

Balance

The challenge here is that we are trying to balance the risks to individual privacy, as experienced by all users in the UK in this instance, with the need to identify those who may seek to cause harm, distress or even death.  Sadly, I don't believe there is a perfect solution.  It is about risk-based decision making.

My belief is that the net impact of the removal of ADP is negative.  It increases risk for all users, while those whom the UK government may seek to monitor or discover will simply shift to using non-Apple services and devices, meaning the gain from the removal of end-to-end encryption will be minor, if any gain exists at all.  Additionally, the fact that Apple has acceded to the request of the UK government will likely mean other governments request the same, although for some the motivation may be more related to their own aims than to anything to do with public good or safety.

Conclusion

There is, in my view, an increasing level of friction between public good and personal privacy, with this particular issue related to Apple's ADP service being the most recent and public example.  Sadly, we cannot have privacy only for some, or only at certain times.  It's privacy for all or for no-one, and where we opt for privacy for all we need to accept that this will include those who seek to use privacy to cover illegal, immoral or unethical activities.  This news story also highlights the challenges of applying national legislation to international companies.  In both cases, I think these are issues we should be discussing with our students as part of digital citizenship programmes, as they are only likely to grow in frequency.

Sadly, the press picks headlines which are good for getting readership rather than conveying the more nuanced nature of the situation.  Maybe this also highlights the need for critical thinking skills too, so we can see through the black and white headlines into the various shades of grey which are more representative of the real world.

AI and Coursework

Coursework continues to be a significant part of qualifications, whether GCSEs, A-Levels, or vocational qualifications like BTECs.  In BTECs, coursework is the main assessment methodology, and this hasn't changed much since I had a hand in writing some BTEC units and acting as a standards verifier.  The world around these qualifications, though, has changed, particularly with the availability of generative AI.  So how do schools manage the use of AI by students, the requirements of examining bodies, and the ethical need to ensure fairness in marking and assessment?

Firstly, let's just accept that students are using AI.  This is a statement which I myself have made and which I have heard others make.  The challenge is that we are often referring to ChatGPT, Gemini, Claude and the like, and to things post November 2022.  The reality is that students were using AI prior to that: they were using spellcheckers, they were using grammar checkers, and they were using Google for searches.  Each of these involves AI.  AI isn't new, so let's dispense with the blanket concern regarding students using AI to cheat.

A student's “own” work

So, when looking at coursework or NEAs (non-examination assessments), JCQ states that the work “an individual candidate submits for assessment is their own”.  At face value this makes sense, but what constitutes the student's “own” work?  This blog piece, for example, has seen AI highlight spelling errors which I have since corrected, plus I have had alternative sentence structures and grammatical changes recommended, with AI behind these recommendations.  With these changes, is it still my own work?  In this case I am writing directly from my thoughts rather than to a structure, but if I had asked AI for help with the structure of the blog piece before writing, would it still be mine?  Having completed it, I posted it on my site, but I could have fed it into AI for feedback and suggested improvements; would the resultant blog post still be mine?  And how is this use of generative AI different from using the spellchecker, grammar checker and editor built into Word?  In all cases, the result is a piece of work which isn't what I originally typed, but which is likely better.

Referencing: Why bother?

JCQ mentions that candidates must not “use the internet without acknowledgment or attribution”.  Again, at face value this seems fair, but what about spellcheckers and grammar checkers?  In all my years I have never seen anyone reference Microsoft's or Google's spelling and grammar checkers, yet I am pretty sure they have almost always been used.  So why might Grammarly or ChatGPT, or even the Editor in MS Word, be different?

And if we accept that students are using spellcheckers, grammar checkers and, almost certainly, generative AI tools, surely they just end up noting that they are using them, which doesn't seem to help from an assessor's point of view.  With a traditional reference to a book, an assessor could at least go and look it up, but when a student uses generative AI, exactly how do I cross-reference this?  And if I can't, what is the value in the reference, especially if almost every student states that they made some use of AI, including generative AI?

Coursework: A proxy for learning

The challenge here is that we are using coursework as a proxy for testing a student's learning, their knowledge and understanding.  It used to be that a piece of coursework was a good way to do this; then we got Google.  We then needed to check for unusual language, etc., and use Google itself to try to prove where students had plagiarised.  More recently we have generative AI, and things are more difficult still.  We can no longer use Google to check the document for plagiarism, and don't get me started on AI detection solutions, as they simply don't work.

Maybe, therefore, we need to go back to basics and, if in doubt, speak to the student.  If we are unsure of the proxy, of coursework, then we need to find another way to cross-check or to assess.  This could be a viva, asking students to explain what they meant within sections of their coursework, or asking them to provide examples, or we could ask them to present rather than write their coursework.  In each case we get to assess the student's confidence, fluency, body language, etc., in relation to the topic being assessed, rather than just what they have written down.  So maybe, rather than seeking to block or detect AI use, we need to accept that we need to find new ways to assess.

A way forward?

A key starting point with students, in my view, is that of education.  Students need to know what AI is and understand what is acceptable in terms of AI use.  They need to understand the difference between using AI tools as an aid, such as spellcheckers, grammar checkers and even generative AI, versus using them to do the work.  It might be fair to get help eliminating spelling errors from my work.  It might also be fair for AI to help me better structure my thoughts or my written words.  But it isn't fair if the AI writes the piece of work for me and I just present it as my own, with no real effort on my part, and no real sense of my views, in what is produced.  I suppose it's a bit like discussing the work with a friend: if we discuss the work and that leads to a better result produced by me, then this is good; but if my friend does the work for me, then it isn't.  Things are a little more nuanced than that, sadly, so how much help is acceptable?

The challenge with the above is that some students will use AI correctly and some will, for various reasons, use it incorrectly or even dishonestly.  How will we tell?  I suspect some of this comes down to professional judgement and knowing our students, and some to audit tools such as version history.  That said, I think the easiest way for us to tell is to get to the root of learning and ask students to explain what they have submitted, or at least part of it.  If it's a good piece of work and they can explain it, then clearly they have learned the content and the work is representative.  If it's a good piece of work and they can't explain it, then it isn't representative and they shouldn't get credit.

12 Years of Blogging

Yesterday marked 12 years since I posted my first ever blog post (see here).  I'm not sure where the time has gone, but it has seen me move from the UAE, working with schools mainly in Abu Dhabi and Dubai, to working in Somerset at Millfield, while also working with the Digital Futures Group and the Association of Network Managers in Education (ANME) to support schools across the UK and beyond.

My first post involved me sat on the bed on the evening of 12th Feb 2013, posting my first thoughts.  I am now 550 posts further on, and my blog has afforded me the opportunity to share my thoughts, but it has also forced me to structure my thoughts in order to write them down, and has allowed me to keep a permanent record of how my thinking has progressed and changed over the intervening 12 years.  I think sometimes we aren't as conscious as we should be of how our own views and beliefs change and develop over time, with age and experience.

It has also been great to meet and connect with people who have actually read some of my posts.  This includes meeting online, with discussion via social media, but also meeting in person at events such as BETT or the Schools and Academies Show (SAAS).  I continue to believe that networking and sharing are important, and if we take into consideration the pace of technological change and the potential, or even requirement, for the use of technology in schools, they become all the more important.  I keep coming back to the David Weinberger quote, “the smartest person in the room is the room”, so I can but hope my posts continue to contribute to the room of global educators sharing online.

Here’s to continuing to post and to share, and to the year ahead.  And for those thinking about creating a blog, or posting, or sharing, my advice is simple: just do it!

BETT 2025: Cyber resilience and schools

On the Friday afternoon of BETT 2025 I had the opportunity to deliver a session on cyber security for education, called “Cyber resilience and schools: let's get pragmatic”.  Now, I will admit I was a bit worried that, it being a day-three afternoon session, no one would turn up; however, the session was very well attended, which was great.  One thing I will note, though, is that when I asked about the roles of the various people in the audience, around 95% of them were from technical IT roles.  I get why this would be the case, but I worry that it is symptomatic of cyber incidents still being seen as an “IT” issue rather than a school-wide issue.  When an incident happens, although IT will be the people working hard to resolve it, it is the whole school which is impacted, including administrative tasks like registration and parental contact, teaching and learning, pastoral and wellbeing support, and much more.  Cyber resilience, or cyber security if you prefer that term, needs to be seen as a school-wide issue, so my thanks and applause go to the small number of school leaders who attended my session, and I hope they found it useful.

My presentation broke down into four main areas: the current context of schools and cyber security, the need for risk assessment, the need for incident preparation, and the basics which schools need to be doing to limit risk, including reducing both the likelihood and the impact of an incident.

In relation to context, it is pretty easy to see the impact and risk of cyber incidents on schools, with one school forced to remain shut at the start of the first week of BETT due to a cyber incident.  The ICO also acknowledged that reported incidents in 2023 had grown 55% over those in 2022.  If putting a cost figure to things, cyber crime worldwide is estimated to reach $10.5 trillion this year.  So cyber crime will definitely continue, and will continue to hit schools.  One key challenge for schools, though, is the limited budget available, both financial and in staff resource, to tackle cyber risks and cyber resilience.  I noted a discussion at an industry event which asked whether doubling cyber-related budgetary spend might halve the risk; the common consensus was probably not.  So cries for more money, although money would help, would not solve the challenge.

It is therefore about risk management and balance.  Schools can be more secure, but in doing so this might impact flexibility, and therefore the educational experience of students.  We need to risk assess: identifying our risks, their likelihood and impact, plus the mitigations we could put, or have put, in place, complete with any implications of such mitigations.  Once we know our risks, we can plan accordingly in terms of mitigation or incident planning.
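As a sketch of what such a risk assessment might look like in practice, here is a minimal likelihood-times-impact register in Python.  The example risks and the 1-5 scores are purely illustrative, not drawn from any official framework or from my session.

```python
# A minimal risk-register sketch: score = likelihood x impact,
# then sort so the biggest risks surface first.
# Risks and 1-5 scores below are illustrative only.

risks = [
    {"risk": "Ransomware via phishing email", "likelihood": 4, "impact": 5},
    {"risk": "Lost unencrypted staff laptop", "likelihood": 3, "impact": 4},
    {"risk": "MIS outage during registration", "likelihood": 2, "impact": 3},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]  # max score 25

# Highest-scoring risks first: these get mitigation attention first.
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["score"]:>2}  {r["risk"]}')
```

Even something this simple forces the useful conversation: agreeing the scores, and deciding which mitigations are worth their cost and their impact on flexibility.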

My next main point was the need to accept that cyber incidents are a “when” rather than an “if”, and that based on this we need to prepare ourselves.  For me this is where desktop exercises are useful: actually working through an example incident with colleagues to identify what needs to be done, by whom and when, plus to surface any assumptions which may have been made about how an incident would be responded to.  This was one of the exercises from my session; however, the key value is in conducting such exercises in your own school, with a cross-section of your own staff, where the exercise can be tailored to the specific needs and context of the school.  It is all about thinking through the processes in the safe environment of a desktop exercise rather than in the heat of battle during a real-life incident.

The last section of my presentation, which may feel a little backwards having looked at risk management and incident planning first, was how we might pragmatically delay an incident occurring or limit its impact.  As I mentioned earlier, we don't have the resources of enterprise organisations, so we can't simply throw money or resources at the problem.  For me this means we need to do the basics in terms of cyber resilience: enforcing MFA, patching as many servers as we can, providing users only with the access they truly need, etc.  It is these basics that will reduce the risk level for our schools and colleges, and hopefully see criminals moving along to the next school or organisation in the hope of an easier target.  And generally the basic steps don't cost the earth, other than some time to undertake them.

Conclusion

My summation for the session was very much about the need for cyber resilience to be seen as a school-wide issue, and therefore for it to be discussed at the highest levels, including governors/trustees and senior leadership.  They need to have a sense of the risks being faced and to guide efforts to address them.  They may not know the technical side, but they set the risk appetite and therefore guide the spending of resources, including IT staffing, plus the balance between security and flexibility, including flexibility in the classroom.  They should also be central to considering the “what if” scenario and how the school might respond to cyber incidents such as data breaches, ransomware, etc.  It is better to prepare than to have to work out what you are going to do while in the midst of a cyber crisis.  And lastly, the basics: we simply need to do these, as they are the most cost-effective way to delay or limit the impact of a cyber incident.

Cyber crime isn’t going away, so we need to plan and prepare, and not just the IT staff. 

Now if you wish to review my slides or the resources, which included some cyber incident cards for a risk assessment exercise, then you can access them here via Google Drive.