Tech vendors should do more?

There is a lot of discussion about how tech vendors, and particularly big tech vendors, need to do better, whether in relation to data protection, online safety, addressing fake news or many other considerations.   A recent presentation by Laura Knight at FutureShots24, where she spoke of finite and infinite games and of Simon Sinek's book, "The Infinite Game", got me thinking about this again.

Tech vendors need to sort it

Firstly, it is important to acknowledge the benefits of technology.   The tools we have and use exist because they are useful, and the tech companies that continue to operate do so because we, as users, choose their solutions.   But there are also challenges and drawbacks associated with most technologies, and it is pretty clear that tech vendors need to do more to address the various risks which come about as a result of their products.   They provide a tool, whether a productivity suite, a social media application or a generative AI tool, among many others.   Many people use these tools appropriately and for good; however, there are also those who use them for ill, for criminal, unethical and immoral purposes.   Now, I have blogged on this before: tools are neither good nor bad, it is their use which is good or bad.   The challenge is that through technology the resulting impact is magnified.   I have talked of a hammer as a tool, and how it could be used for assault, but unlike a hammer, a maliciously used social media tool can impact hundreds or thousands of people at once; the potential impact of the tools is much broader.   So, from this, it seems clear that tech vendors need to consider this negative impact and seek to mitigate the risk in the design of their platforms and through their processes.

The key here is that we are not really looking at these tools, but at their impact on wider society.   Society will continue, for good or for ill, long into the future.   It is an infinite game.   Long after I am worm food, society will continue.   Likely long after many of these tech platforms have been and gone (think MySpace, Friends Reunited and the like), society will continue.

And so, we look to rules and to laws to provide us with frameworks and protections, where these rules and laws will exist long into the future, although they may evolve and be adjusted over time.   Sadly, though, these laws and rules are designed for the long, infinite game and are therefore slow to change, relying on established processes and methods not designed for the quickly changing technological world we find ourselves in.

With laws unable to keep up, we find ourselves complaining that the tech vendors need to do more, and this is likely the case.   But the tech vendors know their time is limited, as they may be dispatched to the bin should the next viral app come along, so they don't want to expedite this by making a safer but less usable, less enjoyable, or less attractive and addictive platform.   We have a problem!

But the tech companies are important

The tech companies are driven by profit; they are, after all, money-making companies with shareholders to answer to.   That said, many of the big tech companies do try to establish the moral and ethical principles by which they operate.   It is their drive for money which leads them to "move fast and break things", to innovate and disrupt as they seek the next big thing and the corresponding profits which come with it.   And we need this innovation.   If we left innovation to governments, their processes, laws and rules would make innovation so much slower than it is in the hands of tech companies.   I suspect we would still be using 5¼" floppy discs at this point!

The tech companies play the finite game, knowing that in this game there will be winners and losers, so moving fast, disrupting and innovating is the only way to avoid being confined to the technology bin of history; think the Polaroid camera, the MiniDisc, and the platforms I mentioned earlier.   So, if the choice is spending longer to create a safer platform, but possibly being second to market, or getting it out quickly and being first but then having to address issues later on, closing the gate after the horse has bolted, it seems pretty clear which the tech companies will choose.   Being first means survival while being second might spell doom.

Solution?

I am not really sure that there is a solution here, or at least not a perfect or near-perfect one.   Things will go wrong, and when they do we will be able to highlight what could have or should have been done by tech vendors, governments or individuals to prevent the outcome.   But we have to remember we are dealing with technology tools operating at scale; take TikTok, for example, with its approximately 1 billion monthly users.   We haven't banned cars, yet car accidents continue to happen!

Tech companies will continue to focus on the finite game, on maximising profit for their shareholders and on remaining viable, while politicians will also play the finite game, focussing on policies and proclamations which are more likely to be positively received and to keep them in power, or help them to power.   But the world and society are an infinite game, where what we do now may impact how things are for future generations.

I think we need to be pragmatic, and I also think it's about partnership and working together.   If governments, tech vendors and user groups can work together to discuss the benefits, the concerns and the issues, maybe we can make some progress.   Maybe we can find the best "reasonable" options and the "good enough".   And I note that some of this, I feel, is already happening within some companies.   I suppose my one conclusion is simply that it isn't for tech vendors to do more, it is for us all to do more: tech vendors, governments, schools, parents and adults more broadly, communities, and more.   And if we can do it, discuss and explore, find and test solutions together, then maybe we can start to address some of the challenges.

KCSiE: Filtering and Monitoring

I was recently reviewing the new Keeping Children Safe in Education (KCSiE) update, including the main changes which relate to filtering and monitoring.   I noted the specific reference to the need to "regularly review their effectiveness" and also the reference to the DfE's Digital Standards in relation to filtering and monitoring, where it mentions "Checks should be undertaken from both a safeguarding and IT perspective."

The safeguarding perspective

From a safeguarding point of view, I suspect the key consideration is whether filtering and monitoring, and the associated processes, keep students safe online.   So, are the relevant websites or categories blocked, and do relevant staff get alerts and reports which help identify unsafe online behaviours at an early stage?   This applies whether students are attempting to access blocked sites or accessing sites which are accessible but considered a risk or indicator, and therefore specifically monitored and reported on.

From a safeguarding perspective it is very much about the processes and how we find out about students accessing content which may be of concern, or attempting to access blocked content.   From here it is about what happens next, and whether the holistic process, from identification via filtering and monitoring, through reporting, to responding, is effective.   Are our processes effective?

The IT perspective

From an IT perspective, in my view, it is simply a case of whether the filtering and monitoring works.   Now, I note here that no filtering and monitoring solution is foolproof, so I believe it is important to acknowledge that there are unknown risks, including new technologies to bypass filtering, the use of bring your own network (BYON), etc.   Who would have thought, a year ago, of the risk of AI solutions being used to create inappropriate content or to allow students to bypass filtering solutions?

Having acknowledged that no solution is perfect, we then get to testing whether our solution works.   One tool I have used for this is the checking service from SWGfL, which can be accessed here.   It checks against four basic areas to see if filtering is working as it should.

I, however, wanted to go a little further.   To do this I gathered a list of sites which I deemed appropriate for filtering, covering each of the various categories we had considered.   I then put together a simple Python script which attempts to access each site in turn and outputs whether the attempt was successful to a CSV file for review.   The idea was that this script could be executed for different users and on different devices, e.g. on school classroom computers, on school mobile devices, for different student year groups, etc.   The resulting responses, if they match our expectations for what should be allowed or blocked, allow us to evidence the checking of filtering from an IT perspective, plus identify where there might be any issues and seek to address them.

You can see the simple script below, where it tests for social media site access; you can simply add further URLs to the list to test them:


import requests

# URLs which should be blocked by the school's filtering solution
website_urls = [
    "https://www.facebook.com",
    "https://www.twitter.com",
    "https://www.linkedin.com",
]

with open("TestResults.csv", "w") as f:
    f.write("URL,Result\n")
    for url in website_urls:
        try:
            # HEAD keeps the request lightweight; follow redirects so we see
            # the final response rather than an interim 3xx
            response = requests.head(url, timeout=10, allow_redirects=True)
            if response.status_code == 200:
                print(url + " - Accessible")
                f.write(url + ",Accessible\n")
            else:
                # A non-200 response may indicate the filter's block page
                print(url + " - Status " + str(response.status_code))
                f.write(url + ",Status " + str(response.status_code) + "\n")
        except requests.RequestException:
            # Connection refused/reset - typical where the filter drops the request
            print(url + " - Site blocked!")
            f.write(url + ",Site blocked!\n")


Now, the above may need to be changed depending on how your filtering solution works.   I did consider looking at the URL of our blocked page; however, as the above worked, I didn't have to.   My approach focused on the return codes, but if you do need to work with an error page URL, I suspect this article may be of some help.
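If your filtering solution redirects to a block page rather than dropping the connection, the status-code approach won't distinguish blocked from accessible sites. A variation along the following lines might work instead; note that the block page address here is purely a hypothetical placeholder, to be replaced with your own solution's block page URL, and whether GET or HEAD behaves best will depend on the filter, so treat this as a sketch to adapt rather than a definitive implementation:

```python
import requests

# Hypothetical placeholder - replace with your filtering solution's block page URL
BLOCK_PAGE = "https://filter.example-school.org/blocked"

def is_block_page(final_url, block_page=BLOCK_PAGE):
    """True if the URL we finally landed on is the filter's block page."""
    return final_url.startswith(block_page)

def check_url(url):
    """Return 'Blocked' or 'Accessible' depending on where the request ends up."""
    try:
        # allow_redirects follows any redirect the filter issues to its block page
        response = requests.get(url, timeout=10, allow_redirects=True)
    except requests.RequestException:
        return "Blocked"  # connection refused or reset by the filter
    return "Blocked" if is_block_page(response.url) else "Accessible"
```

The same CSV-writing loop as above could then call check_url for each site in the list.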

Conclusion

Before I used the script for the first time I made sure the DSL was aware; I didn't want a test student account which appeared to be hitting lots of inappropriate content, over a short period of time and in sequential order, to cause panic.   The script then provided me with an easy way to check that what I thought was blocked was being blocked as expected.   As it turned out there were a few anomalies, some relating to settings changes and others to changes to websites and mis-categorisation.   As such, the script proved a little more useful than I had initially expected, as I had assumed that things worked as I believed they did.

The script could also be used to test monitoring, by hitting monitored websites and checking to see if the relevant alerts or reported log records are created.  
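As a sketch of that idea, and assuming a purely hypothetical list of monitored-but-accessible sites, the script below visits each URL and records a timestamp, which can then be cross-checked against the alerts or log records the monitoring solution produces:

```python
import csv
import time
from datetime import datetime

import requests

# Hypothetical examples - substitute sites your solution monitors but does not block
MONITORED_URLS = [
    "https://monitored-site-one.example.com",
    "https://monitored-site-two.example.com",
]

def visit_and_log(urls, out_file="MonitoringTest.csv", delay_seconds=5):
    """Visit each monitored URL and record when it was hit, so the timestamps
    can be cross-checked against the monitoring solution's alerts/logs."""
    rows = []
    for url in urls:
        timestamp = datetime.now().isoformat(timespec="seconds")
        try:
            requests.get(url, timeout=10)
            result = "Visited"
        except requests.RequestException:
            result = "Request failed"
        rows.append([timestamp, url, result])
        time.sleep(delay_seconds)  # space visits out so alerts are distinguishable
    with open(out_file, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Timestamp", "URL", "Result"])
        writer.writerows(rows)
    return rows
```

As before, it would be worth warning the DSL first, since these visits will look like genuine student activity.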

Hopefully the above is helpful in providing some additional evidence from an IT perspective as to whether filtering and monitoring works as it should.

Online Safety Bill

So, the Online Safety Bill is once again back under consideration and already looking like it's getting softer.   The proposed dropping of the "legal but harmful" clause is another example of a focus on individual privacy winning out over monitoring and filtering in the interests of public, and child, safety.

Now, I understand the challenge here of balancing individual privacy and public good.   Individual privacy is enshrined in the principles of basic human rights, yet we want our governments, intelligence services, police and even schools to be able to monitor and filter content to keep people safe and to proactively identify potential threats to the lives and wellbeing of those under their care.   These are opposing points on a continuum, and each step made in one direction is usually at the expense of the other position.   More privacy means less ability to monitor and filter in the interests of public good.   More filtering and monitoring means less privacy and the risk of data being misused or leaked.

To me it is clear that there is a definite tendency towards individual privacy winning out in this argument.   Apple quietly dropping its plan to monitor iCloud accounts for Child Sexual Abuse Material (CSAM), and now the UK government looking to remove the "legal but harmful" clause, are two good examples of how privacy is winning.   I doubt this will change, at least for now, especially as more and more organisations are seeing fines and reported issues over how they manage the data of individuals.   So, what is the solution, in particular in relation to schools, where online safety is such a key and important topic and issue?

I think the key here is in establishing very clearly the need for social media vendors to look after children using their platforms.   Maybe the "legal but harmful" clause is inappropriate when applied across the general population, but surely we must be able to agree that we need to protect our children and therefore identify some of the materials which might be legal yet harmful to them.   And it isn't just the content that is the issue, but the medium and the algorithms feeding the content.   Is it right to categorise a child, where children are more impressionable, and then feed them a specific type of content constantly, in an attempt to keep them hooked on an app?   Might this not shape their world view such that they see things as rather binary, rather than reflecting the more nuanced and complex nature of the real world and real life?   Is it right to feed children almost constant streams of content, including potentially harmful content, or provide contact with unknown individuals?   We need to make the vendors consider the medium they are providing, along with their algorithms and the potential impact they have, rather than just pointing to the content as the issue which needs to be dealt with.

I will admit I saw problems with the Online Safety Bill from the outset, and even more so given it was first proposed as a draft in May 2021, over 18 months ago; in the technology world 18 months is a long time and a lot can happen, which highlights how legislation will always be playing catch-up.   My original concerns, I will admit, were more on the technical side of things.   Privacy points towards end-to-end encryption and other security solutions which then hamper monitoring and filtering.   Plus there is the challenge that social media vendors cross geographic jurisdictions, where different governments may have different motives and ethical standards for the monitoring they may require or request.   Also, any weakening of security and privacy may in turn increase the likelihood of cyber criminals gaining access to data.   So my concerns were that, although the bill might be well meaning, it would be difficult or impossible to implement effectively.

That said, something needs to be in place, and I think this is the point we have now got to: we need to accept something imperfect as a starting point and then hopefully build from there.   I will also admit that the responsibility for online safety doesn't just belong to the centralised provider of social media and other services, or to the government of the nation within which a user resides.   When we talk of online safety and children, parents and guardians also have their part to play, as do school pastoral teams, form group tutors and teachers, friends and other members of a child's wider social and family circle.   And maybe this focus on the Online Safety Bill as a single answer may actually be having a negative impact, taking our eye off the need for a wider and collective effort to keep children safe.

I suspect the solution at this point is to get the Online Safety Bill into law.   It's better than nothing, can add to the wider efforts required, and hopefully will be seen as a step in the right direction rather than an endpoint.

Online Safety – Meta/SWGfL Event

This week included a little visit to the Meta offices in London for an SWGfL event focussed on online safety.   Now, I decided to attend this event as I believe in the importance of online safety and in the wider issue of digital literacy, or digital citizenship.   I am also highly conscious of the challenges from a technology point of view, given the ongoing focus by technology vendors on individual privacy, including the use of encryption, over public good and online safety.   It was also a great opportunity to bump into Abid Patel, although he had to remind me of the need for the obligatory selfie.

Digital Literacy

During the course of the event the term digital literacy was used, which I take to mean something similar to "normal" literacy, but in terms of digital media.   Now, I don't think this term goes far enough, although I am happy for others to disagree with me on this.   For me, digital literacy may cover a user's use of technology, understanding how and when to use it, etc., but it doesn't stretch to the issues of behaviour online and the online identities we develop as we post increasing amounts of content online.   As such, my preference over the term "digital literacy" has always been "digital citizenship", where digital literacy may form a part of this.   It may seem a minor point, but for me it is an important one.

Being online

One message which was quite clear from the event was the extent to which our students are now online.   The opening session quoted a figure of 3hrs and 36mins as the average time spent online by 9-16 year olds.   If we assume 8hrs sleep, that's over 20% of a child's waking day spent online.   For weekends the figure only increases, plus it was noted that children are increasingly "multi-screening", using multiple devices such as a laptop and phone at once, thereby allowing them to consume more content in less time.   From a risk point of view, the more content consumed, the greater the risk of inappropriate or even harmful content being consumed.
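As a quick sanity check on that "over 20%" figure:

```python
online_minutes = 3 * 60 + 36    # 3hrs 36mins average time online per day
waking_minutes = (24 - 8) * 60  # waking day, assuming 8hrs sleep
share = online_minutes / waking_minutes
print(f"{share:.1%}")           # 22.5% - over a fifth of the waking day
```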

Another statistic shared identified that below 5% of internet users in 2003 were under 18, yet now the figure stands at almost 40%.   A big jump, suggesting a clear trend, again highlighting how our children and students are now highly active online.

Guidance and help

In relation to help dealing with living online, it was noted that parents were viewed as the main source of help and support for issues experienced online, with teachers taking second place.   Unsurprisingly, though, a survey of teachers noted training and the ability to keep pace with technology as two barriers to being able to properly support students online.   In relation to keeping pace with technology, I think we need to acknowledge that we can never really keep pace.   On reflection, I found myself more able to keep pace when I was a younger teacher than I am now; this may be age related, however it could equally be that the pace of tech change is now quicker than it was when I was younger.   I think the important thing here isn't necessarily knowing the answers, but being open about not knowing them and accepting that the discussion with students may itself have value.

In terms of training, this makes me think of a poster in my office regarding students never asking for professional development, or training, on using technology.   Now, I will note this statement is overly simplistic, but it aims to get across a point regarding the massive number of resources and help available online, plus the increasingly intuitive nature of [simple] apps.   Maybe we need to be more willing to "just Google it" in relation to technology?   That aside, the issue with training is where it is going to fit into the already busy curriculum and crowded workload of today's teachers.   Surely it cannot be yet another thing added, when nothing is ever subtracted, to workload?   I don't have an answer to this one; however, I think the topic needs to become something regularly discussed in staff rooms, insets, assemblies, etc.   It needs to become part of the culture, and I recognise this change may take time, at a time when technology changes occur so much faster.   So, for now, I am regularly trying to prompt discussion and thinking in relation to digital citizenship just by doing simple things such as highlighting news stories in our weekly school bulletin.   The individual effect is low; however, my hope is that over time it will build awareness and discussion.

Conclusion

The event had a fair few points of interest and things I could take away, far more than I have outlined above.   I had hoped that it might help answer the challenge of balancing the need to protect students with the prevailing narrative regarding the importance of individual privacy.   Sadly, I don't think the event provided any real answers in this area, beyond some evidence that Meta are partnering with organisations to help address the problem and that efforts are being made.   Are these efforts enough?   I am not sure there will ever be enough effort, as any single loss of life, or significant impact on the life of a young person, will always be considered sufficient evidence that more could have been done.   The fact that Meta are supportive of a programme allowing individuals, including children, to log a fingerprint of non-consensual intimate imagery, such that it can be automatically quarantined and even removed, is good news.   I actually find this interesting, given Apple seem to have allowed their proposal of scanning for Child Sexual Abuse Material (CSAM) to quietly disappear from discussion.   So maybe there is progress being made after all?

It was a useful event.   The more we can discuss the challenges, the more evident they become and the greater the chance we can seek to manage and mitigate them together.   And this is another takeaway: the event marked a number of individuals and organisations coming together to discuss the issue; this needs to continue and grow in frequency.

Thoughts on Safer Internet Day

This week included Safer Internet Day, the 8th of February, with a lot of additional posts on internet safety making their way onto social media.   I think Safer Internet Day is great for signposting resources, focusing thinking and sharing thoughts and ideas regarding online safety; however, I equally worry that it becomes a single-shot deal, signifying the one day a year when online safety receives a focus.

I have recently tended to focus on the cyber security aspect of online safety in particular, talking to students about securing their accounts, data breaches, etc.   This has largely been due to my interest in this particular area and a feeling that it is sometimes neglected, or believed to be covered through a discussion of what makes a strong password.   I think that students have found our discussions useful; however, I wonder about the overall impact where these discussions happen infrequently.   Students may listen intently, engage and even contribute, but once they return to their daily lessons and the daily requirements of study, homework, etc., I feel that the discussion of cyber security and the concepts raised may largely become lost in the sea of other information and priorities.   When they next pick up their device or sign up to a new online service, do they give thought to the presentation they received, or do they simply repeat their previous behaviours and sign up with little consideration for online safety?

One of the big challenges is how we fit digital citizenship, online safety and cyber security into the available time such that they occur regularly.   With ever increasing curriculum requirements the available time is only shrinking, and I note that seldom does the net impact of curriculum changes result in fewer things to cover.   As we use more technology in our schools, and as our students use more technology in their education and in their day-to-day lives, surely we need to spend more time discussing the risks as well as the benefits.   Surely we need to spend more time looking at how we manage ourselves in a digital world, how we manage our online identity and our personal data.   But where is this time coming from?

And this is the crunch: Safer Internet Day, which I have already acknowledged I like, may highlight the limitation of our current approach to online safety.   It feels tacked on, an additional item, rather than something core, something truly important.   We might run presentations or get guest speakers in, but all this really does is tick a compliance box.   To truly cover online safety we need something more embedded, something ongoing throughout a student's time in school or college; we need to develop a culture of online safety.   We ideally need everyone, teachers and students alike, modelling behaviours which represent good online safety.   We also need poor behaviours to be challenged and questioned.

Developing organisational culture is a long term and slow process, which in my experience is often the sum of lots of little actions taken across an organisation, adding up to a statement of "how we do things around here".   As we make greater use of technology, we need to be increasingly focussed on making sure our usage of technology is "safe".

But technology, unlike culture, moves quickly, so we have no time to waste.   I think we all need to ask ourselves: what is the online safety culture like in our school, how can we develop it, and how can we make sure it equips students with the knowledge and skills they need in this increasingly digital world?

Online Safety: Another challenge

Keeping students safe in a world of technology, where students are spending increasing time engaging with technology, and even learning via technology, is very important.   As I have written in the past, it is also becoming increasingly difficult.   Back in March 2021 I wrote about how internet filtering, something that was easy when I started out on my teaching career, is now far from easy and verging on no longer possible (Internet Filtering, March 2021).   As such, I suggested that internet filtering can no longer be considered a distinct action schools take in terms of safeguarding; instead it needs to be treated as one part of a larger process encompassing a number of stakeholders and actions, all taking place within a risk management, rather than compliance, framework.

In June I re-emphasised the above in my post, Keeping students safe in a digital world.   This time my focus was on Virtual Private Networks (VPNs) and the implications of students being exposed to TV marketing on the use of VPNs to maintain privacy.   My concern was that this would drive some students towards free VPNs, where the safety and security of data may not be as certain as the apps suggest.   It would also make it more difficult for schools to monitor student online activity in the interests of safeguarding.

Since the above June post, Apple have held their developer conference.   Apple, like a number of other device and software vendors, are being very "privacy" focussed following recent highly publicised incidents around the privacy of user data on some very well known services.   With this, Apple announced iCloud+ and their Private Relay functionality, built into iOS and providing VPN-like functionality when browsing within Safari.   This means "baked in" VPN functionality provided at the operating system level, on Apple devices such as the iPad which are widely used in schools.   Yet another challenge for online safety: Private Relay, a great facility for privacy, but another blow for school IT and safeguarding teams seeking to keep students safe online.   Now, my hope is that there will be some ability to control this functionality using a Mobile Device Management (MDM) solution; however, for now this isn't possible, and I suspect it may only be possible on "supervised" devices rather than on Bring Your Own Device (BYOD) Apple devices.   Only time will tell.

I often refer, when speaking to sixth form students, to a continuum existing between individual privacy on one side and public good and safeguarding on the other.   For schools this is the privacy of the individual student versus the school's responsibility to keep students safe, and therefore to monitor and filter online activity.   Currently the pendulum continues to move further towards individual privacy.   I wonder if this will continue or if we will eventually see some balance restored.   I also wonder whether, given the increasing ineffectiveness of the technical measures schools can put in place, the guidelines in relation to safeguarding students online need to be re-examined.

Keeping students safe in a digital world

It is becoming increasingly challenging for schools to keep students safe in a digital world.   This is largely due to the ease with which students can make use of solutions designed with privacy in mind; these technologies weren't designed with the safeguarding needs of schools in mind.   As a result, I believe we need to be increasingly pragmatic in our approach.

One big factor in keeping students safe is whether the devices being used belong to the students and their parents, or to the school.   Where they belong to the school, there is greater potential to use technology solutions to help keep students safe; however, these same solutions can easily be circumvented or removed where the devices belong to the students, e.g. where a Bring Your Own Device (BYOD) scheme is in place.   Personally, I suspect BYOD will only grow in how common it is in schools.   It is also important to note that students will bring their own devices to school regardless, likely in the form of personal mobile phones; therefore protections in place on school-issued devices are rather limited in their effect, given students can simply switch to their personal mobile phone should they not wish to be filtered or monitored.

The big reason for writing this post comes from reading a post which recommended advising students to make use of VPNs in order to keep their communications safe and secure.   From a cyber security point of view, I can understand this: using a VPN will stop someone snooping on my personal data in transit.   Thinking about it a bit more broadly, however, I believe it would be a bad idea.   Firstly, it would hamper school filtering and monitoring, which is in place largely for safeguarding reasons.   Also, although there are very good VPNs available, these tend to be paid services.   Parents and students are unlikely to want, or possibly even be able to afford, to spend money on these services, which will therefore push them towards the various free VPNs which appear so readily available.   These free VPNs may be fully malicious in nature, not being a VPN at all, or may be gathering and selling user data.   Either way, I am not sure the cure, in a free VPN, is any better than the risk.

I think schools must now look to tackle safeguarding in a digital world in a more holistic way.   It's not just down to the safeguarding and pastoral team to define the filtering of sites or access times for students, nor is it just down to the IT department to make sure firewalls and filtering are in place.   It needs to be a collective approach, where all involved discuss the risks, what they have in place, and what they can put in place going forward.   Within this, I continue to believe the principal focus needs to be on awareness rather than on seeking technology solutions, ensuring students, teachers and parents are all aware of the benefits and risks of technology use, plus how to keep themselves safe and secure online.

As privacy online continues to grow in focus, and as technology companies increasingly bake privacy and security into their solutions, keeping students safe in a digital world will only become more challenging.

EdTech Summit, Brighton

I had the opportunity to present at the Brighton ISC Digital EdTech Summit during the week.  My talk, “Common Sense Safeguarding”, focussed on the need for schools to take a broader, more risk-based view of online safety, as opposed to the previous, more compliance-driven approach.  Given the number and range of technologies students have access to, the tools available to bypass the protective measures a school puts in place, and even the ability to negate those measures entirely by using 4G, online safety is no longer as simple as it once was, and a broader view needs to be taken.

In addition, I identified that in our dealings with online safety we are not yet effectively addressing the issues which are growing with our increasing use of digital resources and services.  Cyber security, big data, profiling, artificial intelligence and bias, the ethics of IT systems and similar broad topics do not yet have a key place in the general curriculum, albeit opportunities exist across different subjects.  We need to ensure these issues are discussed with all students.  It was to that end that I proposed a cross-school discussion group focussed on digital citizenship.

Overall, my view is that we need to be more aware of the limitations of preventative measures such as web filtering, and need to focus more on user awareness and on having discussions with students regarding the wider implications of staying safe and being successful in a digital world.

If you are interested in being part of a group of schools discussing Digital Citizenship, please fill out this Microsoft Form; to access my slides from the EdTech Summit, please click here.

Digital Citizenship Questions

I think it is so important that schools ensure discussions about living in the digital world are encouraged throughout the school.  It is only by discussing the positives and negatives of the increasingly digital lives we live that we can prepare our students for the world they live in and the world yet to come.

To that end I recently started creating some slides with questions to be used as a stimulus in discussing digital citizenship.

Here is my first set of slides: Some digital citizenship questions.  I hope you find them useful, and please do let me know your thoughts and any suggestions as to how I can build on or improve them.


Keeping students safe when the dark web is so easily accessible.

I just heard about software which allows the easy setup of a website on the dark web, with little technical knowledge required and no cost beyond an internet connection.  Simple, easy and instantly anonymous.

Maintaining the safety of students online is a key part of a school’s overall efforts to safeguard students.  When I first entered teaching, this was relatively straightforward.  Students’ only access to devices in school was likely to be the PCs in the computer suites, where they had limited ability to make changes due to not having administrative access.  In addition, the school would have internet filtering in place to protect students, whose main tendency was to seek out games rather than any other inappropriate content.  I remember, as the ICT teacher in one school, regularly looking at the school’s internet statistics and reviewing the most commonly hit sites for signs of games or other inappropriate content.  It was normally games I would find, and therefore games I would block.  For those students who decided they wanted to bypass the school’s restrictions, the tools available were limited and the knowledge required to make them work was often greater than most students possessed.
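The kind of review described above amounts to simple log analysis. As a sketch only (the log format, with the requested domain in the second field, is an assumption for illustration), tallying the most commonly hit sites might look like this:

```python
from collections import Counter

def top_sites(log_lines, n=10):
    """Tally requested domains and return the n most frequently hit.

    Assumes each log line is whitespace-separated, with the requested
    domain in the second field, e.g. "2024-01-15T09:12:01 games.example.com".
    """
    counts = Counter()
    for line in log_lines:
        fields = line.split()
        if len(fields) >= 2:
            counts[fields[1]] += 1
    return counts.most_common(n)

# A reviewer would scan this ranking for games or other inappropriate
# categories before deciding what to block.
sample = [
    "2024-01-15T09:12:01 games.example.com",
    "2024-01-15T09:12:05 games.example.com",
    "2024-01-15T09:13:10 homework.example.org",
]
print(top_sites(sample, n=2))
# [('games.example.com', 2), ('homework.example.org', 1)]
```

This is the straightforward world the paragraph describes; the paragraphs that follow explain why such simple, reactive review is no longer enough.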

Fast forward around 15 years to today, and students are more aware of the content available on the internet, plus the search tools are better; as such, I suspect games are no longer the most prevalent inappropriate website category in schools.  In addition, in many schools students now come to school with their own device, either a device required by the school or a mobile phone.  The tools available to bypass school restrictions are now easily accessible, numerous and easy to use.  These tools, often aimed at supporting the right to privacy, can easily be used for other purposes, such as hiding malicious or inappropriate online activity.  I note, for example, how VPN providers can now be seen advertising their products on TV or heard on the radio.  In the last couple of days, as mentioned at the start of this post, I have also heard of the easy availability of software allowing individuals to set up websites on the dark web and anonymously share content without fear of it being traced back to them.

The technical solutions of the past, filtering and monitoring, are no longer sufficient; simply put, on their own they no longer work.  This isn’t just a school problem, it is a societal issue, and while the societal issue is beyond the scope of this post, within schools we cannot sit idly by: we need to take action.  We need to take a wide view of online safety, which, with the removal of the ICT curriculum (where these issues were often discussed and explored with students), has become increasingly difficult.  Time needs to be found to explore the issues around living in a digital world, including online safety, ethics, privacy and security; however, sadly, for now I am not sure where the space for this is in an already packed curriculum.  Given this, for me, all schools need to ask themselves what they do in relation to online safety, and what more they could do.  This is a question that should be asked at a senior level.  It is also important that schools get together, not just to share good practice but to work collectively to strike a balance between preparing students for the technological world and keeping them safe.  We are all in the same boat, and therefore maybe we need a collective approach to a collective problem.