KCSiE: Filtering and Monitoring

I was recently reviewing the new Keeping Children Safe in Education (KCSiE) update, including the main changes which relate to filtering and monitoring. I noted the specific reference to the need to “regularly review their effectiveness”, and also the reference to the DfE’s Digital Standards in relation to Filtering and Monitoring, which state that “Checks should be undertaken from both a safeguarding and IT perspective.”

The safeguarding perspective

From a safeguarding point of view, I suspect the key consideration is whether filtering and monitoring, and the associated processes, keep students safe online. So are the relevant websites or categories blocked, and do relevant staff get alerts and reports which help in identifying unsafe online behaviours at an early stage? This applies whether it is students attempting to access blocked sites, or students accessing sites which are accessible but considered a risk or an indicator, and therefore specifically monitored and reported on.

From a safeguarding perspective it is very much about the processes and how we find out about students accessing content which may be of concern, or attempting to access blocked content. From here it is about what happens next and whether the holistic process, from identification via filtering and monitoring, through reporting, to responding, is effective. Are our processes effective?

The IT perspective

From an IT perspective, in my view, it is simply a case of whether the filtering and monitoring works. Now, I note here that no filtering and monitoring solution is foolproof, so I believe it is important to acknowledge that there are unknown risks, including new technologies to bypass filtering, use of bring your own network (BYON), etc. Who would have thought a year ago about the risk of AI solutions being used to create inappropriate content or to allow students to bypass filtering solutions?

Having acknowledged that no solution is perfect, we then get to testing whether our solution works. Now, one tool I have used for this is the checking service from SWGfL, which can be accessed here. It checks against four basic areas to see if filtering is working as it should.

I, however, wanted to go a little further. To do this I gathered a list of sites which I deemed appropriate for filtering, gathering sites for each of the various categories we had considered. I then put together a simple Python script which would attempt to access each site in turn before outputting whether it was successful or not to a CSV file for review. The idea was that this script could be executed for different users and on different devices, e.g. on school classroom computers, on school mobile devices, for different student year groups, etc. The resulting output, where it matches our expectations for what should be allowed or blocked, allows us to evidence the checking of filtering from an IT perspective, and also allows us to identify where there might be any issues and seek to address them.

You can see the simple script below, where it tests for social media site access; you can simply add further URLs to the list to test them:


import requests

# URLs to test; simply add further URLs to this list as required
website_urls = [
    "https://www.facebook.com",
    "https://www.twitter.com",
    "https://www.linkedin.com",
]

# Record one result per URL in a CSV file for later review
with open("TestResults.csv", "w") as f:
    for url in website_urls:
        try:
            # A HEAD request is enough to see whether the site responds
            request_response = requests.head(url, timeout=10)
            website_is_up = request_response.status_code == 200
            print(url, website_is_up)
            if website_is_up:
                f.write(url + ",Accessible\n")
            else:
                f.write(url + "," + str(request_response.status_code) + "\n")
        except Exception:
            # A connection error suggests the filter has blocked the request
            print(url + " - Site blocked!")
            f.write(url + ",Site blocked!\n")
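If you want to make the review of the resulting CSV a little easier, a minimal sketch along the lines below could record an expected outcome against each URL and flag any mismatches for follow-up; the expected_results dictionary, category labels and file name here are purely illustrative and not part of my original script:

import requests

# Illustrative mapping of URL to (category, expected outcome); adjust to your own filtering policy
expected_results = {
    "https://www.facebook.com": ("Social Media", "Blocked"),
    "https://www.twitter.com": ("Social Media", "Blocked"),
    "https://www.bbc.co.uk": ("News", "Accessible"),
}

with open("TestResultsWithExpected.csv", "w") as f:
    f.write("URL,Category,Expected,Actual,Match\n")
    for url, (category, expected) in expected_results.items():
        try:
            response = requests.head(url, timeout=10)
            actual = "Accessible" if response.status_code == 200 else "Blocked"
        except Exception:
            # Treat a connection failure as the filter blocking the request
            actual = "Blocked"
        match = "OK" if actual == expected else "CHECK"
        f.write(f"{url},{category},{expected},{actual},{match}\n")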


Now, the above may need to be changed depending on how your filtering solution works. I did consider looking at the URL for our blocked page; however, as the above worked, I didn’t have to. My approach focused on the return codes, but if you do need to work with an error page URL, I suspect this article may be of some help.
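For completeness, if your filtering solution serves a block page rather than dropping the connection, a rough sketch along these lines might help; the block page address below is purely an example and would need replacing with whatever your own solution actually redirects to:

import requests

# Example only: replace with the address of your own filter's block page
BLOCK_PAGE_MARKER = "blocked.examplefilter.local"

def is_blocked(url):
    """Return True if the request ends up on the filter's block page."""
    try:
        # Follow redirects so we can see the final URL the filter sends us to
        response = requests.get(url, timeout=10, allow_redirects=True)
    except Exception:
        # A dropped connection is also treated as blocked
        return True
    return BLOCK_PAGE_MARKER in response.url

print(is_blocked("https://www.facebook.com"))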

Conclusion

Before I used the script for the first time I made sure the DSL was aware; I didn’t want to cause panic over a test student account which appeared to be hitting lots of inappropriate content over a short period of time, and in sequential order. The script then provided me with an easy way to check that what I thought was blocked was being blocked as expected. As it turned out, there were a few anomalies, some relating to settings changes and others to changes to websites and mis-categorisation. As such, the script proved to be a little more useful than I had initially expected, as I had assumed that things worked as I believed they did.

The script could also be used to test monitoring, by hitting monitored websites and checking to see if the relevant alerts or reported log records are created.  
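As a rough sketch of how that might look, assuming you simply need timestamps to cross-reference against whatever alerts or logs your monitoring solution produces, something like the following could work; the site listed and the file name are just placeholders:

import requests
from datetime import datetime

# Placeholder list of sites which your monitoring solution is expected to flag
monitored_urls = [
    "https://www.example.com",
]

with open("MonitoringTestResults.csv", "w") as f:
    f.write("Timestamp,URL,Result\n")
    for url in monitored_urls:
        timestamp = datetime.now().isoformat()
        try:
            requests.head(url, timeout=10)
            result = "Request sent"
        except Exception:
            result = "Connection blocked"
        # The timestamp lets you look for a matching alert or log record in the monitoring system
        f.write(f"{timestamp},{url},{result}\n")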

Hopefully the above is helpful in providing some additional evidence from an IT perspective as to whether filtering and monitoring works as it should.

Author: Gary Henderson

Gary Henderson is currently the Director of IT in an Independent school in the UK. Prior to this he worked as the Head of Learning Technologies working with public and private schools across the Middle East. This includes leading the planning and development of IT within a number of new schools opening in the UAE. As a trained teacher with over 15 years working in education, his experience includes UK state secondary schools, further education and higher education, as well as experience of various international schools teaching various curricula. This has led him to present at a number of educational conferences in the UK and Middle East.
