Online Safety Bill

So, the Online Safety Bill is once again back under consideration and already looks like it's getting softer. The proposed dropping of the “legal but harmful” clause is another example of a focus on individual privacy winning out over monitoring and filtering in the interests of public, and child, safety.

Now I understand the challenge here of balancing individual privacy and public good. Individual privacy is enshrined in the principles of basic human rights, yet we want our governments, intelligence services, police and even schools to be able to monitor and filter content to keep people safe and to proactively identify potential threats to the lives and wellbeing of those under their care. These are opposing points on a continuum, and each step in one direction usually comes at the expense of the other. More privacy means less ability to monitor and filter in the interests of public good. More filtering and monitoring means less privacy, and a greater risk of data being misused or leaked.

To me it is clear that there is a definite tendency towards individual privacy winning out in this argument. Apple quietly dropping its plan to monitor iCloud accounts for Child Sexual Abuse Material (CSAM), and now the UK government looking to remove the “legal but harmful” clause, are two good examples of how privacy is winning. I doubt this will change, at least for now, especially as more and more organisations face fines and public scrutiny over how they manage individuals' data. So what is the solution, particularly in relation to schools, where online safety is such a key and important issue?

I think the key here is in establishing very clearly the need for social media vendors to look after children using their platforms. Maybe the “legal but harmful” clause is inappropriate when applied across the general population, but surely we can agree that we need to protect our children, and therefore identify some of the materials which might be legal yet harmful to them. And it isn't just the content that is the issue, but the medium and the algorithms feeding the content. Is it right to categorise a child, who is more impressionable than an adult, and then constantly feed them a specific type of content in an attempt to keep them hooked on an app? Might this not shape their world view such that they see things in binary terms, rather than appreciating the more nuanced and complex nature of the real world and real life? Is it right to feed children almost constant streams of content, including potentially harmful content, or to provide contact with unknown individuals? We need to make the vendors consider the medium they are providing, along with their algorithms and the potential impact they have, rather than just pointing to the content as the issue which needs to be dealt with.

I will admit I saw problems with the Online Safety Bill from the outset, and even more so given it was first proposed as a draft in May 2021, over 18 months ago. In the technology world 18 months is a long time and a lot can happen, which highlights how legislation will always be playing catch-up. My original concerns, I will admit, were more on the technical side of things. Privacy points towards end-to-end encryption and other security solutions which then hamper monitoring and filtering, plus there is the challenge that social media vendors cross geographic jurisdictions, where different governments may have different motives and ethical standards for the monitoring they may require or request. Also, any weakening of security and privacy may in turn increase the likelihood of cyber criminals gaining access to data. So my concerns were that, although the bill might be well meaning, it would be difficult or impossible to implement effectively.

That said, something needs to be in place, and I think this is the point we have now got to: we need to accept something imperfect as a starting point and then hopefully build from there. I will also admit that the responsibility for online safety doesn't just belong to the centralised provider of social media and other services, or to the government of the nation within which a user resides. When we talk of online safety and children, parents and guardians also have their part to play, as do school pastoral teams, form tutors and teachers, friends and other members of a child's wider social and family circle. And maybe this focus on the Online Safety Bill as a single answer may actually be having a negative impact, taking our eye off the need for a wider and collective effort to keep children safe.

I suspect the solution at this point is to get the Online Safety Bill into law. It's better than nothing, can add to the wider efforts required, and hopefully will be seen as a step in the right direction rather than an endpoint.

Big Tech and balance?

Within the technology space there are now a small number of hugely powerful players.   These players, including Microsoft, Google, Amazon, Meta (previously Facebook) and Apple, are now so dominant that their impact is felt beyond the technology space.   With this comes some advantages, but as I have often written, we live in a world of balances, and therefore there are also some potential risks or drawbacks.

Writing this blog piece came as a result of reading an article about Sony and the PlayStation brand, a large and powerful player within the gaming space, being sued for breaching anti-competition laws: using its powerful position to apply pressure to games developers and publishers, driving up game prices and therefore profits. You can read the article here. We have previously seen similar lawsuits levelled against both Google and Amazon in relation to their shopping platforms either favouring suppliers or brands based on their relationship with Google/Amazon, or, in the case of Amazon, favouring their own-brand products. In the case of Google/Amazon the concern relates to the power resulting from providing the search functionality for users while also either providing products themselves or providing advertising services to brands/suppliers.

And this isn't the only risk in relation to these big players. In the case of Microsoft, Google, Amazon and Apple, they store our data for us through Google Drive, OneDrive or iCloud. Where this is free storage it is convenient for us, but if we aren't paying for the service, how are the ongoing costs being covered? Recently, France suggested that schools not use the free services of Google or Microsoft for this reason.

It may be that in using their services for search, for purchasing items, for music, etc., they gather data about us. As the large players that most of us interact with regularly, they will be gathering huge amounts of data about us, which they can then use to profile and predict our behaviours. Now we might accept that they do this for good reasons, such as improving their services, however given that some of their corporate practices have been questioned, it may also be reasonable to consider that they could seek to misuse this data. And in the case of those services supported by advertising revenue, it would be easy to see how they might use the data to influence our decision making; and that's before you consider the possibility of these services themselves suffering a data breach, resulting in all this data being leaked onto the public internet.

There is also the issue of truth. In the case of Google and Facebook, which allow users to access news and other current affairs information, they control the information they present to users. How do we know that they are presenting the “right” information? (I note that establishing what is “right” or “the truth” is a problem in itself, however it is outside the scope of this short post.) How confident are we that the information being presented to us is free of bias? Do the algorithms present sufficiently broad viewpoints, or just a singular viewpoint, the one the algorithm thinks we want to hear? In trying to keep us engaged with the platform, do the algorithms tend to present only viewpoints we are likely to agree with, thereby creating echo chambers and online binary arguments?

The significant issue here is that we haven't been through this kind of technological change ever before in history. Yes, we had the invention of the printing press, of radio and of TV, but these didn't impact society with quite the same pace of change as the combination of smartphones, internet access and social media. The difference in pace is easily observed in the rate of adoption, with TV taking 22 years to reach 25% market adoption while Facebook took only 2 years. We are now in a situation where so many of us are carrying an internet-enabled device in our pockets and regularly interacting with apps, including search and social media, where these apps and their underlying algorithms are constantly gathering data in order to hone and adjust the content they serve us.

Now I know when I talk to students they don't want to give up the convenience of Google search or Amazon for shopping, or the interesting content, including that from friends and family, provided by TikTok, Instagram, Snapchat, etc. I will admit I am equally reluctant and would find not having Google and Twitter difficult.

So what is the answer?    

Well, I think the answer is simply to discuss and acknowledge that these services, and the vendors that provide them, Google, Amazon, Apple, Microsoft, Meta, etc., provide us with beneficial solutions, but that, as in most things, there is a balance. We need to be aware of this balance, and we need to discuss it with students so that they know the drawbacks and risks associated with the vendors and solutions we now so commonly use. It may be that our current technology revolution resolves itself much like the TV, radio and printing press of the past, however in case it doesn't, I think we need to develop our overall awareness of the risks.

Online Safety – Meta/SWGfL Event

This week included a little visit to the Meta offices in London for an SWGfL event focussed on online safety. Now, I decided to attend this event as I believe in the importance of online safety and in the wider issue of digital literacy, or digital citizenship. I am also highly conscious of the challenges from a technology point of view given the ongoing focus by technology vendors on individual privacy, including the use of encryption, over public good and online safety. It was also a great opportunity to bump into Abid Patel, although he had to remind me as to the need for the obligatory selfie.

Digital Literacy

During the course of the event the term digital literacy was used, which I take to mean something similar to “normal” literacy, but in terms of digital media. Now, I don't think this term goes far enough, although I am happy for others to disagree with me on this. For me, digital literacy may cover a user's use of technology and their understanding of how and when to use it, but it doesn't stretch to the issues of behaviour online and the online identities we develop as we post increasing amounts of content online. As such, my preference over the term “digital literacy” has always been a focus on “digital citizenship”, of which digital literacy may form a part. It may seem a minor point, but for me it is an important one.

Being online

One message which was quite clear from the event was the extent to which our students are now online. The opening session quoted a figure of 3 hours and 36 minutes as the average time spent online by 9-16 year olds. If we assume 8 hours of sleep, that's over 20% of a child's waking day spent online. For weekends the figure only increases, plus it was noted that children are increasingly “multi-screening”, using multiple devices such as a laptop and phone at once, thereby allowing them to consume more content in less time. From a risk point of view, the more content consumed, the greater the risk of inappropriate or even harmful content being consumed.
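That “over 20%” figure is a straightforward bit of arithmetic, sketched below; the 8-hour sleep assumption is the one made above, not part of the quoted statistic:

```python
# Average daily time online for 9-16 year olds, as quoted at the event.
online_minutes = 3 * 60 + 36     # 3 hours 36 minutes = 216 minutes

# Assume 8 hours of sleep, leaving 16 waking hours in the day.
waking_minutes = 16 * 60         # 960 minutes

share = online_minutes / waking_minutes
print(f"{share:.1%}")            # 22.5% of the waking day
```

Add in weekend increases and multi-screening, and the real share of a child's attention is likely higher still.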

Another statistic shared identified that under 5% of internet users in 2003 were under 18, yet now the figure stands at almost 40%. A big jump, suggesting a clear trend, and again highlighting how our children and students are now highly active online.

Guidance and help

In relation to help dealing with living online, it was noted that parents were viewed as the main source of help and support for issues experienced online, with teachers taking second place. Unsurprisingly though, a survey of teachers noted training and the ability to keep pace with technology as two barriers to being able to properly support students online. In relation to keeping pace with technology, I think we need to acknowledge that we can never really keep pace. On reflection, I found myself more able to keep pace when I was a younger teacher than I am now; this may be age related, however it could equally be technology related, in that the pace of tech change is now quicker than it was when I was younger. I think the important thing here isn't necessarily knowing the answers, but being open about not knowing the answers and accepting that the discussion with students may itself have value.

In terms of training, this makes me think of a poster in my office noting that students never ask for professional development, or training, on using technology. Now, I will note this statement is overly simplistic, but it is aimed at getting across a point regarding the massive number of resources and help available online, plus the increasingly intuitive nature of [simple] apps. Maybe we need to be more willing to “just Google it” in relation to technology? That aside, the issue with training is where it is going to fit into the already busy curriculum and crowded workload of today's teachers. Surely it cannot simply be yet another thing added to workload, when nothing is ever subtracted? I don't have an answer to this one, however I think the topic needs to become something regularly discussed in staff rooms, insets, assemblies, etc. It needs to become part of the culture, and with this I recognise it may take time for the change to occur, at a time when technology changes occur so much faster. So, for now, I am regularly trying to prompt discussion and thinking in relation to digital citizenship just by doing simple things such as highlighting news stories in our school weekly bulletin. The individual effect is low, however my hope is that over time it will build awareness and discussion.

Conclusion

The event had a fair few points of interest and things I could take away, far more than I have outlined above. I had hoped that it might help answer the challenge of balancing the need to protect students with the prevailing narrative regarding the importance of individual privacy. Sadly, I don't think the event provided any real answers in this area, beyond some evidence that Meta are partnering with organisations to help address the problem, and that efforts are being made. Are these efforts enough? I am not sure there will ever be enough effort, as any single loss of life, or significant impact on the life of a young person, will always be considered sufficient evidence that more could have been done. The fact that Meta are supportive of a programme allowing individuals, including children, to log a fingerprint of non-consensual intimate imagery, such that it can be automatically quarantined and even removed, is good news. I actually find this interesting given Apple seem to have allowed their proposal of scanning for Child Sexual Abuse Material (CSAM) to quietly disappear from discussion. So maybe there is progress being made after all?

It was a useful event. The more we can discuss the challenges, the more evident they become, and the greater the chance we can seek to manage and mitigate them together. And this is another takeaway: the event marked a number of individuals and organisations coming together to discuss the issue, and this needs to continue and grow in frequency.

Online safety: are we mitigating the risks?

I think few would argue that the online safety risks which students are exposed to these days have gone down. But the big question is: has the effort of schools in protecting students changed in step with this increased risk exposure?

But first some good news

Before I go any further, I need to be clear that this post looks very much at the negative side of things in relation to online safety, and in doing so I run the risk of painting a purely negative picture. I therefore think it's important to point out the positives of technology. Communication, collaboration, friendship and many more areas of life can benefit from appropriate use of online technologies. An Ofcom 2022 report identified that 80% of the children surveyed used online services to find support for their wellbeing, that 53% felt being online was good for their mental health, and that 69% of children thought being online helped them feel close to their friends and peers. It is important that we appreciate these positives, as for me this highlights that the focus should not be on blocking and filtering, which is increasingly ineffective, but on discussion and engagement with students around risks and behaviours.

New Apps and Technologies

And now for the risks. I would suggest most students now have mobile phones with internet access, with access to apps such as Snapchat, Instagram and the very popular TikTok. The Ofcom survey found that 90% of children owned a mobile phone by the time they reached the age of 11. This access to technology, and to an ever-changing and evolving app space, represents a risk, given the explosion of inappropriate content and contacts which students can access via the device in their pockets. As adults and educators we cannot truly know the implications, and this is important to acknowledge, as the situation when we were children was significantly different. There is also a risk in the increasing use of AI or machine learning within apps to feed users the content they appear interested in, reinforcing these interests or curiosities even when exposure to such content may be inappropriate or even dangerous.

Pandemic

The pandemic accelerated things, pushing everyone more online than ever before, as we had to learn through online contact with teachers, maintain relationships with friends and families through online solutions, and occupy our time without leaving our homes, a need which online games and other platforms were all too happy to address. It wasn't so much a case of “should we” engage with technology, online tools and online spaces, but a case of what other choice did we have. This has increased both the need to use technology and its actual use, including all of its benefits but also its risks.

IT Curriculum

We have also seen a decrease in time in schools where digital citizenship, its risks and issues, can be discussed. Yes, online safety should appear across the curriculum and as part of keeping children safe in education, however there are lots of other competing topics and requirements. Previously the GCSE IT provided an opportunity for specific time to be allocated to discussions of digital citizenship and online safety, however with its removal this opportunity has been lost. Now, some may say the Computer Science GCSE is still available, however it doesn't have the same number of students studying it, plus as a subject it has a decidedly different slant to the old GCSE IT, one which doesn't lend itself to quite as much discussion of digital citizenship. I will note the GCSE IT wasn't without its problems as a course, however I feel a redesign would have helped rather than its removal. Looking forward, I see a similar risk of lost opportunity in the planned defunding of the BTec qualifications, which include a number of IT qualifications.

Conclusions

I think all schools will likely be able to point to what they do in relation to online safety. My concern though is that this hasn't changed much over the years. Celebration of internet safety day, annual talks or presentations, digital councils and/or digital leaders meetings involving students, etc.: these are not new, yet our students' exposure to technology and its risks has grown significantly, and even more so over the last two or three years, driven by the pandemic. The risks are growing yet the mitigation measures largely remain the same. There is a clear imbalance.

I think one of the biggest challenges continues to be time. The curriculum is already full of content and various competing requirements, most of which offer value. The question therefore is one of identifying where there is the greatest value, and I would advocate that time allocated to digital citizenship is critical. The challenge here is that I don't feel education is particularly good at this prioritisation, instead trying to do everything, which in turn causes workload issues, greatly subdivided focus and other problems.

Technology use is only going to increase, so the more we can prepare our students, and get them to evaluate and consider how, when and where they use technology, the better. Digital citizenship needs to occupy a bigger part of students' studies, both preparing them for the future and equipping them to deal with technology risks, now and going forward.

Deleting TikTok (again!)

I recently deleted TikTok from my phone for what I think is the third time. The issue is I find myself rather hopelessly flicking through the videos, particularly the funny pet videos and comedy videos. Normally it is in a moment of spare time that I think it is worth having a look at TikTok, however time then seems to fade away as I get engrossed, and the couple of minutes of video viewing turns into 30 minutes or more.

Why does this happen?

I am not a psychologist or sociologist or other “ist” who can provide a scientific theory on this but I would like to share my own ideas on why this happens.    Firstly, I think part of it is the multi-sensory nature of TikTok, with visual and audio content from the videos themselves, combined with the tactile nature of flicking through the videos.   I also think the act of flicking through the videos helps to keep people engaged due to requiring user action.

There is also the very purposely designed short nature of the videos, often with a conclusion or series of amusing events.    The short nature of videos limits the requirements for focus or concentration, while the conclusion is likely to deliver the fun or pleasurable moment at the end of the video. 

So, a limited amount of focus is needed, combined with near-instant gratification: either I engage with the video, or I swipe on to the next one. Basically, TikTok's design is to be addictive and habit-forming, offering little cognitive load while delivering enjoyment at the conclusion of every short video.

Why delete it?

There is a lot of talk of how social media companies should be responsible and look at how addictive their platforms are, especially for younger users, however I also think we all, as individuals, need to take some responsibility. In my case, my approach is to delete the app, as I can't trust myself to use TikTok sparingly. I could alternatively make use of app timers or similar to limit my usage of the app to a certain amount of time per day, however given the overall value of TikTok to me, I have decided that this isn't appropriate. I will note, however, that I suspect young users will find such an act of self-discipline even more difficult than I found it.

I think we need to acknowledge that the key aim, from a business perspective, of the various social media applications, including TikTok, is to maximise the number of people on their platform and the time those people spend on it. As such, it pays to make it addictive.

Conclusion

I am not sure my life will be that much worse off without TikTok, but in a world where we often complain of not having enough time, and where we cannot invent or create more time (note: I am not sure that having 26 hours in a day would provide much benefit, as I suspect our current activities would just grow to fill the additional space), being able to free up some time by preventing myself from going down the TikTok rabbit hole can only be a good thing.

Or at least until I suffer a moment of boredom, depression or just simply human weakness, and reinstall TikTok once more, just to make sure I don’t miss out on a cat falling off the back of someone’s couch, or a dog comically bounding into a swimming pool.

A need for wellbeing and digital citizenship

If the news shapes our view of the world what has the last few years done for our students?

The last few years have been rather turbulent. First there was Brexit, and the binary views which sprung up around it: you were either for the EU or against it, with little room for any balanced middle ground. Next, the news was filled with the pandemic, with nightly figures for deaths and infections. Again, there were binary views around government measures to reduce infection rates and to encourage vaccination. And more recently we have moved on to the war in Ukraine, with massive numbers of refugees exiting Ukraine while the fighting and bombing continue.

In each case our news was filled with interviews, videos and other content regarding the issues at the time, with the news on Brexit, Covid and Ukraine drowning out the other news.   Social media was equally awash with content on each topic as it arose.

And for students consuming content via social media, via Instagram, Snapchat, TikTok and the like, the news was all the more enveloping of their lives. Technology should be keeping us better informed, which seems like a good thing, but it might instead be overwhelming us, and influencing us, which is not such a good thing.

Then we have the issue of fake news, which is reasonably easy to evidence with Ukraine, where footage and images have been shared online reportedly showing events in the war, but where the actual source is previous conflicts and, in one case, even footage taken from a video game. With students consuming quick content, short videos or images rather than more detailed reporting, you have to wonder how often the source of the content is properly considered. I will admit myself that, when accessing the likes of TikTok, I may not be as critical of the content as I might be had I consumed it through another medium which didn't present things as bite-sized content where swiping through is encouraged.

We also have the issue of social media being purposefully used to manipulate the public, which is linked to the fake news mentioned above. This involves more targeted messaging and fake news designed specifically to manipulate the narrative, with the Cambridge Analytica scandal coming immediately to mind. There were many discussions of this kind of manipulation of the public via social media during Brexit, and also during the US presidential election which took place around the same time.

Looking at the above, it suggests that, if the news does shape our view of the world, students' views of the world might be that little bit bleaker than they once were. They might also be that little bit more susceptible to manipulation and influence than previously.

So, what can we do?

Two thoughts jump to mind, with these being the need to increasingly consider wellbeing and also the need to consider digital citizenship.

Wellbeing, for me, isn't a bolt-on; it is central to our lives. Sometimes our wellbeing will be good, and sometimes, when things are hard, it will be not so good. The key therefore is the ongoing process of managing our wellbeing: our physical, mental and emotional wellbeing, to be a bit more exact. And this requires a greater awareness of the status of our own wellbeing and of what we can do to influence it positively. Now, I don't think anyone ever taught or advised me of this; I think I picked it up through experience, plus a bit of reading around the subject. But I believe that in this fast-paced world we have a responsibility to provide some support and teaching in this area for our students, which I know many schools already do. I just think we can never quite do enough here, so we need to be constantly searching to improve and do more, with this more important than ever before.

The second area which comes to mind is digital citizenship, something I have long been harping on about. The world we live in is technology driven, so we need students to be more aware of the positives but also the drawbacks. They need to see the balance which exists in using technology, and to see the extreme positives and extreme negatives through a pragmatic lens rather than the magnifying lens of social media. Students need to understand the implications its use has on them and on the world, and how they can manage this, plus they need to be alert as to how others may use technology for their own gain.

Conclusion

It has been a difficult few years and there is no getting away from that.   These difficult times will have impacted on our view of the world, and on our wellbeing.    I think in general we are all that little bit more anxious than we were 2 or 3 years ago.    The key though, is how we manage the situation and move forward.   The key is resilience and agility to push through the difficulties and then drive forward to better things.

Is someone watching me?

The BBC recently posted an article in relation to remote workers being monitored in terms of their use of technology when at home (you can read the article here). Obviously, this issue has largely become pertinent given the pandemic and the various lockdowns which have resulted in individuals, including teachers, having to work from home. The thought of your employer, school leadership or IT staff monitoring what you are doing seems “creepy”, inappropriate and an invasion of personal privacy, but is it that simple?

A world of tracking

Before I look at remote working, let's first consider the work devices used within a school and the monitoring that may be possible. Within a school, especially a larger school, it is likely that school devices will have remote support software installed, which allows IT staff to remotely access a device in order to provide assistance without the need to actually visit the computer in question. All well so far. However, this functionality means it would be possible for IT staff to watch your screen and every action: every word typed, every social media interaction. Now that sounds creepy already, and we are only on school-owned devices!

Your email and internet activity are also recorded. For school email this likely means your emails are accessible by IT teams, both for support purposes and for compliance with GDPR legislation, to resolve Subject Access Requests, etc. In terms of internet activity, although most data to and from websites is now encrypted, the timing of site visits, the sites visited, the device used, etc. are all recorded. And this happens irrespective of whether you use a school or personal device, when connected to the internet via the school's infrastructure.

The above hints at the huge logs generated wherever IT systems are used, whether this be accessing the school's management system from a school PC in a classroom, or accessing MS Teams to deliver an online lesson from home. As soon as we access the system, information such as the device name, device type, username, time, IP address, etc. is all logged. And from this data further data can be generated, such as your IP address allowing geographical information to be identified, albeit this isn't always reliable. So, some form of tracking and/or monitoring will always be possible.
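To illustrate the kind of metadata a single access event can generate, here is a minimal sketch; the field names and JSON log format are purely hypothetical, not taken from any particular school system:

```python
import json
from datetime import datetime

# Hypothetical access-log line: the field names are illustrative only.
sample_line = json.dumps({
    "time": "2022-11-01T09:15:00+00:00",
    "username": "jsmith",
    "device_name": "LAB1-PC07",
    "device_type": "desktop",
    "ip_address": "10.20.1.57",
    "service": "MS Teams",
})

def parse_log_entry(raw: str) -> dict:
    """Parse one JSON log line, converting the timestamp to a
    datetime so entries can be filtered or sorted by time."""
    entry = json.loads(raw)
    entry["time"] = datetime.fromisoformat(entry["time"])
    return entry

entry = parse_log_entry(sample_line)
print(entry["username"], entry["time"].hour)  # jsmith 9
```

Even this toy entry shows how much one access event reveals: who, when, from which device, and from where, before any derived data such as geolocation is added.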

But what does it mean?

My view on this whole situation is that tracking/monitoring is unavoidable.   Data will be, and must be, gathered for the purposes of troubleshooting, auditing, legal compliance, etc.   So, the question becomes: how do we manage the risk associated with the existence of this data?   And as to the ability to access and monitor a specific user's machine, and view their screen, this needs to be possible in order to provide support, so again it is about managing risk.

I think one of the key issues is that of transparency: acknowledging that data which could be used for tracking or monitoring purposes exists, and that remote access and screen viewing are also possible.   In doing so it is also important to be clear on the acceptable uses of this data and these remote access solutions, such as their use in troubleshooting.   In relation to remote access software, I also think it is important to have clear protocols around usage and privacy, such as a requirement for users to approve access before anyone connects to a machine they are currently using.   Access should also be limited on the basis of “least privilege”, such that only those who truly need access, and have a valid reason for it, actually have access.
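The “least privilege” idea above can be sketched very simply: access is denied by default, and only permissions which have been explicitly granted are allowed.   The role and permission names below are hypothetical, invented for illustration; this is not a real access-control system:

```python
# Default-deny permission check: a role can only perform actions it has
# been explicitly granted. Role and permission names are hypothetical.
PERMISSIONS = {
    "helpdesk": {"remote_view_with_consent"},
    "sysadmin": {"remote_view_with_consent", "read_access_logs"},
    "teacher": set(),
}

def can(role, action):
    """Return True only if the role has been explicitly granted the action."""
    return action in PERMISSIONS.get(role, set())

print(can("helpdesk", "remote_view_with_consent"))  # True: explicitly granted
print(can("helpdesk", "read_access_logs"))          # False: never granted
print(can("governor", "read_access_logs"))          # False: unknown role
```

The design choice that matters is the default: anything not explicitly granted is refused, rather than anything not explicitly forbidden being allowed.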

For me, policy plays a key part in all of this.   Your Acceptable Usage Policy should clearly indicate the creation of data and the potential for monitoring, along with stated limitations as to where it can and cannot be used.   Additionally, I believe IT staff and those with admin access to large amounts of data, or to sensitive data, should agree to a high-level access agreement which sets out additional requirements regarding their privileged access, plus sets out the higher penalties for misuse which come with increased responsibility.

Conclusion

As always, the newspaper article is a little sensationalist.   The reality isn't that simple.   Tracking and monitoring are possible, but they are the result of systems designed to support users, to ensure systems are robust and reliable, and to ensure legal compliance, rather than to invade individuals' privacy.   As such the key thing is transparency and trust, with a little bit of policy thrown in just in case.

Social media: To legislate to control?

A lot has been made of online abuse and the need for social media companies to better monitor and police their platforms.  A lot has also been made of the potential need to legislate in relation to online abuse, but how easy, or not, would this be to achieve?

The internet

One of the big challenges is the internet itself and its distributed design.   It is designed such that no one user, company or even country has control.   It represents a single solution which crosses the national boundaries of most, if not all, countries in the world, giving everyone the potential to use and impact the internet.   This represents a particular challenge when looking at legislation.   A government might say that all platforms accessible in their country must abide by their legislation, but what teeth do they have to enforce this when the company is based in another country?   And how do you stop users simply using tools such as VPNs to bypass local restrictions?   Just one look at online forums for expats living in countries with significant national filtering in place will highlight discussions of VPNs and other tools which can be used to bypass restrictions and the legislation those restrictions are employed to enforce.   Or do a little digging into the ongoing piracy of video content and you will see this is a continuing problem despite years of efforts to stem it.

Cyber security

If policing were to be properly established, governments would need to be able to identify the users in country, their online identities, plus their online activities.   This has issues in relation to privacy and the safety of whistle-blowers and activists, which I will cover shortly, but it also represents a cyber security risk.   Such a database would be an enticing target for cyber criminals as a source of information which could be used for identity fraud and common fraud, but also for blackmail or even attempts at coercion or subtle behaviour modification.   And we have already seen national identity databases in other countries fall victim to data breaches.

Anonymity

Although anonymity is often cited as one of the reasons online abuse is so common, there is a genuine need for it.   Activists and whistle-blowers rely on anonymity for their own personal safety.   Dissidents in countries with authoritarian governments need anonymity.   There is also the concern that once a database of online user identities, tied to real-world identities, plus online activity is created, albeit for good reasons, it might be used for less ethical or moral purposes in the future, or that its use might have inappropriate but unintentional consequences.   And this is before we consider the technical possibility of removing anonymity in the first place, something which, given the internet's design, is fraught with difficulties, including easy ways for users to bypass restrictions.

In relation to anonymity, although this feels like a key factor in online abuse, in my experience a large amount of the abuse is actually committed from users' principal online accounts, those most likely to be traceable back to a real-life person.   The abuse occurs either as a result of joining a crowd, of being or feeling empowered by others to be abusive, or of simply going too far, spurred on by the ease and apparent lack of immediate consequence when using social media.   As such, maybe the issue of anonymity is a bit of a red herring.

Conclusion

I continue to see a lot of what occurs on social media as an amplification of the real world and society.   It is just that this amplification is that bit starker in its display of the ugliness which can occur in society.   I will, however, counterbalance this to some extent with how social media sometimes presents the very best we as a race have to offer.   I suspect a key reason for this amplification is that social media removes some of the risk factors and adds ease.   It is easy to be abusive to someone online, especially when you know they aren't likely to punch you in the face as they might do in real life.   It is also easy to be supportive, helpful and vulnerable away from the potential embarrassment which may occur face to face.   It is, however, worth noting how very far we have come as a society compared with 100 years, or even 10 or 20 years, ago.   It is just that social media continues to amplify the small minority who have not progressed to the same extent.

So, what are we to do about this?

I don’t have an answer other than to suggest we need to be aware of the amplification, be aware of others' feelings and views, and be generally nicer to one another.   And I know that sounds a little soft and wishy-washy, but I am not sure what more I can suggest.   Sadly, we also need to accept that the abuse emanating from the minority will likely continue, and we need to continue to take the little steps we can in challenging and sanctioning such individuals.   This will likely need to continue as little steps, one abusive user or group at a time; a leap to ban anonymity or heavily legislate social media is unlikely to be successful.

Social Media – A magnifier on society

Social Media acts as a magnifier on society.   This can both be a good thing and a bad thing.   In a good way it allows the quiet masses to have a voice and to express their opinion.   Before social media these people would not stand up or write an article in a newspaper or otherwise be able to express their views publicly.   Now they can easily like or share those posts they agree with, adding their voice to the message.   And if feeling strongly they can even add their own comments and thoughts reasonably safe in the knowledge that their voice won’t stand out.  We have seen this over the last few days as messages rejecting racism have been liked and shared in their thousands.   Social media has enabled a larger part of the population to contribute to the collective voice online.

But there is a flip side to this.  Social media provides a platform for a minority of people to share inappropriate comments with the masses, including racist views.    Prior to social media these people might have expressed the same racist views in public, but they never had much of an audience and the message never got very far.   Now, with social media, they can share their views instantly with millions of people.   They also feel safe in the knowledge that identifying them, where they have taken precautions, is not easy and therefore their comments are likely without consequence.    Social media has enabled this minority to engage a larger part of the population with their inappropriate messaging.

For me racism has no place in today's society and should be called out and challenged at every opportunity.

I would, however, highlight an additional concern in relation to viewing society through social media, through the magnifier of social media, and how this can result in a distorted view of society.   Social media, to me, suggests that racism is more prevalent, based on the large number of social media posts calling out racism, and by extension the suggestion of a larger number of racist tweets.   I am not sure, based on my experiences, that it is more prevalent.   I suspect availability bias is playing a part here.   I believe I heard racist comments more frequently when I was younger than I do now, so this might at least suggest we are heading in the right direction, albeit we can never stop until racism has been eliminated.

I also have concerns about the viral nature of social media, which can lead to massive outpourings of support or concern, but only for a short period of time, followed by people moving on to the next viral message.   Racism is linked to culture, and culture is changed gradually through consistent changes in behaviours, the stories that are told, etc.   Viral but short-lived messaging is likely to do little to impact culture and the prevalence of racism; it is only prolonged and consistent changes in behaviour and messaging which will have this effect.   I personally started questioning the taking of the knee at the start of football events as being a little bit of tokenism; however, considering it again, maybe the consistent message conveyed is what we continue to need in the hope of long-term change.

Social media, for me, isn't the problem here, but it magnifies and possibly distorts it.   I am concerned that in seeking to address the issue at hand, currently racism in particular, we focus on social media and the social media companies.   Yes, they need to do all they can, and possibly more than they are doing, but the issue is a societal one, not a technology one.   Technology is just making it more visible, while perhaps distorting the situation in the process.

As such, I think the key here is greater awareness of how social media fits into situations like this: how social media doesn't just report and share news, but how its very use shapes the news and the message being shared.   I hope this post contributes a little to this awareness.

Online Safety: Another challenge

Keeping students safe in a world of technology, where students are spending increasing time engaging with technology, and even learning via technology, is very important.   As I have written in the past, this is also becoming increasingly difficult.   Back in March 2021 I wrote about how internet filtering, something that was easy when I started out on my teaching career, is now far from easy and verging on no longer possible (Internet Filtering, March 2021).   As such, I suggested that internet filtering can no longer be considered a distinct action schools should take in terms of safeguarding; instead it needs to be treated as one part of a larger process encompassing a number of stakeholders and actions, all taking place within a risk management, rather than compliance, framework.

In June I re-emphasised the above in my post, Keeping students safe in a digital world.   This time my focus was on Virtual Private Networks (VPNs) and the implications of students being exposed to TV marketing of VPNs as a way to maintain privacy.   My concern was that this would drive some students to use free VPNs, where the safety and security of data may not be as certain as the apps suggest.   It would also make it more difficult for schools to monitor student online activity in the interests of safeguarding.

Since the above June post, Apple have held their Developer Conference.   Apple, like a number of other device and software vendors, are being very “privacy” focussed following recent highly publicised incidents around the privacy of user data involving some very well-known services.   With this, Apple announced iCloud+ and their Private Relay functionality, built into iOS and providing VPN-like functionality when browsing within Safari.   This means “baked in” VPN functionality provided at the operating system level, on Apple devices such as the iPad which are widely used in schools.   Yet another challenge for online safety: Private Relay is a great facility for privacy, but yet another blow for school IT and safeguarding teams seeking to keep students safe online.   My hope is that there will be some ability to control this functionality using a Mobile Device Management (MDM) solution; however, for now this isn't possible, and I suspect it may only be possible on “supervised” devices rather than on Bring Your Own Device (BYOD) Apple devices.   Only time will tell.

I often refer, when speaking to sixth form students, to a continuum between individual privacy on one side and public good and safeguarding on the other.   For schools this is the privacy of the individual student versus the school's responsibility to keep students safe, and therefore to monitor and filter online activity.   Currently the pendulum continues to move further towards the individual privacy side.   I wonder if this will continue or if we will eventually see some balance restored.   I also wonder whether, given the increasing ineffectiveness of the technical measures schools can put in place, the guidelines in relation to safeguarding students online need to be re-examined.