Phones: again?

I have recently been thinking about phones in schools again. Yes, I know we should be over this topic by now, however a recent experience had me thinking a little differently about the issue. Basically, I missed an important call on my mobile because Do Not Disturb was active, it being later in the day. Having missed the call, it got me thinking that there must surely be a way to override Do Not Disturb so that a few key people could call me, and my phone would ring, even when it is on.

For those who aren’t aware, Do Not Disturb allows you to set your phone up so that your notifications, alerts, calls and messages are suppressed during certain hours of the day, such as in the evening when you are trying to get some sleep. You can also decide which apps or callers you will allow through.

It turns out it is very easy to set overrides so that certain individuals can call you, or certain apps can notify you, even when Do Not Disturb is on. As I dug a bit further I found that you can also trigger modes on things other than time, so you can set up a work mode which activates when you are near a particular location such as your workplace. This mode might, for example, be set up to stop notifications and calls during the working day.

All of the above is good, but it got me thinking about all the functionality which is now built into the modern smartphone specifically to help us manage distractions and our time on our phones. I, for example, track my screen time, which currently averages around 2hr 48min. But the issue with all of this is: who is actually telling people about this functionality and how to use it? In my case I had a need to use it, knew it was likely there, and knew how to search for the relevant info to get it all set up. But what of the student who doesn’t identify a problem with their screen time or distraction despite high volumes of use, or even addiction? Or the student who knows they have a problem but doesn’t know there might be a solution, or doesn’t know how to find it?

I can’t help but think the tech companies do a good job of adding this functionality, thereby showing their efforts to protect people and to empower them to make decisions about their device use. However, I am also conscious of their need to please shareholders and to make a profit. The cynic in me wonders if the lack of press, training or awareness regarding all this good functionality is simply the outcome of needing and wanting to keep people’s eyes glued to their devices, and to keep the money flowing in.

Aside from the above, maybe we also need to acknowledge that the issue isn’t solely the tech companies’ responsibility and that we, the users, actually have some agency here. We can choose to look at our phones less, to explore the safeguarding and wellbeing functionality which is available, and to turn it on where possible. Sadly, I feel the effort of turning on the functionality which might help us is often greater than the effort required to point at vendors, blame them and expect them to address the challenge.

So, have you looked at the wellbeing controls on your device, or on your kids’ devices, recently? If not, it might be worth doing so.

Phones: a problem or a symptom?

I have recently been reading an interesting book on depression, Lost Connections by Johann Hari, as this is something I feel I have struggled with at times, albeit as a self-diagnosis rather than any form of clinical diagnosis. Personally, I feel we all suffer depression to a greater or lesser extent, albeit maybe not clinically, at various points in our lives in response to events, challenges and other issues. Within the book Hari points to societal issues being partly responsible for the increasing number of people suffering anxiety and depression, also talking about societal “junk values”. This got me thinking about digital addiction and phone use, and my interest was further encouraged by a post from Mark Anderson in which he provided some statistics in relation to phone use (see the post here). But what if our addiction to, and increasing use of, our phones and other digital devices isn’t the cause, the thing we need to seek to ban or reduce, but is actually the symptom of a different and broader issue? Now, I don’t propose to have solutions here; this post is about throwing out some thoughts and ideas.

Fame and likes

We have all at some point looked up to a famous person and thought, “I wish that was me.” Whether it was a famous singer, an artist or a movie star, I think we all generally want to be more than we are. Now, I am not sure if this desire to be better, as measured by others, is intrinsic or whether it has been conditioned over time. The adverts we consume on TV tell us we need to buy this body spray, this car or that running shoe to be better, so maybe we come to believe we need to be better. Then in steps social media, providing a measure of our fame through the count of friends or likes, and we chase the thing we can measure rather than what we really want, which is to be better. And so we are forever on our phones, posting and sharing in the hope of going viral and getting all those likes, rather than looking towards ourselves, being comfortable in our own skins and seeking to be better in our own eyes and on our own terms. So is our excessive phone use a symptom of a need to have ourselves validated by others, rather than seeking to value ourselves?

Connectedness

I think it is important to acknowledge that we are still animals in some sense, albeit very intelligent ones; we still have so much in common with the apes we came from back in the mists of time. As animals we need that connectedness, that social interaction of the herd or troop, and again in steps social media and our phones, offering connectedness on steroids. Suddenly I am connected to friends, family and many more people, those with similar views and interests, and this connection is constantly updating. The issue here, as I have posted in the past, is that although this online connectedness appeals to our inner needs, it doesn’t truly address them, so we find ourselves retreating from face-to-face, proper connectedness, which would fulfil our needs, in favour of easier but shallower technology-enabled connection. We maybe therefore need to spend less time on digital connectedness and more time on actual connectedness.

Fear of missing out

I have already mentioned how our digital world is constantly updated and always on, and this in itself breeds an issue: we develop a fear of missing out (FOMO). We worry about missing important information, or the latest viral craze, so we find ourselves constantly checking our devices for updates. We might even become worried that something is wrong when we haven’t received an update or our phone hasn’t buzzed for a period of time. We build the habit of constantly checking our devices and of constant vigilance to their calls for attention, whether that be a buzz, a chime or a flashing screen. But maybe there is another way, and maybe we need to spend more of our time and focus on being in the moment and experiencing our current environment, the company we are in and the discussion, rather than worrying so much about the online conversations we may or may not be missing.

Efficiency and always connected

The world is only getting busier as we constantly seek to add more tasks and to get better. If you were to look back over the last six months and list the extra things you are now doing, I suspect we would all have at least a few items; however, if I then asked you to list the things you have stopped, or been asked to stop doing, I suspect a shorter, or maybe blank, list would result. “If we do X this will make Y better” sounds logical, whereas “if we DON’T do X this will make Y better” doesn’t sit as comfortably with us. And so we create this illusion of the need to be hyper-efficient, always on, always moving, and our devices are happy to play to this. They facilitate us being connected, collaborating and communicating anywhere, anytime. But is this truly what life is about, to get as much done as possible and be constantly focused, or is there value in disconnection, quiet contemplation and meditation?

Commercial interests vs. the user

In writing this post I couldn’t miss raising the issue of the device manufacturers and the platform developers. They are commercial entities with shareholders. They want profit, and profit comes from keeping users buying their products and services, keeping them using devices and staring at screens. They want you alerted, and are increasingly pushing further and further into our existence. Most of our discussion of devices focuses on phones, for example, yet how many of us now have wearables, such that the notification is unavoidable, strapped securely to our wrists or, in future, built into the glasses we need to wear to see? So these companies don’t have our best interests in mind, and their approach to dealing with people’s concerns is to provide controls and data for individuals to use to manage their own usage. But humans aren’t particularly good at doing what is best for themselves as individuals; just consider alcohol, smoking and, more recently, vaping. And when faced with a societal push to stay connected, FOMO and much more, the companies must know that putting the control in the hands of individuals will see little progress, although it will allow them to say they did what they could while still reporting positive usage data back to their shareholders. I think this is where society has to play a part, rather than relying on either the profit-focused companies or the ill-equipped individual to solve the problem.

Conclusion

I suspect I could write much more on this topic, and as I write I can see so many opportunities for further research. Rather than seeking to ban, which I am against, or to manage, which I am much more supportive of, especially in schools, do we need to ask why we are all so quick to reach for our phones and digital devices? If we consider our usage a problem, then surely we need to get to the why, the cause, as opposed to seeking to address the symptom, which is the eventual usage. Maybe even discussing this with our students would help.

My sense is that a large part of the issue is the values which society currently applies to us. It isn’t enough to just be me: I have to attain status, I need to be hyper-connected, I need to work stupidly hard and efficiently, and I need to show other people all of this, and our devices deliver on these needs, or at least appear to. As long as we continue to address this at an individual level, which tends not to work, we fail to get at the bigger problem. But how do we bring about societal change? One step at a time? One blog post at a time, maybe?

Some DigCit resources

Following on from my blog post in relation to Safer Internet Day, I thought I would share some of the presentations I have used recently with students when discussing different aspects of internet safety.

All the presentations are on the short side, as they are designed to provoke thought and follow-up discussion, with each one built around a 5 to 10 minute assembly.

I hope the presentations are useful or at least provide some ideas.   I am also open to any thoughts or ideas for other topics or areas which should be included in future presentations.

This session revolves around a tweet from a parody HRH Prince William account which was picked up by some UK radio broadcasters as fact, despite there being no evidence to support the figures quoted. The session also looks at the possible impact of generative AI in relation to fake images or video.

This session is very much about asking the students if they feel comfortable with their technology use and then discussing ways that a balance might be achieved. It is also important to discuss how “screen time” is an overly simplistic measure and that not all screen time is equal.

This session focuses on binary arguments and how two opposite viewpoints can actually both be true or both be false. Some discussion of why people might seek to exploit binary arguments, social media algorithms and echo chambers is also included.

This session focuses on some examples of social engineering and how human habits can be used against us by malicious individuals. The key message is the increasing “sophistication” of attacks and therefore the need to be more vigilant and careful.

This session looks at data breaches at websites and how the resulting data is leaked online. It may be worth getting the students to use HaveIBeenPwned, if possible, to see how many of them already have data leaked online. The key closing point is that as we do more online we need to be aware of the resulting increase in risk.
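For teachers who want to show how a breach check can be done without handing over the password itself, HaveIBeenPwned’s Pwned Passwords service uses a k-anonymity model: only the first five characters of the password’s SHA-1 hash are ever sent to the API. A minimal Python sketch of the client-side part (the network call itself is only described in a comment, so nothing here leaves your machine):

```python
import hashlib

def hibp_hash_parts(password: str):
    """Split a password's SHA-1 hash into the 5-char prefix sent to the
    HaveIBeenPwned range API and the suffix which stays local."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

# The actual check (which needs network access) would fetch
#   https://api.pwnedpasswords.com/range/<prefix>
# and scan the response lines for "<suffix>:<count>" -- the full
# password, and even the full hash, never leave your machine.

prefix, suffix = hibp_hash_parts("password")
print(prefix, suffix)
```

Run against the word “password” this yields prefix 5BAA6, a hash which appears in the Pwned Passwords corpus many millions of times, which is a nice talking point in itself.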

The key feature of this session is the predictability of human choices in relation to passwords. You may wish to use the Michael McIntyre cyber video here, or simply ask students where the capital letter, number and symbol in their passwords might be.
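To make the predictability point concrete in class, you could reduce passwords to “shape” strings and count how often the classic capital-first, digit-and-symbol-last shape appears. A rough Python sketch; the sample passwords below are invented purely for illustration:

```python
from collections import Counter

def password_shape(password: str) -> str:
    """Reduce a password to a shape string: U=upper, l=lower, d=digit, s=symbol."""
    def cls(ch):
        if ch.isupper():
            return "U"
        if ch.islower():
            return "l"
        if ch.isdigit():
            return "d"
        return "s"
    return "".join(cls(ch) for ch in password)

# Invented samples -- in a lesson you might use an anonymised breach wordlist.
samples = ["Password1!", "Summer2024!", "Welcome1!", "letmein", "Qwerty99"]
for shape, n in Counter(password_shape(p) for p in samples).most_common():
    print(shape, n)
```

Against real leaked-password lists, the capital almost always lands at the start and the digits and symbol at the end, which is exactly the predictability attackers exploit.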

This includes reference to an OSINT tool which allows you to identify the date and time of a photo based on the position of shadows within it. This illustrates how even simple things might give away information about us.

It also contains a “pick a number” exercise to illustrate how easily we can be influenced. As the presenter you would stress the trackers slide and the number “14” to see if you can then encourage students to select 14 later in the presentation. If we can be that easily influenced, then what might social media and other individuals be able to do with much, much more data?

This session looks at public good vs. individual privacy and how these two issues may be at opposite ends of a continuum. The key is to show how we need to find a balance between these two extremes.

Safer Internet Day 2024

I thought I would put a post together to coincide with Safer Internet Day, 6th Feb 2024. Safer Internet Day represents an opportunity to stop and recognise the importance of online safety; however, it is also important to recognise that our understanding of digital risks isn’t confined to a single day but is something we should be constantly considering.

I will be honest and say that I generally feel we do not do enough in schools in relation to digital citizenship, the broader concept which encompasses online safety. Yes, schools have Safer Internet Day, they have content in their PSHE education programmes and in their KS1, 2 and 3 Computer Science programmes, with more for those students choosing to study computing or IT subjects at A-Level or in vocational qualifications, but it is limited content, and this is against a backdrop of increasing use of digital tools and increasing sharing of data. We believe basic maths and basic literacy are requirements for all; I believe basic digital citizenship should also be a requirement, and a subject in itself.

So, if it was a subject what would the topics be?

I already try to deliver sessions for students throughout the year on a number of digital citizenship topics, including:

Fake News

I think this is a very important subject given the ease with which fake images, and even fake audio and video, can now be created through the use of generative AI. Recent cases involving fake Taylor Swift videos and fake Joe Biden audio are a case in point. How might we tell the fake from the real? And what about those individuals who say or do something inappropriate, only to claim they didn’t and that the footage or audio is fake? How do we establish truth in a world where we can no longer believe what we see or hear?

Big Data

We are constantly giving away data, and more than we realise. And it isn’t just about the data we give away, but also about the data which might be inferred from it. Consider where you live, the car you drive and where you shop, for example. How might this information help to infer something about your wealth or earnings? What does your weekly shop say about you and your family? And remember, the inference doesn’t always need to be right; it just needs to be right more often than it is wrong to have value. Then there are the organisations willing to pay for your data, or to sell it on. Might we get to a point where, through data, some companies know more about us than we know about ourselves? And at that point, what is the potential for us to be influenced, or even controlled?

Binary arguments and echo chambers

The medium used to communicate has an impact on the message, and this is all the more apparent on social media, where things go viral, with agreement or with disagreement, so very quickly. The medium shapes our views through its algorithms, connecting stories with those likely to engage, either in agreement or in disagreement, thereby deepening divides and encouraging most discussions to descend into binary arguments. As you engage with social media, it will try to feed you the information you want to hear, which tends to reinforce the views you already have rather than providing alternative viewpoints. So, in consuming information and news from social media, we need to be conscious of how social media works and therefore how it might shape the news it presents and, eventually, our viewpoints.

Balance – Public Good and Personal Privacy

Balance as a concept is something I believe in strongly. For every advantage there is a corresponding risk or drawback. And in so many decisions we operate on a continuum rather than between polar opposites. Take public good vs. personal privacy, for example. We want to be safe, so we expect the police and intelligence services to monitor in search of terrorists and other threats. Yet we also want our individual privacy, so we want to be free from monitoring. Can we have both? The answer is no; we need to find some balance between a “reasonable” level of surveillance and monitoring and a “reasonable” level of individual privacy. Taking the discussion to encryption, the challenge here is that weak encryption is weak for all, so monitoring anyone is difficult without putting everyone at risk. Now, there are solutions here, such as monitoring at the device level, where encrypted communications need to be decrypted for display; however, this is difficult as it requires access to the device. We basically have an imperfect situation, and sometimes in this complex world we need to live with imperfect.

Cyber Security

As we use more digital tools, share more data and generally use technology more and more, we need to be increasingly conscious of cyber risks and how to remain secure. This applies to the accounts we use, the data we share and the use of MFA, but also to the devices we own, including keeping devices such as laptops and phones updated, and to the increasing number of IoT devices we have, such as smart plugs and voice assistants. We need to give some consideration to cyber security in every purchase and in each system or service we seek to use. It may even be necessary to accept that every piece of technology used represents increased risk, so the question then becomes: is the gain from using the service sufficient to outweigh the risk?
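As an aside for students curious how one common security control actually works, the rolling six-digit codes produced by authenticator apps for MFA are typically TOTP (RFC 6238), which is just an HMAC of a shared secret and the current 30-second time window. A minimal Python sketch, using only the standard library:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over the current time window."""
    t = int(time.time()) if for_time is None else for_time
    return hotp(secret, t // step, digits)

# The code an authenticator app sharing this (demo) secret would show right now.
print(totp(b"12345678901234567890"))
```

The point worth making to students is that both ends derive the same code independently, so the code itself never needs to travel until you type it in, and it expires within seconds.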

Addiction and Being Human

How many times have you seen a major event, such as a New Year’s fireworks display, with people all holding up their phones to film it, all experiencing the event through their smartphone screens? Or have you been on a train or in a restaurant and seen countless people staring at their phones? Is this the way we want to live, and does this change our experience of life? Yes, it might give us a nice video of the event which we can go back to in future, but how often do we actually do this? And if we didn’t record the event, would we spend more time interacting with those around us, resulting in something more memorable? What does being human look like in this technology-enabled, technology-curated and technology-filtered world?

Conclusion

The above are just some of the areas I discuss with students, and I note I don’t have all the answers: I spend a little too much time on digital devices, I share more data than I likely need to, etc. What I do hope to do, however, is build awareness and start a discussion, as this, I believe, is what matters. We need to be thinking about the challenges and risks, and ensure our students, our young people, are aware of them and are making educated decisions.

I hope everyone has an enjoyable Safer Internet Day; stay safe online!

Digital Divides?

The BETT Show got me thinking once again about the digital divides, and I am very careful to use the plural here as I believe there are many digital divides currently acting on our students. Now, I have been challenged in the past over the existence of a digital divide (note the singular, which I think is important), with evidence of widespread access to devices being one of the key points of challenge. One piece of research, for example, suggested that as many as 98% of UK 16-17 year olds owned a smartphone. Based on this data, almost all children have access to both a device and the internet, suggesting ubiquitous access and no digital divide. However, although this may tick off the divides related to access to a device and access to the internet, what about the other divides?

It’s not the device that matters!

When looking at school technology strategy, we long ago identified that a strategy of simply putting a particular device in staff and student hands doesn’t work. It’s not about having the device, although this is an important foundation; it’s about considering what it will be used for, how its use will be included in teaching and learning, what support is available, both technical support and subject-related technology-use support, the overall culture of the school in relation to technology use, the confidence of teachers in using technology, etc. In terms of students and the digital divide, there are similar issues.

Have it, but don’t use it here

One obvious divide for students relates to school technology strategy. In some schools technology has a key part to play, so 1:1 devices might be available, or class sets, or BYOD might be supported; generally, technology use is encouraged. Other schools may have far more limited technology and may ban the use of mobile devices. All of a sudden, our ubiquitous access to devices and the internet isn’t nearly as ubiquitous if students aren’t allowed to use their devices and no devices are provided while in school. Those students who are encouraged to use technology in school, across their lessons, benefit from lots of learning opportunities in relation to technology, while for those without, these opportunities don’t exist.

Supportive networks

For some students, use in school provides teaching and support in relation to technology and its use, through advice from teachers, from support staff such as IT staff, and from peers who, like them, are using technology within the school. This support helps, and ongoing use also helps, as it allows students to build confidence in the use of technology, which then supports experimentation with new technology or new functionality within existing platforms. But this support isn’t uniformly available, with some students receiving far more than others. And the issue of support extends beyond the walls of the school to home, where some students will benefit from engaged parents willing to discuss technology use and its benefits and risks, while other students may be left to their own devices, which may devolve towards doom-scrolling social media apps.

Digital Citizenship

And in some schools there will be robust discussion of social media apps and the broader issue of digital citizenship. Students will therefore be more aware of the risks and challenges associated with social media, including issues around big data, influence, bias and echo chambers. This will be in addition to the meagre amount of discussion which may be supported in PSHE lessons or within the computing science curriculum, which might be all some students receive. Plus, where there is robust discussion, there is a greater chance for students to ask questions or seek support.

Maybe you need more than a phone

We also need to recognise that the smartphone isn’t always the best tool; sometimes we need a bigger screen, a keyboard and a mouse. So, although ubiquitous access to a smartphone is a good start, it isn’t the solution. A study looking at device access for home-schooled students in the UK found that slightly more than half of students had to share a device with others in the household, for example. Again, we have some students who benefit from their own device, which they can personalise, use and build confidence with, and other students who do not have this benefit.

And then there’s the new tech; GenAI

So, from the above I hope I have highlighted some of the divides impacting students, and this is now further compounded by new technology such as GenAI. In some schools it is being discussed and students are being encouraged to learn about and use GenAI solutions, but in other schools GenAI is out of bounds and banned, or the students simply don’t have access to the basic technology needed to properly explore it. Those students learning about AI are likely to be more confident and familiar with the GenAI solutions they encounter as they leave school and either continue their studies or enter the world of work, whereas those who have been deprived of the opportunity will face a steeper learning curve.

Conclusion

For me there are definite digital divides, and I feel current developments around GenAI are only going to widen them. Access to a device and the internet, the ubiquitous smartphone, is a good start, but it is akin to giving devices to teachers with no professional development or support. They might get some use out of the devices, but never what is truly possible. And looking at students and the smartphone, I suspect what they might get out of their devices will be a lot of YouTube and TikTok content rather than something more meaningful.

We very much need to address the digital divides, and for me the place to start is with the basic building blocks in terms of infrastructure and devices in schools. Only once this is reasonably consistent across similar types of school can we move on to tackle the other digital divides.

References

UK: children owning mobile phones by age 2023, Statista

Over half of home-schooled children in the UK have only shared access to computers, Institute for Social and Economic Research (ISER), University of Essex

Digital Citizenship

It’s Digital Citizenship Week this week, so I thought I would share some thoughts. Now, I have discussed and raised before the need for more time in schools to discuss digital citizenship. Whether it is the increasing need to be aware of cyber risks, the increasing amount of data we now share online, or the increasing risk of our behaviours being influenced and manipulated by the tech tools we use, these all need discussion. Schools and colleges are looking to prepare students for the uncertain, but clearly digital, futures they face, yet still the focus is on narrow coverage of “online safety” when the risks now extend way beyond the content being covered.

And all of this was before generative AI made its appearance and became so publicly available late in 2022. Suddenly fake news is much easier to produce, through generative AI tools that can modify video or audio content both quickly and convincingly. Suddenly the phishing emails which were often laden with spelling errors or design issues can be fed through a generative AI solution, such that the resulting output is convincing in its styling and free from grammar and spelling errors. In terms of influencing people through social media, generative AI allows content creation to be automated, with each piece of content being “unique” but carrying a common influencing message, far more quickly than was previously possible. We also have the issue that, as we all start to use more and more AI, such as the excellent generative AI tools available, we leak yet more data online, and these tools are more powerful than ever at inferring yet further data. At an event I attended recently it was suggested that if you fed your prompts back into a generative AI solution and asked it to profile you, it would do a decent job of working out things like age, career and education, just based on the information you had already put into generative AI tools.

So maybe, following the free availability of ChatGPT and subsequently of so many other AI tools, or tools with generative AI such as ChatGPT embedded, it becomes all the more important to discuss digital citizenship with our students. And maybe generative AI, if it frees educators from the more administrative and basic tasks of education, provides both the issue and the solution. Maybe, if generative AI and the AI solutions yet to come free us from the mundane and the basic, they will finally provide the time and resources to cover digital citizenship at a point where it may be all the more important.

The path of the world is towards increasingly digital lives, with the pace of digital technology advancement being quick. Regulation and governance are slow by comparison, leaving us with a need to fill the void. I don’t have the answers for the future, although I am positive as to the potential of technology to aid, enhance and even redefine our lives; however, with this there is always a balance, and therefore risks and challenges. This is where digital citizenship in schools comes in: providing opportunities for the risks and challenges, both current and potential, to be discussed and explored. We need to develop students who are aware of and questioning of technology’s implications, rather than students who blindly adopt technology without consideration for the future. I believe we have a long way to go to address this issue, but every step, every additional discussion, every assembly, every lesson including reference to digital citizenship, is an additional step in the right direction.

Image courtesy of Midjourney

End to end encryption: Ensuring privacy or increasing the risk of harm?

There have been some recent calls, in relation to safeguarding, for Meta to refrain from adding end-to-end encryption to the messaging functionality in some of their apps. It makes initial sense to consider the potential risk of harm to children and others through harmful online content or contact. How can agencies, schools and individuals protect people, including the young, from harmful content or contact when they are unable to identify the content due to encryption? How can criminals be prosecuted when key evidence is inaccessible because it is encrypted? The challenge, however, is establishing some of the possible implications of weakening or removing encryption, as, like most things, there is a balance: improvements in monitoring and detection through removed or weakened encryption will result in other, less positive, counter-implications. I note that sticking with the current level of encryption, as technology moves on and as criminal skills and approaches continue to develop, likely equates to a weakening over time; we can either continue to strengthen our approach or, by doing anything else, reducing or doing nothing, choose to effectively weaken encryption. So, what are the general implications should we choose to reduce or remove encryption rather than seeking to strengthen it?

Increased vulnerability to cyber attacks

Encryption is a key tool for protecting data and information from unauthorized access. Weakening or removing it makes it easier for cybercriminals to break into systems and access sensitive information, which in turn puts individuals, including children, more at risk.  At a time when individual privacy is such a hot topic, anything which may reduce or put at risk that privacy is of concern.
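To illustrate the basic idea, here is a toy sketch in Python (illustrative only, not production cryptography): a message XORed with a random secret key becomes unreadable, and only someone holding that same key can recover it. This is why weakening or removing encryption is equivalent to giving more people access to the key.

```python
import secrets

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the data with the corresponding key byte.
    # Applying the same operation twice with the same key restores the original.
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # random key as long as the message

ciphertext = toy_encrypt(message, key)   # unreadable without the key
recovered = toy_encrypt(ciphertext, key) # same key recovers the plaintext

print(recovered == message)  # True
```

Real end-to-end encryption uses far more sophisticated schemes, but the principle is the same: without the key, intercepted data is meaningless, to criminals and monitors alike.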

Increased surveillance

Weakening encryption can also make it easier for governments and other organizations to monitor online activities and communications.  It may be that this monitoring is done in our interests, in the interests of safeguarding for example, but there is the potential for the data or the monitoring solutions to be misused.   They could be used for invasive monitoring and surveillance, to identify individuals based on their beliefs or political views, for example.   They may be used to challenge or silence views counter to those of the government or intelligence agencies.   It may be that the data gathered allows other data to be inferred, violating individual privacy and freedom of speech.  Or it may be that systems used correctly and ethically nonetheless suffer data breaches, resulting in the data or systems being misused for criminal or unethical purposes.   Increased surveillance capability through weakened encryption poses a significant risk to individual privacy.

Loss of trust

Weakening encryption can erode public trust in online communication and commerce. This in turn can lead users to be less likely to trust the digital systems we increasingly rely on in our day-to-day lives.    The potential impact, should we no longer be able to trust our online communication and collaboration platforms, our online banking, our online shopping and so on, would be very significant indeed.    It may also lead individuals towards systems in the darker recesses of the internet, where those systems may be perceived as more secure and outside government monitoring or surveillance, but where other implications or risks may exist.

Negative impact on businesses

Related to the above, weakening encryption could also have a negative impact on businesses that rely on secure online communication and transactions, including e-commerce sites, financial institutions and healthcare providers.    If encryption is weakened or removed then both the users of online services and the services themselves are more at risk.   Individual users may lose data and become subject to fraud or other cybercrime, while the breached organisation suffers reputational damage, legal claims for compensation and the overall cost of recovering from a cyber incident.    Basically, no one wins, other than the cybercriminals that is. 

Conclusion

The issue here is one of balance: the balance between individual privacy and protecting individuals from harm online.   Providing privacy will also protect those who may cause harm, making harm more likely, while providing protection against online harm will weaken an individual's privacy even where their motivations and actions are honest and good.    Sadly, we cannot provide privacy online for some but not for others.   Either privacy and security is built into systems, or it is not, as we have no way of identifying those who may or may not cause harm.   

There is also an issue of pragmatism.   If we reduce the privacy level of some services, by not enabling end-to-end encryption for example, then users, and particularly those seeking to do harm, will simply move to services which provide more security and do offer end-to-end encryption.    I have seen it myself in the unknown user who DMs an individual on a major social media platform before, after a short series of messages, suggesting a move to an alternative "better" platform, knowing it is better suited to protecting their privacy as they go about their likely malicious aims.    

Overall, there is no perfect answer here.    I think technical security and privacy are key to the digital world we live in, but we also need to keep individuals safe online.   Sadly, these two requirements sit largely at opposite ends of a continuum.   I suspect a reduction in technical security would have wider implications for the world than increased security, although I note it isn't a zero-sum game.  Personally, I think we need to err towards greater encryption while seeking to mitigate the safeguarding risk as much as reasonably possible through increased discussion, training and education regarding safety and risk online.    Not a perfect answer, I know, but as I said, there is no perfect answer and anyway, we don't live in a perfect world.  

Online Safety Bill

So, the Online Safety Bill is once again back under consideration and already looking like it's getting softer.    The proposed dropping of the "legal but harmful" clause is another example of a focus on individual privacy winning out over monitoring and filtering in the interests of public, and child, safety.  

Now I understand the challenge here of balancing individual privacy and public good.   Individual privacy is enshrined in the principles of basic human rights, yet we want our governments, intelligence services, police and even schools to be able to monitor and filter content to keep people safe and to proactively identify potential threats to the lives and wellbeing of those under their care.    These are opposing points on a continuum, and each step in one direction usually comes at the expense of the other.   More privacy means less ability to monitor and filter in the interests of public good; more filtering and monitoring means less privacy and the risk of data being misused or leaked.

To me it is clear that there is a definite tendency towards individual privacy winning out in this argument.  Apple quietly dropping its plan to monitor iCloud accounts for Child Sexual Abuse Material (CSAM), and now the UK government looking to remove the "legal but harmful" clause, are two good examples of how privacy is winning.    I doubt this will change, at least for now, especially as more and more organisations see fines and reported issues over how they manage the data of individuals.   So, what is the solution, particularly in relation to schools, where online safety is such a key and important topic?

I think the key here is establishing very clearly the need for social media vendors to look after the children using their platforms.    Maybe the "legal but harmful" clause is inappropriate when applied across the general population, but surely we can agree we need to protect our children and therefore identify some of the materials which might be legal yet harmful to them.   And it isn't just the content that is the issue, but the medium and the algorithms feeding the content.   Is it right to categorise a child, where children are more impressionable, and then feed them a specific type of content constantly, based on trying to keep them hooked on an app?   Might this not shape their world view such that they see things as rather binary, rather than the more nuanced and complex nature of the real world and real life?    Is it right to feed children almost constant streams of content, including potentially harmful content, or to provide contact with unknown individuals?   We need to make the vendors consider the medium they are providing, along with their algorithms and the potential impact they have, rather than just pointing to the content as the issue which needs to be dealt with.

I will admit I saw problems with the Online Safety Bill from the outset, even more so given it was first proposed as a draft in May 2021, over 18 months ago.   In the technology world 18 months is a long time and a lot can happen, which highlights how legislation will always be playing catch-up.    My original concerns, I will admit, were more on the technical side of things.   Privacy points towards end-to-end encryption and other security solutions which then hamper monitoring and filtering, plus there is the challenge that social media vendors cross geographic jurisdictions, where different governments may have different motives and ethical standards for the monitoring they may require or request.    Also, any weakening of security and privacy may in turn increase the likelihood of cybercriminals gaining access to data. So my concern was that, although the bill might be well meaning, it would be difficult or impossible to effectively implement.

That said, something needs to be in place and I think this is the point we have now got to: we need to accept something imperfect as a starting point and then hopefully build from there.    I will also admit that the responsibility for online safety doesn't just belong to the centralised provider of social media and other services, or to the government of the nation within which a user resides.  When we talk of online safety and children, parents and guardians also have their part to play, as do school pastoral teams, form group tutors and teachers, friends and other members of a child's wider social and family circle.   And maybe this focus on the Online Safety Bill as a single answer may actually be having a negative impact, taking our eye off the need for a wider and collective effort to keep children safe.

I suspect the solution at this point is to get the Online Safety Bill into law.  It's better than nothing, can add to the wider efforts required, and hopefully will be seen as a step in the right direction rather than an endpoint.

Big Tech and balance?

Within the technology space there are now a small number of hugely powerful players.   These players, including Microsoft, Google, Amazon, Meta (previously Facebook) and Apple, are now so dominant that their impact is felt beyond the technology space.   With this comes some advantages, but as I have often written, we live in a world of balances, and therefore there are also some potential risks or drawbacks.

Writing this blog piece came as a result of reading an article about Sony and the PlayStation brand, a large and powerful player within the gaming space, being sued for breaching anti-competition laws: using their powerful position to apply pressure to games developers and publishers in ways which drive up game prices and therefore profits.  You can read the article here.    We have previously seen similar lawsuits levelled against both Google and Amazon in relation to their shopping platforms either favouring suppliers or brands based on their relationship with Google/Amazon or, in the case of Amazon, favouring their own-brand products.    In the case of Google and Amazon, the concern relates to the power that results from providing the search functionality for users while also either providing products themselves or providing advertising services to brands and suppliers.  

And this isn’t the only risk in relation to these big players.    In the case of Microsoft, Google, Amazon and Apple, they store our data for us, through Google Drive, OneDrive or iCloud.    Where this is free storage it is convenient for us, but if we aren’t paying for the service, how are the ongoing costs being covered?   Recently, France suggested that schools not use the free services of Google or Microsoft for this very reason.

It may be that in using their services for search, purchasing items, music and so on, they gather data about us.   As the large players that most of us interact with regularly, they will be gathering huge amounts of data about us which they can then use to profile and predict our behaviours.    Now we might accept that they do this for good reasons, such as improving their services, however given that some of their corporate practices have been questioned, it may also be reasonable to consider that they could seek to misuse this data.    And in the case of services supported by advertising revenue, it is easy to see how they might use the data to influence our decision making, and that’s before you consider the possibility of these services themselves suffering a data breach, resulting in all this data being leaked onto the public internet.

There is also the issue of truth.  In the case of Google and Facebook, which allow users to access news and other current affairs information, they control the information they present to users.   How do we know they are presenting the “right” information?    (I note that establishing what is “right” or “the truth” is a problem in itself, but one outside the scope of this short post.)   How confident are we that the information being presented to us is free of bias?   Do the algorithms present sufficiently broad viewpoints, or just a singular viewpoint, the one the algorithm thinks we want to hear? In trying to keep us engaged with the platform, do the algorithms tend to present only viewpoints we are likely to agree with, thereby creating echo chambers and binary online arguments?

The significant issue here is the fact that we haven’t been through this kind of technological change before in history.   Yes, we had the invention of the printing press, radio and TV, but these didn’t impact society with quite the same pace of change as the combination of smartphones, internet access and social media.    The difference in pace is easily observed in the rate of adoption, with TV taking 22 years to reach 25% of the market while Facebook took only 2 years.   We are now in a situation where so many of us carry an internet-enabled device in our pockets, regularly interacting with apps, including search and social media, where these apps and their underlying algorithms are constantly gathering data in order to hone and adjust the content they serve us.  

Now I know when I talk to students they don’t want to give up the convenience of Google search or Amazon for shopping, or the interesting content, including that from friends and family, provided by TikTok, Instagram, Snapchat and the like.   I will admit I am equally reluctant and would find not having Google and Twitter difficult.  

So what is the answer?    

Well, I think the answer is simply to discuss and acknowledge that these services, and the vendors that provide them, Google, Amazon, Apple, Microsoft, Meta and so on, provide us with beneficial solutions, however in most things there is a balance.   We need to be aware of this balance, and we need to discuss it with students so that they know the drawbacks and risks associated with the vendors and solutions we now so commonly use.   It may be that our current technology revolution resolves itself much like TV, radio and the printing press of the past, however in case it doesn’t, I think we need to develop our overall awareness of the risks.

Online Safety – Meta/SWGfL Event

This week included a little visit to the Meta offices in London for an SWGfL event focussed on online safety.   I decided to attend this event as I believe in the importance of online safety and in the wider issue of digital literacy or digital citizenship.   I am also highly conscious of the challenges from a technology point of view, given the ongoing focus by technology vendors on individual privacy, including the use of encryption, over public good and online safety.   It was also a great opportunity to bump into Abid Patel, although he had to remind me of the need for the obligatory selfie.

Digital Literacy

During the course of the event the term digital literacy was used, which I take to mean something similar to “normal” literacy, but in terms of digital media.   Now I don’t think this term goes far enough, although I am happy for others to disagree with me on this.   For me, digital literacy may cover a user’s use of technology, understanding how and when to use it and so on, but it doesn’t stretch to the issues of behaviour online and the online identities we develop as we post increasing amounts of content online.   As such, my preference over the term “digital literacy” has always been a focus on “digital citizenship”, of which digital literacy may form a part. It may seem a minor point, but for me it is an important one.

Being online

One message which was quite clear from the event was the extent to which our students are now online.    The opening session quoted a figure of 3hrs and 36mins as the average time spent online by 9-16yr olds.   If we assume 8hrs sleep, that’s over 20% of a child’s waking day spent online.   For weekends the figure only increases, plus it was noted that children are increasingly “multi-screening”, using multiple devices such as a laptop and phone at once, allowing them to consume more content in less time.    From a risk point of view, the more content consumed, the greater the risk of inappropriate or even harmful content being consumed.
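That "over 20%" figure is easy to verify with a quick back-of-the-envelope calculation, using the 8 hours of sleep assumed above:

```python
minutes_online = 3 * 60 + 36     # 3hr 36min average quoted for 9-16yr olds
waking_minutes = (24 - 8) * 60   # waking day, assuming 8 hours of sleep

share = minutes_online / waking_minutes
print(f"{share:.1%}")            # prints 22.5%
```

So the average child is spending roughly a fifth to a quarter of their waking hours online, before any weekend increase or multi-screening is accounted for.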

Another statistic shared identified that in 2003 under-18s made up below 5% of internet users, yet now the figure stands at almost 40%.   A big jump, suggesting a clear trend and again highlighting how active our children and students now are online.  

Guidance and help

In relation to help dealing with living online, it was noted that parents were viewed as the main source of help and support for issues experienced online, with teachers taking second place.    Unsurprisingly, though, a survey of teachers noted training and the ability to keep pace with technology as two barriers to being able to properly support students online.    In relation to keeping pace with technology, I think we need to acknowledge that we can never really keep pace.    On reflection, I found myself more able to keep pace as a younger teacher than I am now; this may be age related, but it could equally be that the pace of tech change is now quicker than it was when I was younger.    I think the important thing isn’t necessarily knowing the answers, but being open about not knowing them and accepting that the discussion with students may itself have value.

In terms of training, this makes me think of a poster in my office about students never asking for professional development, or training, on using technology.   Now I will note this statement is overly simplistic, but it aims to make a point about the massive number of resources and help available online, plus the increasingly intuitive nature of [simple] apps.  Maybe we need to be more willing to “just Google it” in relation to technology?   That aside, the issue with training is where it will fit into the already busy curriculum and crowded workload of today’s teachers.    Surely it cannot be yet another thing added, and who ever subtracts from workload?   I don’t have an answer to this one, however I think the topic needs to be regularly discussed in staff rooms, insets, assemblies and the like.  It needs to become part of the culture, and I recognise this change may take time, at a time when technology changes occur so much faster.    So, for now, I am regularly trying to prompt discussion and thinking around digital citizenship simply by highlighting news stories in our school weekly bulletin.   The individual effect is low, but my hope is that over time it will build awareness and discussion.

Conclusion

The event had a fair few points of interest and things I could take away, far more than I have outlined above. I had hoped it might help answer the challenge of balancing the need to protect students against the prevailing narrative regarding the importance of individual privacy.   Sadly, I don’t think the event provided any real answers in this area, beyond some evidence that Meta are partnering with organisations to help address the problem and that efforts are being made.   Are these efforts enough?   I am not sure there will ever be enough effort, as any single loss of life or significant impact on the life of a young person will always be considered sufficient evidence that more could have been done.    The fact that Meta support a programme allowing individuals, including children, to log a fingerprint of non-consensual intimate imagery so that it can be automatically quarantined and even removed is good news.   I actually find this interesting given Apple seem to have allowed their proposal of scanning for Child Sexual Abuse Material (CSAM) to quietly disappear from discussion. So maybe there is progress being made after all?

It was a useful event.   The more we discuss the challenges, the more evident they become and the greater the chance we can manage and mitigate them together. And that is another takeaway: the event brought a number of individuals and organisations together to discuss the issue, and this needs to continue and grow in frequency.