Phones: again?

I have recently been thinking about phones in schools again, and yes, I know we should be over this topic by now, however a recent experience had me thinking a little differently about the issue. Basically, I missed an important call on my mobile because Do Not Disturb was on, it being later in the day. Having missed the call, it got me thinking that there clearly must be a way to override Do Not Disturb so that a few key people could call me, and my phone would ring, even when it is enabled.

For those who aren’t aware, Do Not Disturb allows you to set your phone up so that your notifications, alerts, calls and messages are suppressed during certain hours of the day, such as in the evening when you are trying to get some sleep. You can also decide which apps or callers you will still allow through.

It turns out it is very easy to set overrides so that certain individuals can call you, or certain apps will notify you, even when Do Not Disturb is on. As I dug a bit further, I found that you can also trigger on things other than time, so you can set up a work mode which activates when you are near a particular location, such as your workplace. This mode might then be set up to stop notifications and calls during the working day.

All of the above is good, but it got me thinking about all the functionality which is now built into the modern smartphone specifically to help us manage distractions and our time on our phones. I, for example, track my screen time, which currently averages around 2hr 48min. But the issue with all of this is: who is actually telling people about this functionality and how to use it? In my case I had a need to use it, knew the feature was likely there, and knew how to search for the relevant info to get it all set up. But what of the student who doesn’t identify a problem with their screen time or distraction, despite high volumes of use or even addiction? And what of the student who knows they have a problem but doesn’t know there might be a solution, or doesn’t know how to find it?

I can’t help but think the tech companies do a good job of adding this functionality, thereby showing their efforts to protect people and to empower them to make decisions about their device use. However, I am also conscious of their need to please their shareholders and to make a profit. The cynic in me wonders if the lack of press, training or awareness regarding all this good functionality is simply the outcome of needing and wanting to keep people’s eyes glued to their devices, and to keep the money flowing in.

Aside from the above, maybe we also need to acknowledge that the issue isn’t solely the tech companies’ issue and that we, the users, actually have some agency here. We can choose to look at our phones less, to explore the safeguarding and wellbeing functionality which is available, and to turn it on where possible. Sadly, I feel the effort of turning on the functionality which might help us is often greater than the effort required to point at vendors, blame them and expect them to address the challenge.

So, have you looked at the wellbeing controls on your device, or on your kids’ devices, recently? If not, it might be worth doing so.

Technology: Balancing Benefits with Risks

In our modern era, technology permeates every aspect of our lives, transforming how we work, communicate, and live. The advent of the internet, smartphones, artificial intelligence, and other technological innovations has brought unprecedented convenience and immediacy, significantly improving efficiency in countless areas. However, this rapid advancement is not without its downsides. As we become increasingly reliant on technology, we must grapple with the risks and challenges that arise, including cybercrime, data protection concerns, and the detrimental effects on our ability to focus.    So how do we find an appropriate balance?

The Benefits: Immediacy and Convenience

One of the most significant advantages of modern technology is the immediacy it affords. The ability to access information instantly, communicate across vast distances in real time, and perform tasks that once took days or weeks in a matter of seconds has revolutionised the way we live and work.   This immediacy extends beyond communication to other areas, such as online shopping, where you can order products with just a few clicks, expecting next day or even same day delivery, or the healthcare sector, where telemedicine enables patients to consult with doctors without needing to visit a clinic in person.

Convenience is another major benefit of technology. The rise of smart devices and automation has simplified tasks that used to require considerable effort. For instance, smart home systems can control lighting, temperature, and security, while virtual assistants like Siri and Alexa can perform tasks such as scheduling appointments, sending messages, or even ordering groceries. In the workplace, technology streamlines operations, with software automating repetitive tasks, allowing employees to focus on more complex and creative aspects of their jobs. Meanwhile, in schools, AI can help students and teachers create, refine or assess materials, or can help with translation, simplification and other processes which support or even enhance learning experiences.

These conveniences and this immediacy should improve quality of life, offering more time for leisure and reducing the stress associated with many day-to-day tasks. However, my sense is that they often just allow more to be expected of us, reinforcing the “do more” and efficiency cultures which I feel exist.

The Risks: Cybercrime, Data Protection, and Cognitive Impact

The advantages of immediacy and convenience come with significant risks. One of the most pressing concerns is the rise of cybercrime. As more sensitive information is stored and transmitted digitally, individuals, businesses, and governments are increasingly vulnerable to hacking, data breaches, and other forms of cyberattacks. Cybercriminals exploit weaknesses in software and networks to steal personal data, financial information, or intellectual property. The consequences of these breaches can be devastating, leading to identity theft, financial loss, and reputational damage. You don’t need to look too hard at the current news to find an organisation which has suffered a cyber incident.

In tandem with cybercrime is the issue of data protection and privacy. In the digital age, vast amounts of personal data are collected by companies, governments, and online platforms, often without individuals being fully aware of how their information is being used. This has raised significant concerns about privacy, with many questioning whether individuals have enough control over their personal data. The rise of surveillance capitalism—where companies monetize personal data to drive targeted advertising—has sparked debates about ethical boundaries and the need for stricter regulations. High-profile scandals, such as the Cambridge Analytica case, where millions of Facebook users’ data was harvested without consent for political purposes, have highlighted the potential for misuse and the lack of transparency in data collection practices.

Beyond the security and privacy risks, the very immediacy and convenience that make technology so appealing can also have negative cognitive effects. The constant stream of notifications, emails, and messages can fragment our attention and make it difficult to focus on tasks that require sustained concentration. Research has shown that multitasking with technology can reduce productivity and impair cognitive function. This “always-on” culture, fuelled by smartphones and social media, can lead to stress, anxiety, and burnout, as individuals struggle to disconnect from the digital world.

Moreover, the overreliance on technology can erode essential cognitive skills, such as problem-solving, critical thinking, and memory. With information just a click away, individuals may become less inclined to engage in deep thinking or retain knowledge. The rise of artificial intelligence and machine learning also raises concerns about the future of human skills and the potential for automation to replace jobs, leading to economic inequality and social disruption.

Striking a Balance

Given the immense benefits and equally significant risks, it is crucial to strike a balance between embracing technology and mitigating its drawbacks. On the one hand, the conveniences of immediacy and efficiency are undeniable and have improved many aspects of modern life. However, these advancements should not come at the expense of privacy, security, or cognitive well-being.

One way to maintain this balance is through stronger regulations and policies that protect individuals’ privacy and data. Governments and organizations must implement robust cybersecurity measures and transparent data collection practices to safeguard against cybercrime and misuse of personal information. Additionally, educating the public about digital literacy and security can empower individuals to protect themselves online.

At an individual level, it is also essential to cultivate mindful technology use. Setting boundaries around screen time, practicing digital detoxes, and focusing on single-tasking rather than multitasking can help mitigate the cognitive impacts of constant connectivity. Encouraging critical thinking and problem-solving in education and the workplace can also help individuals develop skills that are less susceptible to automation.

Conclusion

Technology exists in a delicate balance between its undeniable benefits and the risks it poses. Immediacy and convenience have transformed society, making life easier and more efficient in many ways. However, these benefits come with the trade-offs of increased cybercrime, data protection concerns, and cognitive challenges. As we continue to innovate, it is vital to remain vigilant about the potential risks and take steps to mitigate them, ensuring that technology enhances rather than undermines our well-being.  I also wonder whether the drive for efficiency and immediacy is reducing the time for us to be human and to interact with other humans directly and in-person, as we have since the dawn of mankind, but that’s a whole other post!

Phones: a problem or a symptom?

I have recently been reading an interesting book on depression, Lost Connections by Johann Hari, as this is something I feel I have struggled with at times, albeit this is a self-diagnosis rather than any form of clinical diagnosis. Personally, I feel we all suffer depression to a greater or lesser extent, albeit maybe not clinical, at various points in our lives in response to events, challenges and other issues. Within the book Johann points to societal issues being partly responsible for the increasing number of people suffering anxiety and depression, also talking about societal “junk values”. This got me thinking about digital addiction and phone use, and my interest was further encouraged by a post from Mark Anderson where he provided some statistics in relation to phone use (see the post here). But what if our addiction to and increasing use of our phones, and other digital devices, isn’t the cause and the thing we need to seek to ban or reduce, but is actually the symptom of a different and broader issue? Now, I don’t propose to have solutions here; this post is about throwing out some thoughts and ideas.

Fame and likes

We have all at some point looked up to a famous person and thought, “I wish that was me”. Whether it was a famous singer, an artist, or a movie star, I think we all generally want to be more than we are. Now, I am not sure if this desire to be better, as measured by others, is intrinsic or whether it has been conditioned over time. The adverts we consume on TV tell us we need to buy this body spray, or this car, or that running shoe to be better, so maybe we come to believe we need to be better. Then in steps social media, providing a measure of our fame with the count of friends or likes, and we chase the thing we can measure rather than what we really want, which is to be better. And so we are forever on our phones, seeking to post and share, hoping to go viral and get all those likes, rather than looking towards ourselves, being comfortable in our own skins and seeking to be better, but in our own eyes and on our own terms. So, is our excessive phone use a symptom of a need to have ourselves validated by others, rather than seeking to value ourselves?

Connectedness

I think it is important to acknowledge that we are still animals in some sense, albeit very intelligent ones, and we still have so much in common with the apes we came from back in the mists of time. As animals we need that connectedness, that social interaction of the herd or troop, and again in steps social media and our phones, with connectedness on steroids. Suddenly I am connected to friends, family and many more people, those with similar views and interests, and this connection is constantly updating. The issue here, as I have posted in the past, is that although this online connectedness appeals to our inner needs, it doesn’t truly address them, so we find ourselves retreating from face-to-face, proper connectedness, which would fulfil our needs, in favour of easier but shallow technology-enabled connection. We maybe therefore need to spend less time on digital connectedness and more time on actual connectedness.

Fear of missing out

I have already mentioned how our digital world is constantly updated and always on, and this in itself breeds an issue: we develop a fear of missing out (FOMO). We are worried about missing out on important information, or the latest viral craze, so we find ourselves constantly checking our devices for updates. We might even become worried that something is wrong when we haven’t received an update or our phone hasn’t buzzed for a period of time. We build the habit of constantly checking our devices and constant vigilance for the call of our device for attention, whether that be a buzz, a chime or a flashing screen. But maybe there is another way, and maybe we need to spend more of our time and focus on being in the moment and experiencing our current environment, the company we are in, and the discussion, rather than worrying so much about the online conversations we may or may not be missing.

Efficiency and always connected

The world is only getting busier as we constantly seek to add more tasks and to get better. If you were to look back on the last six months and list the extra things you are now doing, I suspect we all would have at least a few items; however, if I was then to ask you to list the things you have stopped or been asked to stop doing, I suspect a shorter list, or maybe a blank list, would result. “If we do X this will make Y better” sounds logical, whereas “if we DON’T do X this will make Y better” doesn’t sit as comfortably with us. And so we create this illusion of the need to be hyper-efficient, always on, always moving, and our devices are happy to play to this. They facilitate us being connected, collaborating, communicating, anywhere, anytime. But is this truly what life is about, to get as much done as possible and be constantly focused, or is there value in disconnection, quiet contemplation and meditation?

Commercial interests vs. the user

In writing this post I couldn’t miss raising the issue of the device manufacturers and the platform developers. They are commercial entities with shareholders. They want profit, and profit comes from keeping users buying their products and their services, keeping them using devices and staring at screens. They want you alerted, and they are pushing further and further into our existence. Most of our discussion on devices focuses on phones, for example, yet how many of us now have wearables, such that the notification is unavoidable, strapped securely to our wrists or, in future, in the glasses we need to wear to see? So these companies don’t have our best interests in mind, and their approach to dealing with people’s concerns is to provide controls and data for the individual to use to manage their own usage. But humans aren’t particularly good at doing what is best for themselves as individuals; just consider alcohol, smoking and, more recently, vaping. And when faced with a societal push to stay connected, FOMO and much more, the companies must know that putting the control in the hands of individuals will see little progress, although it will allow them to say they did what they could while still reporting positive usage data back to their shareholders. I think this is where society has to play a part, rather than relying on either the profit-focused companies or the ill-equipped individual to solve the problem.

Conclusion

I suspect I could write much more on this topic and as I write this I can see so many opportunities for further research.    Rather than seeking to ban, which I am against, or manage, which I am much more supportive of especially in schools, do we need to ask the question of why we are all so quick to reach for our phones and digital devices?   If we consider our usage a problem, then surely we need to get to the why, the cause, as opposed to seeking to address the symptom which is the eventual usage.   Maybe even discussing this with our students will help?

My sense is that a large part of the issue is the values which society currently applies to us. It isn’t enough to just be me: I have to attain status, I need to be hyper-connected, I need to work stupidly hard and efficiently, and I need to show other people all of this, and our devices deliver on these needs, or at least appear to. As long as we continue to address this at an individual level, which tends not to work, we fail to get at the bigger problem. But how do we bring about societal change? One step at a time? One blog post at a time, maybe?

Some DigCit resources

Following on from my blog in relation to internet safety day I thought I would share some of the actual presentations I have used recently with students when discussing differing parts of internet safety. 

Now all the presentations are on the short side as they are designed to provoke thought and further follow up discussions with each presentation designed around a 5 to 10 minute assembly.   

I hope the presentations are useful or at least provide some ideas.   I am also open to any thoughts or ideas for other topics or areas which should be included in future presentations.

This session revolves around a tweet from a parody HRH Prince William account which was picked up by some UK radio broadcasters as fact, despite there being no evidence to support the figures quoted. The session also looks at the possible impact of generative AI in relation to fake images or video.

This session is very much about asking the students if they feel comfortable with their technology use and then discussing ways that a balance might be achieved. It is also important to discuss how “screen time” is an overly simplistic measure and that all screen time is not equal.

This session focuses on binary arguments and how two opposite viewpoints can actually both be true or both be false. Some discussion of why people might seek to exploit binary arguments, social media algorithms and echo chambers is also included.

This session focuses on some examples of social engineering and how human habits can be used against us by malicious individuals. The key message is the increasing “sophistication” of attacks and therefore the need to be more vigilant and careful.

This session looks at data breaches from sites and how this data is leaked online. It may be worth getting the students to use HaveIBeenPwned, if possible, to see how many students already have data leaked about them online. The key closing point is that as we do more online we need to be aware of the resulting increasing risk.
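As an aside for the more technically minded students, the HaveIBeenPwned Pwned Passwords service can also be queried safely because it uses k-anonymity: only the first five characters of the password’s SHA-1 hash ever leave your machine, and the matching is finished locally. A minimal Python sketch (the URL is the public Pwned Passwords range endpoint; error handling is omitted for brevity):

```python
import hashlib
from urllib.request import urlopen

def hash_parts(password: str) -> tuple[str, str]:
    """Split the SHA-1 hash of a password into the 5-character prefix
    sent to the API and the suffix which never leaves your machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def is_pwned(password: str) -> bool:
    """Query the Pwned Passwords range API; only the prefix is
    transmitted, and the suffix is checked locally."""
    prefix, suffix = hash_parts(password)
    with urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "SUFFIX:COUNT" for hashes sharing the prefix.
    return any(line.split(":")[0] == suffix for line in body.splitlines())

# "password" famously hashes to 5BAA61E4...; only "5BAA6" would be sent.
print(hash_parts("password")[0])
```

Demonstrating this in class can reassure students that checking a password against breach data does not itself leak the password.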

The key feature of this session is the predictability of human choices in relation to passwords. You may wish to use the Michael McIntyre cyber video here, or simply ask students where the capital letter, number and symbol in their passwords might be.
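If students doubt how predictable these choices are, a tiny sketch can make the point. The pattern below is purely illustrative, a hypothetical model of the “capital first, digits then a symbol at the end” habit rather than real password research:

```python
import re

# Illustrative pattern only: a capitalised word, one to four digits,
# then an optional trailing symbol - the classic "predictable" shape.
PREDICTABLE = re.compile(r"^[A-Z][a-z]+\d{1,4}[!?*#]?$")

def looks_predictable(password: str) -> bool:
    """True if the password matches the stereotypical human pattern."""
    return bool(PREDICTABLE.match(password))

for sample in ["Liverpool1!", "Summer2024", "x9$Kq!vT2p"]:
    print(sample, looks_predictable(sample))
```

Attackers build their cracking rules around exactly this sort of pattern, which is why a password that satisfies “one capital, one number, one symbol” can still be weak.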

This includes reference to an OSINT tool which allows you to identify the date and time of a photo based on the position of shadows within the photo. This illustrates how even simple things might give away information about us.

It also contains a “pick a number” exercise to illustrate how we can be easily influenced. As the presenter, you would stress the trackers slide and “14” to see if you can then encourage students to select 14 later in the presentation. If we can be that easily influenced, then what might social media and other individuals be able to do with much, much more data?

This session looks at public good vs. individual privacy and how these two issues may be at opposite ends of a continuum. The key is to show how we need to find a balance between these two extremes.

Safer Internet Day 2024

I thought I would put a post together to coincide with Safer Internet Day, the 6th Feb 2024. Safer Internet Day represents an opportunity to stop and recognise the importance of online safety; however, it is also important to recognise that our understanding of digital risks isn’t confined to a single day but is something we should be constantly considering.

I will be honest and say that I generally feel we do not do enough in schools in relation to digital citizenship, the broader concept which encompasses online safety. Yes, schools have Safer Internet Day, they have content in their PSHE education programme plus in their KS1, 2 and 3 Computer Science programmes, and there is more for those students choosing to study computing or IT subjects at A-Level or in vocational qualifications, but it is limited content, and this is against a backdrop of increasing use of digital tools and increasing sharing of data. We believe basic maths and basic literacy are requirements for all; I believe basic digital citizenship should also be a requirement and a subject in itself.

So, if it was a subject what would the topics be?

I already try to deliver sessions for students throughout the year in relation to a number of digital citizenship topics, including:

Fake News

I think this is a very important subject given the ease with which fake images, and even fake audio and video, can now be created through the use of generative AI. Recent cases with fake Taylor Swift videos and fake Joe Biden audio are a case in point. How might we tell the fake from the real? But also, what about those individuals who say or do something inappropriate, only to claim they didn’t and that the footage or audio is fake? How do we establish truth in a world where we can no longer believe what we see or hear?

Big Data

We are constantly giving away data, and more than we realise. And it isn’t just about the data we give away, but also the data which might be inferred from what we give away. Consider where you live, the car you drive and where you shop, for example; how might this information help to infer something about your wealth or earnings? What does your weekly shop say about you and your family? And remember, the inference doesn’t always need to be right; it just needs to be right more often than it is wrong to have value. Then there are the organisations willing to pay for your data or to sell your data on. Might we get to a point where, through data, some companies know more about us than we know about ourselves, and at that point, what is the potential for us to be influenced or even controlled?
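That “right more often than wrong” point can be sketched as a simple expected-value calculation. All the numbers below are hypothetical, invented purely to illustrate why even a modestly accurate inference has commercial value:

```python
def profit_per_ad(accuracy: float, cost: float = 0.01,
                  conversion: float = 0.10, payoff: float = 0.20) -> float:
    """Expected profit per advert shown, assuming (hypothetically) that
    an ad costs 1p, a conversion earns 20p, and only correctly targeted
    users convert, at a 10% rate."""
    return accuracy * conversion * payoff - cost

# Break-even sits at 50% accuracy; anything better than a coin flip pays.
for acc in (0.4, 0.5, 0.6, 0.8):
    print(f"accuracy {acc:.0%}: {profit_per_ad(acc) * 100:+.2f}p per ad")
```

Under these toy numbers, an inference that is wrong 40% of the time still turns a profit, which is exactly why imperfect data about us is worth buying and selling at scale.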

Binary arguments and echo chambers

The medium used to communicate has an impact on the message, and this is all the more apparent on social media, where things go viral, whether in agreement or in disagreement, so very quickly. The medium shapes our views through its algorithms, connecting stories with those likely to engage, either in agreement or in disagreement, thereby enhancing divides and encouraging most discussions to descend into binary arguments. As you engage with social media, it will try to feed you the information you want to hear, which tends to reinforce the views you already have rather than providing alternative viewpoints. So, in consuming information and news from social media, we need to be conscious of how social media works and therefore how it might shape the news it presents and, eventually, our viewpoints.

Balance – Public Good and Personal Privacy

Balance as a concept is something I believe in strongly. For every advantage there is a corresponding risk or drawback. And in so many decisions we operate on a continuum rather than with polar opposites. Take public good vs. personal privacy, for example. We want to be safe, so we expect the police and intelligence services to monitor in search of terrorists and other threats. Yet we also want our individual privacy, to be free from monitoring. Can we have both? The answer is no; we need to find some balance between a “reasonable” level of surveillance and monitoring and a “reasonable” level of individual privacy. Take encryption: the challenge here is that weak encryption is weak for all, so monitoring anyone is difficult without putting everyone at risk. Now, there are solutions here, such as monitoring at the device level, where encrypted communications need to be decrypted for display; however, this is difficult as it requires access to the device. We basically have an imperfect situation, and sometimes in this complex world we need to live with imperfect.

Cyber Security

As we use more digital tools, share more data and generally use technology more, we need to be increasingly conscious of cyber risks and how to remain secure. This applies to the accounts we use, the data we share and the use of MFA, but also to the devices we own, including updating devices such as laptops and phones, and the growing number of IoT devices we have, such as smart plugs and voice assistants. We need to give some consideration to cyber security in every purchase and in each system or service we seek to use. It may even be necessary to accept that every piece of technology used represents increased risk, so the question then becomes: is the gain from using the service sufficient to outweigh the risk?

Addiction and Being Human

How many times have you seen a major event, such as a New Year’s fireworks display, with people all holding up their phones to film it, all experiencing the event through their smartphone screens? Or have you been on a train or in a restaurant and seen countless people staring at their phones? Is this the way we want to live, and does this change our experience of life? Yes, it might give us a nice video of the event which we can go back to in future, but how often do we actually do this, and if we didn’t record the event, would we spend more time interacting with those around us, resulting in something more memorable? What does being human look like in this technology-enabled, technology-curated and technology-filtered world?

Conclusion

The above are just some of the areas I discuss with students, and I note I don’t have the answers, as I spend a little too much time on digital devices myself, I share more data than I likely need to, etc. What I do hope to do, however, is build awareness and start a discussion, as this, I believe, is what matters. We need to be thinking about the challenges and risks and ensure our students, our young people, are aware of them and are making educated decisions.

I hope everyone has an enjoyable Safer Internet Day; stay safe online!

Digital Divides?

The BETT Show got me once again thinking about the digital divides, and I am very careful to use the plural here, as I believe there are many digital divides currently acting on our students. Now, I have been challenged in the past over the existence of a digital divide (note the singular here, which I think is important), with evidence of widespread access to devices being one of the key points of challenge. One piece of research, for example, suggested that as many as 98% of UK 16-17 year olds owned a smartphone. Based on this data, almost all children have access to both a device and the internet, suggesting ubiquitous access and no digital divide. However, although this may tick off the divides related to access to a device and to the internet, what about the other divides?

It’s not the device that matters!

When looking at school technology strategy, we have long recognised that a strategy which simply puts a particular device in staff and student hands doesn’t work. It’s not about having the device, although this is an important foundation; it’s about considering what it will be used for, how its use will be included in teaching and learning, what support is available in terms of technical support but also subject-related technology-use support, the overall culture of the school in relation to technology use, the confidence of teachers in using technology, etc. In terms of students and the digital divide, there are similar issues.

Have it, but don’t use it here

One obvious divide for students relates to school technology strategy. In some schools technology has a key part to play, so 1:1 devices might be available, class sets provided, or BYOD supported; generally, it is a case of technology being encouraged. Other schools may have far more limited technology and may ban the use of mobile devices. All of a sudden, our ubiquitous access to devices and the internet isn’t nearly so ubiquitous if students aren’t allowed to use their devices and no devices are provided while in school. Those students who are encouraged to use technology in school, across their lessons, benefit from lots of learning opportunities in relation to technology, while for those without, these opportunities don’t exist.

Supportive networks

For some students, use in school provides teaching and support in relation to technology and its use, through advice from teachers, support staff such as IT staff in schools, plus their peers who, like them, are using technology within the school. This support helps, and ongoing use also helps, as it allows students to build confidence in the use of technology, which then supports experimentation with new technology or new functionality within existing platforms. But this support isn’t uniformly available, with some students receiving far more than others. And the issue of support extends beyond the walls of the school to home, where some students will benefit from engaged parents willing to discuss technology use, its benefits and risks, while other students may be left to their own devices, which may devolve towards doomscrolling social media apps.

Digital Citizenship

And in some schools there will be robust discussion of social media apps and the broader issue of digital citizenship. Students will therefore be more aware of the risks and challenges associated with social media, including issues around big data, influence, bias and echo chambers, etc. This will be in addition to the meagre amount of discussion which may be supported in PSHE lessons or within the computing science curriculum, which might be all some students receive. Plus, where there is robust discussion, there is a greater chance for students to ask questions or seek support.

Maybe you need more than a phone

We also need to recognise that the smartphone isn’t always the best tool, and sometimes we need a bigger screen, a keyboard and a mouse. So, although ubiquitous access to a smartphone is a good start, it isn’t the solution. A study looking at device access for homeschooled students in the UK found that slightly more than half of students had to share a device with others in the household, for example. Again, we have some students who benefit from their own device, which they can personalise, use and build confidence with, and other students who do not have this benefit.

And then there’s the new tech; GenAI

So, I hope the above highlights some of the divides impacting students, and these are now further compounded by new technology such as GenAI. In some schools GenAI is being discussed and students are encouraged to learn about and use it; in other schools GenAI is out of bounds and banned, or students simply don't have access to the basic technology to explore it properly. Those learning about AI are likely to be more confident and familiar with the GenAI solutions they encounter as they leave school and either continue their studies or enter the world of work, whereas those deprived of the opportunity face a steeper learning curve.

Conclusion

For me there are definite digital divides, and I feel current developments around GenAI will only widen them. Access to a device and the internet, the ubiquitous smartphone, is a good start, but it is akin to giving devices to teachers with no professional development or support: they might get some use out of them, but never what is truly possible. And looking at students and the smartphone, I suspect what they get out of their devices will be a lot of YouTube and TikTok content rather than something more meaningful.

We very much need to seek to address the digital divides and for me the place we need to start is with the basic building blocks in terms of infrastructure and devices in schools.   Only once this is reasonably consistent across similar types of schools can we then move on to tackle other digital divides.

References

UK: children owning mobile phones by age 2023 | Statista

Over half of home-schooled children in the UK have only shared access to computers – Institute for Social and Economic Research (ISER) (essex.ac.uk)

Digital Citizenship

It's Digital Citizenship Week this week, so I thought I would share some thoughts. I have previously discussed and raised the need for more time in schools to discuss digital citizenship. Whether it is the increasing need to be aware of cyber risks, the increasing amount of data we now share online, or the increasing risk of our behaviours being influenced and manipulated by the tech tools we use, all of these need discussion. Schools and colleges are looking to prepare students for the uncertain but clearly digital futures they face, yet the focus remains on narrow coverage of "online safety" when the risks now extend well beyond the content being covered.

And all of this was before generative AI made its appearance and became so publicly available late in 2022. Suddenly fake news is much easier to produce, with generative AI tools that can quickly and convincingly modify video or audio content. Suddenly the phishing emails which were often laden with spelling errors or design issues can be fed through a generative AI solution so that the resulting output is convincing in its styling and free from grammar and spelling errors. In terms of influencing people through social media, generative AI allows content creation to be automated, with each piece of content being "unique" yet carrying the same influencing message, far quicker than was previously possible. We also have the issue that, as we all start to use more and more AI, we leak yet more data online, where generative tools are more powerful than ever at inferring yet further data. At an event I attended recently it was suggested that if you fed your prompts back into a generative AI solution and asked it to profile you, it would do a decent job of working out things like age, career and education, just based on the information you had already put into generative AI tools.

So maybe, post the free availability of ChatGPT and subsequently of so many other AI tools, or tools where generative AI such as ChatGPT is embedded, it becomes all the more important to discuss digital citizenship with our students. And maybe generative AI provides both the issue and the solution: if it, and the AI solutions yet to come, free educators from the more administrative and basic tasks of education, it may finally provide the time and resources to cover digital citizenship at a time when it may be all the more important.

The path of the world is towards increasingly digital lives, with the pace of digital technology advancement being quick. Regulation and governance are slow by comparison, leaving a void we need to fill. I don't have the answers for the future, and although I am positive about the potential of technology to aid, enhance and even redefine our lives, there is always a balance, and therefore risks and challenges. This is where digital citizenship in schools comes in: providing opportunities for the risks and challenges, both current and potential future ones, to be discussed and explored. We need to develop students who are aware of and question technology's implications, rather than students who blindly adopt technology without consideration for the future. I believe we have a long way to go, but every additional discussion, every assembly and every lesson referencing digital citizenship is a step in the right direction.

Image courtesy of Midjourney

KCSiE: Filtering and Monitoring

I was recently reviewing the new Keeping Children Safe in Education (KCSiE) update, including the main changes relating to filtering and monitoring. I noted the specific reference to the need to "regularly review their effectiveness" and also the reference to the DfE's Digital Standards in relation to filtering and monitoring, which mention that "Checks should be undertaken from both a safeguarding and IT perspective."

The safeguarding perspective

From a safeguarding point of view, I suspect the key consideration is whether filtering and monitoring, and the associated processes, keep students safe online. Are the relevant websites or categories blocked? Do relevant staff get alerts and reports which help identify unsafe online behaviours at an early stage, whether that is attempts to access blocked sites, or access to sites which are allowed but considered a risk or an indicator, and therefore specifically monitored and reported on?

From a safeguarding perspective it is very much about the processes and how we find out about students accessing content which may be of concern, or attempting to access blocked content. From there it is about what happens next, and whether the holistic process, from identification via filtering and monitoring, through reporting, to responding, is effective. Are our processes effective?

The IT perspective

From an IT perspective, in my view, it is simply a case of whether the filtering and monitoring works.   Now I note here that no filtering and monitoring solution is fool-proof, so I believe it is important to acknowledge that there are unknown risks including new technologies to bypass filtering, use of bring your own network (BYON), etc.    Who would have thought a year ago about the risk of AI solutions to create inappropriate content or to allow students to bypass filtering solutions?

Having acknowledged that no solution is perfect, we then get to testing if our solution works.  Now one tool I have used for this is the checking service from SWGfL which can be accessed here.   It checks against 4 basic areas to see if filtering is working as it should.    

I however wanted to go a little further. To do this I gathered a list of sites which I deemed appropriate for filtering, covering each of the various categories we had considered. I then put together a simple Python script which attempts to access each site in turn and outputs whether it was successful to a CSV file for review. The idea was that this script could be executed for different users and on different devices: on school classroom computers, on school mobile devices, for different student year groups, etc. The resultant responses, if they match our expectations of what should be allowed or blocked, allow us to evidence the checking of filtering from an IT perspective, plus identify where there might be any issues and seek to address them.

You can see the simple script below, where it tests for social media site access; you can simply add further URLs to the list to test them:


import requests

# URLs to test; add further URLs to this list as needed.
website_urls = [
    "https://www.facebook.com",
    "https://www.twitter.com",
    "https://www.linkedin.com",
]

with open("TestResults.csv", "w") as f:
    for url in website_urls:
        try:
            response = requests.head(url, allow_redirects=True, timeout=10)
            if response.status_code == 200:
                print(url + " - Accessible")
                f.write(url + ",Accessible\n")
            else:
                print(url + " - Site blocked!")
                f.write(url + ",Site blocked!\n")
        except requests.RequestException:
            # Connection refused or reset - typical of a filtered site
            print(url + " - Site blocked!")
            f.write(url + ",Site blocked!\n")


Now the above may need to be changed depending on how your filtering solution works. I did consider looking at the URL of our blocked page; however, as the above worked, I didn't have to. My approach focused on the return codes, but if you do need to work with an error page URL, I suspect this article may be of some help.
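If your filtering solution redirects blocked requests to a block page rather than refusing the connection, one alternative is to follow redirects and compare the final URL against the block page's address. A minimal sketch, assuming a hypothetical block-page URL (substitute whatever your own filter uses):

```python
# Hypothetical address of the filter's block page - replace with your own.
BLOCK_PAGE = "https://filter.example.school/blocked"

def is_blocked(final_url, block_page=BLOCK_PAGE):
    """Return True if the request ended up on the filter's block page."""
    return final_url.startswith(block_page)

# Usage sketch: fetch with redirects enabled, then inspect where we landed.
# response = requests.get(url, allow_redirects=True, timeout=10)
# if is_blocked(response.url):
#     f.write(url + ",Site blocked!\n")
```

The check is on the final URL after redirects (`response.url` in requests), not the one originally requested, since the filter's redirect is what signals the block.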

Conclusion

Before I used the script for the first time I made sure the DSL was aware; I didn't want to cause panic when a test student account appeared to be hitting lots of inappropriate content over a short period of time, and in sequential order. The script then provided me with an easy way to check that what I thought was blocked was being blocked as expected. As it turned out there were a few anomalies, some relating to settings changes and others to website changes and mis-categorisation. As such, the script proved more useful than I had initially expected, as I had assumed that things worked as I believed they did.
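Spotting those anomalies is easier if the script's output is compared against a list of expected outcomes for each user or device. A minimal sketch, assuming the CSV format produced above and a hypothetical expectations dictionary:

```python
import csv

# Hypothetical expectations: what each URL *should* return for this
# user/device. Build one dictionary per profile you test.
expected = {
    "https://www.facebook.com": "Site blocked!",
    "https://www.twitter.com": "Site blocked!",
    "https://www.linkedin.com": "Accessible",
}

def find_anomalies(results, expected):
    """Return (url, expected, actual) for every result that differs
    from what we expected for that URL."""
    return [(url, expected[url], actual)
            for url, actual in results
            if url in expected and actual != expected[url]]

# Usage sketch: load the TestResults.csv produced by the script above.
# with open("TestResults.csv") as f:
#     results = [tuple(row) for row in csv.reader(f)]
# for url, want, got in find_anomalies(results, expected):
#     print(url + ": expected " + want + ", got " + got)
```

Anything the function returns is worth investigating: it is either a filter mis-configuration, a site that has moved category, or an expectation that needs updating.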

The script could also be used to test monitoring, by hitting monitored websites and checking to see if the relevant alerts or reported log records are created.  

Hopefully the above is helpful in providing some additional evidence from an IT perspective as to whether filtering and monitoring works as it should.

Automation but at what cost?

While driving into work the other day there was heavy fog in places and I noticed cars driving without their lights on. It was at this point I realised my own headlights weren't on, so I quickly turned them on. I found myself wondering why I hadn't turned them on in the first place. The answer is that I had become a little complacent about the automatic lighting function in my car, which turns on my lights as and when needed, making life so much more convenient and less effort for me. This got me thinking about the drawbacks of automation, particularly software-based automation solutions such as Power Automate. I suspect the use of automation in schools, especially given new AI advances, will become all the more common as we seek to address workload challenges, so what are the possible drawbacks or challenges?

Initial costs:

Implementing automation comes with a cost. This might be in terms of infrastructure, software licensing, or simply the time taken to design and implement a solution. There are also costs in creating appropriate documentation, which is important but often overlooked. We need to be aware of these costs, although generally they are outweighed by the longer-term gain.

Maintenance:

There may also be costs associated with maintaining an automation solution. Maintenance relies on appropriate documentation and on staff with the skills to understand and maintain the solution. This can be difficult: a solution may run for a significant period, possibly years, before any issues arise, by which point the staff involved may no longer easily recollect how it was designed, making the issues all the more difficult to resolve.

Lack of flexibility:

Automated systems are designed to perform specific tasks and may not be easily adaptable to changes, such as changes in school processes or required outcomes. There is also a challenge that, as we seek to improve, we may increasingly complicate the automation solution, which in turn makes it more fragile, more likely to fail and more difficult to maintain. Complexity is a current concern of mine: although it allows us to achieve more or be more efficient, it often leads to more moving parts and more complex processes, and therefore more opportunities for things to go wrong. Continually increasing complexity must at some point reach a threshold at which it becomes unsustainable, and my view is that we need to consider this in advance of that point, identifying and focusing on what matters and attempting to simplify things.

Dependence on technology:

This is the issue which started this post: my complacency about, and dependency on, the automation that turns my headlights on. Due to the availability of automation I had stopped checking my lights. In schools we may implement automation solutions to notify staff of specific events, etc., but we need to be careful that we don't become reliant on this automation. In the event that an issue occurs and automation fails, we still need to operate. We also need to be able to identify when the automation has failed, and this might not be as easy as it was with the lights on my car. For me this is one of the main issues: establishing a balance between the improved ease and convenience of automation, and the increased reliance on it, and therefore reduced ability to work without it.

Decreased human interaction:

Automation can lead to a decrease in human interaction and communication, which can have negative effects on workplace culture and employee morale. Although there may be efficiency benefits, automation will likely reduce the need to talk to, email or otherwise communicate with others, and we are social animals who rely on social interaction. At a time when wellbeing is such a key factor in education, and where interaction with other human beings is a critical part of that wellbeing, we need to consider the balance here carefully.

Conclusion

While automation can bring many benefits, such as increased convenience, ease, efficiency, reduced long-term costs and improved quality control, there are also potential drawbacks. We therefore need to be careful of the balance between the positives and the potential drawbacks. As my car journey proved, automation isn't without its issues, with over-reliance being only one of them.

End to end encryption: Ensuring privacy or increasing the risk of harm?

There have been recent calls, on safeguarding grounds, for Meta to refrain from adding end-to-end encryption to the messaging functionality in some of their apps. It makes initial sense to consider the potential risk of harm to children and others through harmful online content or contact. How can agencies, schools and individuals protect people, including the young, from harmful content or contact when they are unable to see the content due to encryption? How can criminals be prosecuted when key evidence is inaccessible because it is encrypted? The challenge, however, is establishing the possible implications of weakening or removing encryption: as with most things there is a balance, and improvements in monitoring and detection through removed or weakened encryption will bring other, less positive, counter-implications. I also note that sticking with the current level of encryption, while technology moves on and criminal skills and approaches continue to develop, likely equates to a weakening over time; we can either continue to strengthen our approach or, by reducing or doing nothing, choose to effectively weaken encryption. So, what are the general implications should we choose to reduce or remove encryption rather than seeking to strengthen it?

Increased vulnerability to cyber attacks

Encryption is a key tool used to protect data and information from unauthorized access. Weakening or removing encryption makes it easier for cybercriminals to break into systems and gain access to sensitive information which in turn puts individuals, including children, more at risk.  At a time when individual privacy is such a hot topic anything which may reduce or put at risk this privacy is of concern.

Increased surveillance

Weakening encryption can also make it easier for governments and other organisations to monitor online activities and communications. It may be that this monitoring is done in our interests, in the interests of safeguarding for example, but there is the potential for the data or monitoring solutions to be misused. They could be used for invasive surveillance, to identify individuals based on their beliefs or politics, or to challenge or silence views counter to a government or its intelligence agencies. The data gathered may allow other data to be inferred, violating individual privacy and freedom of speech. Or systems used correctly and ethically may suffer data breaches, with the data or systems then misused for criminal or unethical purposes. Increased surveillance capability through weakened encryption has significant potential as a risk to individual privacy.

Loss of trust

Weakening encryption can erode public trust in online communication and commerce. This can leave users less likely to trust the digital systems we increasingly rely on in our day-to-day lives. The potential impact, should we no longer be able to trust our online communication and collaboration platforms, our online banking, our online shopping, etc., would be very significant indeed. It may also lead individuals towards the darker recesses of the internet, to systems perceived as more secure and outside government monitoring or surveillance, but where other implications or risks may exist.

Negative impact on businesses

Related to the above, weakening encryption could also have a negative impact on businesses that rely on secure online communication and transactions, including e-commerce sites, financial institutions and healthcare providers. If encryption is weakened or removed, users of online services are more at risk, and so are the services themselves. Individual users may lose data and become subject to fraud or other cyber crimes, while the breached organisation suffers reputational damage, legal claims for compensation and the overall cost of recovery following a cyber incident. Basically, no one wins, other than the cyber criminals.

Conclusion

The issue here is one of balance between individual privacy and protecting individuals from harm online. Providing privacy also protects those who may cause harm, making harm more likely; but providing protection against online harm weakens an individual's privacy even where their motivations and actions are honest and good. Sadly, we cannot provide privacy online for some but not for others. Either privacy and security are built into systems or they are not, as we have no way of identifying those who may or may not cause harm.

There is also an issue of pragmatism. If we reduce the privacy level of some services, by not enabling end-to-end encryption for example, then users, and particularly those seeking to do harm, will simply move to services which provide more security and do offer end-to-end encryption. I have seen it myself: the unknown user who DMs an individual on a major social media platform before, after a short series of messages, suggesting a move to an alternative "better" platform, knowing it is better suited to protecting their privacy as they go about their likely malicious aims.

Overall, there is no perfect answer here. I think technical security and privacy are key to the digital world we live in, but we also need to keep individuals safe online. Sadly, these two requirements sit largely at opposite ends of a continuum. I suspect a reduction in technical security would have wider implications for the world than increased security, although I note it isn't a zero-sum game. Personally, I think we need to err towards greater encryption while seeking to mitigate the safeguarding risk as much as reasonably possible through increased discussion, training and education regarding safety and risk online. Not a perfect answer, I know, but as I said, there is no perfect answer, and anyway, we don't live in a perfect world.