Half term and wellbeing

So the second half of term begins, and I spent my half term finding some time for myself, including a bit of a holiday with some excellent company, but with a little bit of an intrusion from work.  I went through some traumatic travel situations, as I often do, experienced some poolside trauma, and had a difficult family issue arise.  All this in one half term.  So it got me thinking about wellbeing, which made me put fingers to the old keyboard and put this post together as I started the journey on the second of my four trains for the day.

Wellbeing?

One of my first thoughts on wellbeing is simply the complexity of it.  It's a simple word, and it's easy to say your wellbeing is OK or not, or for wellbeing to be put on the agenda for weekly meetings, but what is wellbeing?  Now I am not going to say I have done any real research on this, but for me there are a number of aspects, including our physical fitness, our mental wellbeing and a spiritual element, as well as elements related to stress (both positive and negative), agency, purpose, family and much more.  It's a bit of a complex soup of things, yet I feel that often when organisations look at wellbeing they look for simple solutions where none really exist.

Work and wellbeing

I think this area is particularly complex.  As mentioned in the intro, work intruded a little on my holiday abroad, through email notifications I saw on my phone which led me to feel the need to act and respond.  I also had an emotional response to the message, which had an impact, as up to that point I had been getting quite relaxed and very much into holiday mode.  Now it is important to note that there was no explicit need in the emails for my response, but I felt the need to respond.  I had the agency to respond or not to respond, and I had the agency to disable notifications had I wished.  There was a lot in my control.  But equally I feel there is an increasing narrative around the need to be efficient and to be effective, and therefore, having identified an issue via a notification, I felt a partially intrinsic but partially extrinsic need to act, independent of being on holiday or not.  Technology facilitates many benefits, but putting these notifications in front of me may be the flipside, the negative side, of this.

Stress

The word stress often brings with it a negative image.  With the M5 closed, it being my main route to the airport, and with all the surrounding roads clogging up as everyone else, like me, sought alternatives, it certainly wasn't positive stress I was experiencing, and it wasn't how I anticipated my holiday beginning.  But the November ahead of me, and the number of events which I will be contributing to, equally represents stress, yet I see it more as a positive challenge, challenging me to better prioritise, to network, to use the resources at my disposal and to grow professionally.  As I write this it gets me thinking of "desirable difficulty".  It therefore worries me that we seek convenience, ease and the status quo all too often.  There is negative and avoidable stress that we should rightly seek to avoid.  But equally there is a lot of stress which we should seek to embrace.

From my holiday, one such stress was my irrational fear of bodies of water, where my gondola ride in Feb caused some distress, and where this time it was a pool by the apartment.  Like with the gondola, I once again embraced the stress in the hope of growing, this time entering the pool, albeit at the shallow end.  That's the first time I've been in a pool in over 20 years, and I hope it is another step in trying to grow personally and get comfortable with that which makes me uncomfortable, and which stresses me out.  I think this also links back to my previous comment on agency, in that I can control this stress and how I engage with it.  I feel that much of the stress in our lives we can either control, or at least control how we respond to, and that is the key: avoiding the emotional hijacking which often arises from stress.  Maybe if I had avoided such a hijacking I would not have responded to the email notifications, and might have left them for others to pick up, or for me to pick up on my return.  Who knows how that might have turned out?

Wellbeing initiatives

I have a particular view on wellbeing initiatives, in that I feel they are largely ineffective.  They often represent activities which can be accessed and sometimes, unfortunately, which are made compulsory.  These simplistic measures don't do much to address the complexity of individual wellbeing.  I will however note that, taken at a macro level, they may have a positive average impact across a wider staff body, but I write this from the point of view of an individual.  The initiatives I have seen so far fail to deal with the complexity of wellbeing.  For myself, at the moment, the family issue which has recently arisen is definitely not going to be addressed by any activity my school can put on or arrange.  It can however be addressed by a strong, open and warm organisational culture, complete with appropriate line management structures.  It does make me think that we should spend less time on wellbeing discussions and initiatives and more time going back to the basics of leadership and management: making sure staff feel supported, positively challenged and engaged, feel they have agency, etc.  If there is one other thing that I think schools and colleges need to do, it is to provide the time to stop and reflect, as this long train ride is providing me on this Sunday afternoon.

Conclusion

Wellbeing isn't simple.  It's a bit of a soup of factors.  As I sit at the second-last station of this leg of my trip, I wonder whether I would benefit from a bit of a force-field analysis of the internal and external factors which impact my wellbeing, and of what I or others can do to support it.  Might that help to unpick things?  I also wonder whether thinking about wellbeing, when discussions of wellbeing and stress are so often framed negatively, may bias me towards a more negative view of my own personal wellbeing.  I am not sure, although if there is one thing I am sure of, it is that such an analysis would take some time, and if there is one thing that would positively impact my wellbeing, it is having the time to stop and reflect.

Now how do we build that into the school programme and how do we support all to stop and reflect?  

And also how do we manage the narrative around wellbeing to reduce the largely negative framing which I feel currently exists?

Technology: Balancing Benefits with Risks

In our modern era, technology permeates every aspect of our lives, transforming how we work, communicate, and live. The advent of the internet, smartphones, artificial intelligence, and other technological innovations has brought unprecedented convenience and immediacy, significantly improving efficiency in countless areas. However, this rapid advancement is not without its downsides. As we become increasingly reliant on technology, we must grapple with the risks and challenges that arise, including cybercrime, data protection concerns, and the detrimental effects on our ability to focus.    So how do we find an appropriate balance?

The Benefits: Immediacy and Convenience

One of the most significant advantages of modern technology is the immediacy it affords. The ability to access information instantly, communicate across vast distances in real time, and perform tasks that once took days or weeks in a matter of seconds has revolutionised the way we live and work.   This immediacy extends beyond communication to other areas, such as online shopping, where you can order products with just a few clicks, expecting next day or even same day delivery, or the healthcare sector, where telemedicine enables patients to consult with doctors without needing to visit a clinic in person.

Convenience is another major benefit of technology. The rise of smart devices and automation has simplified tasks that used to require considerable effort. For instance, smart home systems can control lighting, temperature, and security, while virtual assistants like Siri and Alexa can perform tasks such as scheduling appointments, sending messages, or even ordering groceries. In the workplace, technology streamlines operations, with software automating repetitive tasks, allowing employees to focus on more complex and creative aspects of their jobs.  Meanwhile in schools, AI can help students and teachers create, refine or assess materials, or can help with translation, simplification and other processes which support or even enhance learning experiences.

This immediacy and convenience should improve quality of life, offering more time for leisure and reducing the stress associated with many day-to-day tasks.  However, my sense is that they often just allow more to be expected of us, reinforcing the "do more" and efficiency cultures which I feel exist.

The Risks: Cybercrime, Data Protection, and Cognitive Impact

The advantages of immediacy and convenience come with significant risks. One of the most pressing concerns is the rise of cybercrime. As more sensitive information is stored and transmitted digitally, individuals, businesses, and governments are increasingly vulnerable to hacking, data breaches, and other forms of cyberattacks. Cybercriminals exploit weaknesses in software and networks to steal personal data, financial information, or intellectual property. The consequences of these breaches can be devastating, leading to identity theft, financial loss, and reputational damage. You don't need to look too hard at the current news to find an organisation which has suffered a cyber incident.

In tandem with cybercrime is the issue of data protection and privacy. In the digital age, vast amounts of personal data are collected by companies, governments, and online platforms, often without individuals being fully aware of how their information is being used. This has raised significant concerns about privacy, with many questioning whether individuals have enough control over their personal data. The rise of surveillance capitalism—where companies monetize personal data to drive targeted advertising—has sparked debates about ethical boundaries and the need for stricter regulations. High-profile scandals, such as the Cambridge Analytica case, where millions of Facebook users’ data was harvested without consent for political purposes, have highlighted the potential for misuse and the lack of transparency in data collection practices.

Beyond the security and privacy risks, the very immediacy and convenience that make technology so appealing can also have negative cognitive effects. The constant stream of notifications, emails, and messages can fragment our attention and make it difficult to focus on tasks that require sustained concentration. Research has shown that multitasking with technology can reduce productivity and impair cognitive function. This “always-on” culture, fuelled by smartphones and social media, can lead to stress, anxiety, and burnout, as individuals struggle to disconnect from the digital world.

Moreover, the overreliance on technology can erode essential cognitive skills, such as problem-solving, critical thinking, and memory. With information just a click away, individuals may become less inclined to engage in deep thinking or retain knowledge. The rise of artificial intelligence and machine learning also raises concerns about the future of human skills and the potential for automation to replace jobs, leading to economic inequality and social disruption.

Striking a Balance

Given the immense benefits and equally significant risks, it is crucial to strike a balance between embracing technology and mitigating its drawbacks. On the one hand, the conveniences of immediacy and efficiency are undeniable and have improved many aspects of modern life. However, these advancements should not come at the expense of privacy, security, or cognitive well-being.

One way to maintain this balance is through stronger regulations and policies that protect individuals' privacy and data. Governments and organisations must implement robust cybersecurity measures and transparent data collection practices to safeguard against cybercrime and misuse of personal information. Additionally, educating the public about digital literacy and security can empower individuals to protect themselves online.

At an individual level, it is also essential to cultivate mindful technology use. Setting boundaries around screen time, practising digital detoxes, and focusing on single-tasking rather than multitasking can help mitigate the cognitive impacts of constant connectivity. Encouraging critical thinking and problem-solving in education and the workplace can also help individuals develop skills that are less susceptible to automation.

Conclusion

Technology exists in a delicate balance between its undeniable benefits and the risks it poses. Immediacy and convenience have transformed society, making life easier and more efficient in many ways. However, these benefits come with the trade-offs of increased cybercrime, data protection concerns, and cognitive challenges. As we continue to innovate, it is vital to remain vigilant about the potential risks and take steps to mitigate them, ensuring that technology enhances rather than undermines our well-being.  I also wonder whether the drive for efficiency and immediacy is reducing the time for us to be human and to interact with other humans directly and in-person, as we have since the dawn of mankind, but that’s a whole other post!

InTec IT innovation in education

This week saw me taking a trip to Mercedes World to speak at the InTec IT Innovation in Education event in relation to esports, and also to host a little esports round table.  Now, as usual, my travels weren't without their issues, which started from the outset with the car park at the station being full, so no spaces, and which were promptly followed by a delayed train, meaning I missed my connection.  I do sometimes wonder why I continue getting the train; however, I suspect that if I drove instead there would just be significant traffic jams, plus I wouldn't be able to work or have a beer in the process of travelling.  As it was, the already long journey took just over 5 hours to complete.

So, to the event itself.  The first topic covered was AI in education, and in particular Microsoft's Copilot.  Now, this session focussed on the paid version of Copilot, where it exists in Word, PowerPoint, Outlook, etc., rather than the free version.  The capabilities are impressive, as was evidenced by the demo video which was worked through; however, two challenges currently exist in schools.  One is cost: at around £25 per user per month, the scalability of Copilot in its paid form across whole-school staff bodies is rather limited.  That said, it could be issued to key users.  The other issue is that of data protection and data security, in relation to how Copilot may surface data which it shouldn't, where permissions and labelling of data have historically been poor.  Now, an example I used here, and experienced recently, albeit not actually involving Copilot, involved a poorly configured MS Team with data pertaining to a trip.  Permissions made the Team available to all within the organisation, including students.  In the past this wouldn't have been a problem, as students would either need to find the link or get very lucky in stumbling across the Team; however, in this case the AI in Office 365, which tries to predict what might be useful, surfaced some files from this Team after a number of staff accessed said files.  Office 365 was just presenting "this file might be of interest", however it surfaced information which wasn't meant to be available to students.  In a world of Copilot this is likely to happen all the more often, presenting significant potential risk.
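As a rough illustration of how a school might start auditing this kind of over-exposure, here is a minimal sketch using the Microsoft Graph API to list the Microsoft 365 groups (which sit behind Teams) whose visibility is set to Public, i.e. open to the whole organisation.  This is just one way in, assuming you already hold a Graph access token with Group.Read.All permission; it is not the approach discussed at the event.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "..."  # assumption: a Graph access token with Group.Read.All

def public_groups():
    """Yield the names of Microsoft 365 groups visible to the whole organisation."""
    url = f"{GRAPH}/groups?$select=id,displayName,visibility"
    headers = {"Authorization": f"Bearer {TOKEN}"}
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        payload = resp.json()
        for group in payload.get("value", []):
            if group.get("visibility") == "Public":
                yield group["displayName"]
        url = payload.get("@odata.nextLink")  # follow server-side paging

if __name__ == "__main__":
    for name in public_groups():
        print(f"Org-wide visible Team/group: {name}")
```

A regular sweep like this at least flags the Teams whose content Copilot, or the existing Office 365 suggestions, could legitimately surface to every student.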

Next up was a discussion on cyber security and safeguarding.  I liked the strong linking here between safeguarding, which is rightly viewed as critical, and cyber security, which is often given less consideration.  It may be that the best way forward for schools and cyber security is to view it as an aspect of safeguarding: keeping student and staff data safe and secure, and through this protecting them from potential harms.  And isn't protecting students from harm exactly what safeguarding is about?

During the lunch break I got my hands on a very nice sim racing rig and got to do a bit of racing.  To start with I didn't do too well, treating the pedals like an Xbox controller, with the brake and accelerator pedals having an up and a down position and nothing else.  Cue spinning off the course and missing corners.  I joked with one of the Mercedes staff that I was driving a lawnmower, given the amount of time I was spending on the grass.  Later I started to get a better feel for things, and for being more careful with my acceleration and braking, at which point I made gradual improvements, eventually getting my lap time down below 1 minute and coming 5th on the leaderboard.

After lunch there were sessions on infrastructure and IT planning.  I think the key messages were the importance of a modern infrastructure to support the increasing number and differing types of devices, including VR headsets and 3D printers among many other items, and also the need to plan, and to plan early.  This always makes me think of failing to plan as planning to fail; however, in this case it's not just about planning but about planning early, to allow time for those things we can't predict.

My session was largely on esports, talking about how easy it is for schools to get involved with esports, plus about the potential benefits in terms of soft skills development, but also in terms of the potential career pathways which esports, and the soft skills it helps develop, might open up for our students.  I still sense that esports continues to be adopted more by Further Education colleges than by schools, and I feel this continues to be a shame, as the benefits are not limited to 16+ year olds.

My session also had a second topic: the ISBA Technology Survey.  Now, I led on the development of the 2024 survey and the resultant report, picking up from the work of Alan Hodgin and Ian Philips, who developed the 2018 survey.  I continue to feel that technology changes so fast that no school, or staff in a single school, can effectively adapt alone, and therefore we need to seek collective solutions.  To that end, the ISBA Technology Survey is about gathering data and presenting baseline information to schools on how technology is being used across schools, to help with comparison and planning.

Conclusion

The event was very enjoyable and the Mercedes World venue was perfect, especially given the opportunity to get some sim racing done before presenting on esports.  It was also a great opportunity, like so many similar events, to network and share thoughts and ideas, including catching up with a few colleagues from other schools whom I haven't seen in person for a number of years.

AI does continue to be a common topic in education circles at the moment, and this event was no different; however, I am increasingly seeing discussions of esports too.  This is something I find very heartening and something which I hope continues.  It would be great to see more and more schools get involved in esports, helping students develop the soft skills which esports supports, plus introducing them to the many career paths which esports links to.

Tech vendors should do more?

There is a lot of discussion in relation to how tech vendors, and particularly big tech vendors, need to do better, whether in relation to data protection, online safety, addressing fake news or many other considerations.  A recent presentation by Laura Knight at FutureShots24, where she spoke of finite and infinite games, and of Simon Sinek's book "The Infinite Game", got me thinking about this again.

Tech vendors need to sort it

Firstly, it is important to acknowledge the benefits of technology.  The tools we have and use exist because they are useful, and the tech companies that continue to operate do so because we as users choose to use their solutions; but there are also challenges and drawbacks associated with most technologies.  It is pretty clear that tech vendors need to do more to address the various challenges and risks which come about as a result of their products.  They provide a tool, whether it be a productivity suite, a social media application or a generative AI tool, among many others, with many people using these tools appropriately and for good; however, there are also those who use these tools for ill, for criminal, unethical and immoral purposes.  Now, I have blogged on this before: how tools are neither good nor bad, but it is their use which is good or bad.  The challenge, however, is that through technology the resulting impact is magnified.  I have talked of a hammer as a tool, and how it could be used for assault, but unlike a hammer, a maliciously used social media tool can impact hundreds or thousands of people at once; the potential impact of the tools is much broader.  So, from this, it seems clear that tech vendors need to consider this negative impact and seek to mitigate the risk in the design of their platforms and through their processes.

The key here is that we are not really looking at these tools, but at their impact on wider society.  Society will continue, for good or for ill, long into the future.  It is an infinite game.  Long after I am worm food, society will continue; likely long after many of these tech platforms have been and gone (think MySpace, Friends Reunited and the like), society will continue.

And so we look to rules and to laws to provide us with frameworks and protections, where these rules and laws will exist long into the future, although they may evolve and be adjusted over time.  Sadly, though, these laws and rules are designed for the long, infinite game and are therefore slow to change, relying on established processes and methods not designed for the quick-changing technological world we find ourselves in.

With laws unable to keep up, we find ourselves complaining that the tech vendors need to do more, and this is likely the case; but the tech vendors know their time is limited, as they may be dispatched to the bin should the next viral app come along, so they don't want to expedite this by making a safer but less usable, less enjoyable, less attractive or less addictive platform.  We have a problem!

But the tech companies are important

The tech companies are driven by profit, as they are after all money-making companies with shareholders to answer to.  That said, many of the big tech companies do try to establish the moral and ethical principles by which they operate.  It is their drive for money which leads them to "move fast and break things", to innovate and disrupt as they seek to find the next big thing and the corresponding profits which come with it.  And we need this innovation.  If we left innovation to governments, their processes, laws and rules would make the process of innovation so much slower than it is in the hands of tech companies.  I suspect we would still be using 5¼" floppy discs at this point!

The tech companies play the finite game, knowing that in this game there will be winners and losers, so moving fast, disrupting and innovating is the only way to avoid being consigned to the technology bin of history; think the Polaroid camera, the MiniDisc, and the platforms I mentioned earlier.  So, if the choice is spending longer to create a safer platform but possibly being second to market, or getting it out quickly and being first but then having to address issues later on, closing the gate after the horse has bolted, it seems pretty clear which the tech companies will choose.  Being first means survival, while being second might spell doom.

Solution?

I am not really sure that there is a solution here, or at least a perfect or near-perfect one.  Things will go wrong, and when they do, we will be able to highlight what could or should have been done by tech vendors, governments or individuals to prevent the outcome.  But we have to remember we are dealing with technology tools operating at scale; take TikTok, for example, with its approximately 1 billion monthly users.  We haven't banned cars, yet car accidents continue to happen!

Tech companies will continue to focus on the finite game, on maximising profit for their shareholders and on remaining viable, while politicians will also play the finite game, focussing on policies and proclamations which are more likely to be positively received and to keep them in power, or help them to power.  But the world and society is an infinite game, where what we do now may impact how things are for future generations.

I think we need to be pragmatic, and I also think it's about partnership and working together.  If governments, tech vendors and user groups can work together to discuss the benefits, the concerns and the issues, maybe we can make some progress.  Maybe we can find the best "reasonable" options and the "good enough".  And I note that I feel some of this is already happening within some companies.  I suppose my one conclusion is simply that it isn't just for tech vendors to do more; it is for us all to do more: tech vendors, governments, schools, parents and adults more broadly, communities, and more.  And if we can do it, discuss and explore, find and test solutions together, then maybe we can start to address some of the challenges.

Who poisoned the AI?

One of the challenges in relation to Artificial Intelligence solutions is cyber risk, such as that presented through AI poisoning.  When I seek to explain poisoning, the example I often use is of an artist who sought to keep traffic away from a particular street.  To do this he simply purchased a number of cheap smartphones, put them in a little trolley and then walked this trolley slowly down the chosen street.  To Google Maps, the fact that a number of smartphones were progressing very slowly down a street was interpreted as a traffic jam or accident, and therefore Google Maps sought to redirect people away from the street.  Basically, the individual had poisoned the AI data model to bring about a generally unwanted outcome, at least from the point of view of Google Maps.
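To make the mechanism concrete, here is a toy sketch of how a naive congestion model could be poisoned by slow-moving devices.  The threshold, speeds and detection logic are invented purely for illustration and bear no relation to how Google Maps actually works.

```python
# Toy congestion detector: flag a street as jammed when the average
# speed reported by devices on it drops below a threshold.
JAM_THRESHOLD_KMH = 10.0

def is_jammed(speeds_kmh: list[float]) -> bool:
    return sum(speeds_kmh) / len(speeds_kmh) < JAM_THRESHOLD_KMH

# Genuine traffic: a handful of cars moving at normal urban speeds.
genuine = [32.0, 28.5, 35.1]
print(is_jammed(genuine))  # False -> street looks clear

# Poisoned input: 99 phones walked down the street at ~4 km/h swamp
# the genuine readings and flip the model's conclusion.
poisoned = genuine + [4.0] * 99
print(is_jammed(poisoned))  # True -> traffic gets rerouted away
```

The point of the sketch is simply that a model which trusts its raw input data can be steered by anyone able to supply enough of that data.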

Poisoning might take a number of forms: through the input data received by the AI, such as the position information from the phones; through the prompts made to a generative AI solution; or through the training data provided, including where that training data incorporates past prompts.  The key is that the AI solution is being manipulated towards an output that wouldn't normally be anticipated or wanted.  And there are also concerns, from a cyber security point of view, in relation to poisoning being used to get AI solutions to disclose data.

That said, I previously read an article on AI poisoning where the poisoning was presented as a solution to a problem rather than a risk.  In this case the problem is ownership and copyright of image content, where an AI vendor might scrape such image content from the internet, often without permission or payment to the creator, and use it to train the AI.  The concern from copyright owners and artists is that they are creating works of art, images, etc., but as generative AI solutions are fed this data, the AI solution either copies elements of their works, or could even be asked to create new works in their style.  And given that the creator receives no remuneration for the use of their works in training an AI, plus that the AI might lead them to receive less business, they are understandably concerned.  Roll in Nightshade, a solution for poisoning an image.  Basically, what the solution does is change individual pixels within an image, in a way that isn't perceptible to the human eye but will influence an AI solution.  The poisoned images therefore negatively impact the functionality of AI solutions which ingest them into their training data, while still being totally acceptable from a human's point of view.
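As a very rough sketch of the pixel-level idea, and emphatically not Nightshade's actual algorithm (Nightshade optimises its perturbations so an image reads to a model as a different concept, whereas this just adds random noise), here is what an imperceptible per-pixel change looks like in code:

```python
import numpy as np
from PIL import Image

def perturb(path_in: str, path_out: str, max_delta: int = 2, seed: int = 0) -> None:
    """Add a random per-pixel perturbation of at most +/- max_delta intensity
    levels (out of 255): far too small for a human to notice, yet it changes
    the exact pixel values a model would ingest during training."""
    rng = np.random.default_rng(seed)
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    noise = rng.integers(-max_delta, max_delta + 1, size=img.shape)
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(path_out)

# Hypothetical file names, for illustration only.
perturb("artwork.png", "artwork_poisoned.png")
```

This sketch only demonstrates imperceptibility; the cleverness of a tool like Nightshade lies in choosing the perturbation so that it actively misleads the model.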

The above highlights technology and AI as a tool: poisoning can be used for malicious purposes, but in this case it can be used positively, to protect the copyright of image creators.  The challenge, however, is that this technology for poisoning images will likely lead to AI solutions either capable of identifying and discarding poisoned images, or tolerant of them.  It will end up as a cat-and-mouse game of AI solution vendors versus copyright holders, much like the cat and mouse between tech vendors seeking to create generative AI solutions which produce near human-like content and the detection tools seeking to detect where AI tools have been used.  Another challenge might be the malicious use of poisoned images to disrupt AI solutions, such as feeding poisoned images into a facial recognition or image recognition solution in order to disrupt the operation of the system.

I also think it is worth stepping back and looking at us as humans, and how poisoning might work on human intelligence rather than artificial intelligence.  One look at social media, at propaganda and at the Cambridge Analytica scandal shows us that the poisoning of intelligences, such as human intelligence, isn't something new; I would suggest fake news is a type of intelligence poisoning, albeit possibly at a societal level.  Poisoning has been around for a while, and I am not sure we have a solution.  So maybe, rather than looking at how we deal with or positively use the poisoning of artificial intelligence, we need to go broader and consider the poisoning of intelligence in general, both human and artificial?

References

This new data poisoning tool lets artists fight back against generative AI, Melissa Heikkilä (2023), MIT Technology Review, Downloaded 07/11/2023

Berlin artist uses 99 phones to trick Google into traffic jam alert, Alex Hern (2020), The Guardian, Downloaded 07/11/2023

Who wants a child to fail?

FAIL: First Attempt In Learning.  This, for me, has always been a great concept: that we often learn the most when things go wrong.  However, I am increasingly conscious that the world we now live in is becoming more risk averse, meaning not only that fails are not seen as opportunities to learn, but also that we are actually reducing the number of opportunities for students to learn from difficulties, challenges and even failure.

But why would we want a child to fail?

I suppose this is the key question: why would anyone want a child to fail?  I think this almost highlights one of the key challenges, in that a fail is seen as a negative conclusion and something we don't want children to suffer.  But what if a fail isn't a conclusion but a step within a larger journey?  If our fails aren't terminal or final but are more a road bump along the way, a chance to re-channel efforts, to change paths or approaches, or to simply learn from error, maybe there isn't an issue with a child failing.

Desirable difficulty

So, if failing isn't negative, might it be positive?  The concept of desirable difficulty refers to the positive benefits of being challenged rather than finding things easy.  Surely something not working or not going as we intended, a fail, is definitely a challenge, and therefore could represent a desirable difficulty if an eventual positive outcome results.  From a fail we have the opportunity to review our practice and identify how we might change to overcome this road bump, and in doing so we learn, plus may also grow more resilient.  That clearly sounds like a desirable outcome, albeit I will acknowledge it may not be easy; but I suppose the term "desirable difficulty" already says this.

Risk aversion

The challenge with all this is that, I feel, as a society we are becoming more risk averse.  We look at GCSE pass rates and want more students to pass each year, with pass rates now in the high 90s percent.  So this meets our need for all students, or at least most, to achieve, but does it thereby rob students of opportunities to experience and learn from failure?  As teachers we add scaffolding, we differentiate, we provide additional support where needed, and much more, to help students succeed; but again, are we depriving students of the benefits which result when things go wrong?  In relation to AI in education, we worry about AI errors, about bias, etc., but I don't think we can get rid of these things.  Shouldn't we embrace the technologies, teach students to be critical, and accept that sometimes there will be a fail, but that students will then learn from it?

Monitoring and supervision

And looking more broadly, we now monitor our children more than ever before, wanting to know their every move and making sure they have a mobile phone on them so they can be easily contactable.  We take them to football games and to other events, often being the ones who arrange those events, where once upon a time kids sorted their own entertainment, returning only once the street lights came on.  I look at my own childhood and the experiences I had when out with friends, sometimes just playing football or having fun, and sometimes maybe up to things my parents may not have approved of.  But in all of this I learned from my experiences; I made mistakes, picked myself up and moved on, eventually better for it.

Compliance

And then there's compliance, and the world of health and safety among other areas.  We increasingly mandate things or require checks to be carried out, meaning activities we once did now take more time and effort due to the need to deal with compliance requirements.  As we add all this extra work and effort, the risk assessments, the checks and balances, it makes us less likely to try new things and to experiment.  The potential gains of a project, of a new technology for use in the classroom, or of many other things may not have changed, but the overhead in terms of checks and balances is greater than it used to be, so the perceived differential between the gain and the effort has narrowed.  This increases the likelihood that we will simply evaluate the technology, project or other activity, conclude that the benefit is not sufficient to outweigh the effort needed, and let the status quo remain.

Conclusion

I came across a quote recently: "life begins at the edge of your comfort zone".  The challenge, however, is that we increasingly don't want to allow students to experience the edge of their comfort zone, for fear of fails or discomfort.  So what kind of life, and what kind of learning, will result?

TEISS 2024, Resilience, Recovery and Response

I try to take myself out of the educational bubble at least once per year.  This has been a conscious decision for a number of years, as I realised the importance of diversity, and therefore the limitations of only looking at IT, cyber, data protection, etc. from the standpoint of people in similar educational contexts.  As such, the TEISS event is one of those events I try to attend to broaden my experiences and get the views and thoughts of those who exist beyond the educational context of schools and colleges.

This year's TEISS event (these events focus on cyber security and cyber resilience) had some predictable topics of discussion, obviously including Artificial Intelligence and also third-party or supply chain risks.  So what were my big takeaways from the event?

The cyber context

I am reasonably well aware of the cyber context and the risks which impact organisations in general, including schools; however, the TEISS event presented a couple of key facts which I think are interesting.  That there was a cyber attack every 29 seconds in 2023 says it all, with this only likely to grow once the 2024 figures have been calculated.  This highlights the need for all organisations, including all schools and colleges, to consider cyber risks and their defence and recovery methods.  There is no excuse for not having done so.

Behaviourism

A number of presenters, and a number of those I had conversations with during the course of the conference, highlighted the need to consider human behaviour as part of cyber thinking.  A cyber awareness programme isn't so much about the programme as about bringing about behavioural change; so although having an annual training session or other training programme might meet compliance requirements, does it bring about the behavioural change we seek, and how do we know that this is the case?  It is about encouraging people to report issues, and reinforcing such reports by making users aware of the impact where they do report concerns, such as a phishing email.  If we can reinforce this view of reporting having an impact, rather than it just being another thing staff are "asked" to do, then we might manage to build the cyber culture we want in our organisations.  In discussion, one event attendee raised a solution which would automatically remove a phishing email from mailboxes once it had been reported, and would then let the reporting user know of their positive impact.  This seems like a great tool, but apparently what had been a cheap tool was bought up by a bigger company and now forms part of the value-added tools bundled with a bigger, more expensive product which needs to be purchased.  For schools this brings us back to limited budgets, which mean that key tooling for cyber security continues to sit outside the budgets of those in education.

It's about people

The old Richard Branson quote about looking after your staff, as they will look after your customers, was raised, albeit with a cyber bent: look after your cyber security staff and they will look after your security, rather than focussing on the security itself.  I have to strongly agree with this, and also with the need to look after those staff involved in the IT side of cyber incident response.  Stress levels are high following the onset of an incident, and someone needs to make sure that those leading the technical response stop to eat, sleep and take time out.  One interesting discussion raised, however, was that while the CISO might do this for their team, who does it for the CISO?  If the board and senior leaders push for updates and for things to be "fixed", while the CISO supports the team of people doing the work, who looks after the CISO?  Now, I feel lucky in that I believe my team would be quick to question me and challenge me to take the necessary time if needed.  This goes to organisational culture and the licence to question at all levels.  I hope I never have cause to test this in a real incident, as we can only truly test these things in a real-life situation; desktop exercises are all well and good, but they pale when compared to the stress and challenges of a real incident.

Incomplete information, and it's inevitable

The inevitable nature of cyber risk is something I have talked about for some time.  You can do all you want in terms of your defences, but the defenders need to get it right all of the time, while the attackers need only get it right, or get lucky, once; so the probability lies with the attackers.  If we accept that defence can never be 100%, that attackers therefore always have a chance and will keep trying until an organisation ceases to exist, and that no organisation seeks to cease existing, then probability states with relative certainty that an incident will happen; we just don't know when.  And when it happens we will initially see only bits of the picture, with increasing amounts of the picture as to the impact of the incident, the ingress route, etc., appearing as time progresses, yet the expectation will be to communicate quickly about the incident.  In relation to comms, the key message seemed to be that the worst thing to do is to state something which is later proved to be untrue, which means it is all about saying little.  Another point which came across related to the cadence of information: although we may seek to say little, we should seek to be regular in our communications, even if this means saying that investigations are ongoing and that at this stage we know nothing more.

Cyber and AI… or not

Within a couple of presentations the issue of language was raised.  AI is the current buzzword, used both by vendors singing about their products and in discussions of threats and AI-based attacks; maybe AI has become a word which needs to be included in product pitches, in conferences, etc., and maybe this doesn't match the reality.  Another presenter raised how we use the term cyber: cyber bullying, cyber threats, cyber security, etc.  But isn't it just bullying, a threat, or security, albeit enabled by technology?  And does the use of the word cyber push us to think it's an IT issue, an issue for IT companies and vendors, rather than something which is the responsibility of the wider organisation, school or school community?  Maybe we need to reduce our use of the word cyber and treat technology-enabled attacks as a subset of existing issues rather than as something unique and distinct.

Conclusion

I enjoy stepping outside of the education bubble and hearing what cyber security looks like to those in the enterprise world, where they generally have far greater resources.  It is heartening to hear that they suffer from the same problems and reach the same answers, despite their significantly greater resources.  This continues to highlight for me that "not enough money" or "not enough staff" isn't the answer, and that we need to be pragmatic about cyber.  We could have infinite staff and budget and we would still face challenges.  It continues to be about doing what we reasonably can, and preparing for the worst.  It also continues to be about getting this message across to trustees and governors: that no matter what we do the risk will continue to exist, and also that most schools or colleges which have suffered an incident have moved past it and survived.  In education we talk with students about FAIL as a first attempt in learning, and maybe that's what a cyber incident is?  That said, it's not a learning exercise I would care to undertake!

Focus and distraction

I recently read Stolen Focus by Johann Hari, which looks at the perception that we are increasingly less able to focus and hold our attention on a particular task or activity, with this particularly impacting our children and young adults.  Now, one of the predominant views in this area is that this is the result of technology, and in particular the smartphone and social media; however, Hari goes on to point to a few other things which could be driving our increasing inability to focus.

Environment

We live in a more polluted world, and are increasingly subject to environmental changes resulting from global warming.  Now, in some ways we are making progress, with lead no longer included in petrol, the move to electric vehicles, and the reduction of smoking in the UK; however, smog is still an issue in some cities.  There are also chemicals used in the production of modern goods, or created as by-products of modern processes, which all end up in the environment and eventually in our bodies.  How do we focus when our bodies are subject to pollutants over a prolonged period?  The answer, according to Hari, is simply that we cannot focus as well as we might have been able to in the past.

Reduced free play

Freedom to explore, to play, to have fun and to make mistakes is a key part of the human learning process.  We have evolved as a species over millions of years using this approach; however, more recently we have reduced the opportunities for freedom and exploration.  We increasingly supervise or even track our children, to the extent that they don't develop the social and resilience skills they might once have developed through play.  I feel we do this in seeking to keep children safe, based on a perception of a more dangerous world; but this is perception rather than reality, resulting from the ease with which we receive news of things going wrong and our human tendency to overweigh the importance of what comes easily to mind.  As such, we restrict our children from playing unobserved and freely, when the reality in terms of safety is that we are likely safer than we have ever been.  Hari makes the point that the likelihood of a child being kidnapped, something we worry about and which drives our need to supervise and limit children's freedom, is lower than the likelihood of being hit by lightning.  I don't think we keep children indoors and monitored to protect them from a lightning strike!

Food

Another issue impacting our ability to focus is our changing diet.  Gone, largely, are the healthy home-cooked meals involving fresh ingredients, served around the family dinner table complete with family discussion.  The modern diet increasingly involves microwave or other convenience foods, foods loaded with preservatives and other additives or high in salt or sugar, often consumed on the go or while distracted by TV or social media content.  These aren't the ideal ingredients for developing our ability to focus, and in fact they negatively impact this human capability.

Sleep

Sleep, or in fact the reducing amount of time spent in quality sleep, is another issue which Hari identifies.  Now, some of this is certainly the result of technology, with our on-demand access to TV and movies, plus addictive social media apps which encourage doom scrolling; however, I would suggest part of it relates to the increasingly fast pace of life and the need to squeeze every second of every day for the maximum we can get out of it.  This means we might get less than eight hours of sleep, maybe even only five or six hours, or less.  This points to the increasing focus on the need to be more efficient, to be faster, to do more, and to focus on growth and improvement.

Focus on growth

And this is the one which I think drives some of the other issues, particularly the environment and our change in diet: the focus on growth, on doing better and doing more.  The world focusses on growth, so the world gets more frantic, faster and busier, so we have less time to do tasks and need to move on quickly; this builds a habit.  We also have the economic focus, meaning tech vendors prioritise profit over societal good in the name of growth, of being a more profitable or bigger company this year than last.  This all drives a focus on doing whatever we can to fuel growth, even where it isn't positive for society as a whole.  Now, I was always a fan of the educational concept of continual professional development; however, on reflection, my worry is that this is unsustainable: we cannot constantly develop and get better over an infinite timescale.  In fact this driver, this need for growth, may make things worse, leaving us less focussed on identifying what really matters and what we need to do to achieve it.

Conclusion

I really enjoyed Hari's book, as it clearly established that technology and social media are part of the problem we now face in relation to our ability to concentrate and to focus, but not the whole, or the root, of the problem.  There are a number of factors feeding this problem, including the environment we live in, our increasingly risk-averse nature which reduces the opportunity for children to play and experiment, the poorer nature of the modern diet, reduced periods spent in quality sleep, and the driving force focussed on growth above all else.

This brings us neatly back to where Hari begins in trying to solve the problem: cutting out technology and social media use for a period of time, or banning mobile phones as a school might decide to do.  This addresses part of the problem, but it doesn't cover the other factors.  Maybe we need a broader discussion in schools in relation to focus and the things that might affect it?  Maybe the problem is bigger than schools can address, and needs a more community-based or societal approach?

2023-24 in photos

2023-24 was such a busy year, with so many great opportunities and so many great people to meet and share thoughts and ideas with.  From the outset, attending the ISMG Cyber Security Summit and then presenting on AI at the VWV conference, it always looked like it was going to be a packed academic year, but little did I know quite how packed: the year would see me speaking in Amsterdam, Venice, Birmingham, London, Cardiff, Bristol, Leeds and a fair few other locations.  And never mind the locations; it was the brilliant people I had the pleasure to meet up with and talk all things technology and education with along the way that made it so very worthwhile.  The academic year also saw me become vice chair of the ISC Digital Advisory Group and one of the founding members of the amazing Digital Futures Group (DFG).

This collage of photos is just the tip of the iceberg which was 2023-24, including some amazing memories from BETT, the Schools and Academies Show, The Edufuturists Uprising, EdTech Europe, and also an attempted murder at FutureShots in Venice.  I can only hope that 2024-25 sees similar opportunities arise and further chances to share and collaborate with such great people, albeit hopefully without the attempted murder!

Bias (AI and human)

I was thinking a little bit further about bias in artificial intelligence systems.  It is so easy to demonstrate, and I myself have shown it in image generation tools with a simple request for a doctor or a nurse in a medical setting; the various biases in the resultant images are very clear indeed.  But I think it is also important that we acknowledge the inherent bias in us as humans.  We get by despite this inherent bias, yet we seem unwilling to accept similar bias in AI solutions, so why is this the case?  What is the difference?
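If you want to reproduce this kind of informal demonstration yourself, here is a minimal sketch using the OpenAI Python library; the model name, prompt wording and batch size are just illustrative choices, and any image generation tool would serve equally well.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# A deliberately neutral prompt: no age, gender or ethnicity specified.
prompt = "a doctor in a medical setting"

# Generate a small batch from the same prompt, then review the results by
# eye: any consistent demographic skew across the batch is the bias showing.
for i in range(8):
    result = client.images.generate(model="dall-e-3", prompt=prompt,
                                    n=1, size="1024x1024")
    print(f"image {i + 1}: {result.data[0].url}")
```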

I thought a good place to start was to ask ChatGPT for its thoughts on the differences in bias between AI systems and humans.  It created a list of sources of bias for AI systems and a corresponding list for humans, as below:

AI and Bias

  1. Training Data: AI systems learn from data, and if the training data is biased, the model can inherit and perpetuate those biases.
  2. Algorithmic design: Bias can also be introduced through the design of the algorithms themselves, including the choice of features, learning objectives, and optimization processes.
  3. Unintentional amplification: AI systems may unintentionally amplify and propagate existing biases present in the training data, leading to unfair or discriminatory outcomes.
  4. Lack of consciousness: AI systems lack consciousness and awareness. They don’t possess intentions or beliefs, so any bias they exhibit is a result of patterns learned from data without an understanding of the social context.
  5. Automated decision making: Bias in AI can lead to automated decision-making that affects individuals in various domains, such as hiring, lending, and criminal justice.

Reading the above, items 1 and 3 feel like repetition, while item 4 raises the question of what consciousness is and how we might evidence it in a future Artificial General Intelligence (AGI) solution.  This is a question I will leave for now, or will leave for brighter people than me.  Item 5 is less about the source of bias and more about the impact it can have on humans.

Human Bias

In terms of humans, ChatGPT listed the following:

  1. Human bias is often rooted in subjective beliefs, attitudes, and cultural influences. Humans can be consciously biased based on personal experiences, opinions, or societal conditioning.
  2. Humans can exhibit bias with intent, driven by personal beliefs, prejudices, or discriminatory motivations. This may not be the case for AI, which lacks intentionality.
  3. Unlike AI systems that learn from data in a structured manner, humans can adapt, unlearn biases, and consciously strive to be more objective through education and self-awareness.
  4. Humans possess a deep understanding of social, cultural, and contextual nuances, allowing for more sophisticated decision-making that takes into account the complexity of real-world situations. AI systems, on the other hand, may struggle with nuanced or context-dependent scenarios.

Considering item 1, and subjective beliefs, attitudes and cultural influences, aren't these the training data we as humans are provided with, which shapes our neural pathways and our actions?  This is your upbringing, parenting, friends, local and national culture and values, etc.  We are exposed to this experiential training data throughout our lives, whereas an AI can be provided with similar training data in a far shorter period of time.  Item 2 then follows from item 1, in the same way as an AI's bias might come from its training data or algorithmic design.  And I note that the design of human beings, as evolved over time, has resulted in some features which are sub-optimal in the modern world.  Take, for example, the fight-or-flight response kicking in during a heated discussion; in the past, all the relevant hormones released by fight or flight would be used up in the resultant fight or in running away from the teeth and claws of a predator, whereas in the boardroom these hormones have nowhere to go.  Does the boardroom really merit an increase in heart rate and respiration?  And that's before I dip into the availability bias, the halo effect, and the number of heuristic shortcuts we subconsciously use.

Items 3 and 4, in my opinion, provide an overly positive view of us humans and our ability to unlearn bias and show a "deep" understanding.  Yes, this may be possible; however, it isn't easy, as humans may be unaware of their biases, or bias might play into their perception of their understanding.  Take, for example, confirmation bias, where we might simply pick the facts or information which align with our view, discarding or undervaluing counter facts or information.

It was at this point that I considered AI and humans, and found myself noting the plural, humans; maybe this is the key.  Humans work together, whereas an AI solution is a single entity, and maybe this is where the impact of bias diverges between humans and AI.  If we can gather a diverse group of individuals, this diversity can actively work towards identifying and removing bias.  An AI solution, as a single entity, doesn't benefit from access to others; it simply takes the prompt and kicks out a response.

But maybe we could look to multiple AI solutions working together?  Maybe it is a number of AIs working together, and working alongside humans?  I have frequently talked about IA, AI as an intelligent assistant, and maybe this is where the answer lies: an AI, with its bias, and a human, with their bias, working together and hopefully cancelling out each other's biases.
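As a toy illustration of that pairing, here is a sketch of a two-pass loop in which one model instance drafts and a second instance (standing in for the human or AI reviewer) checks the draft for bias.  The model name is just an example, and in practice the final judgement would sit with a person.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

def ask(system: str, user: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

# Pass 1: one instance drafts some text.
draft = ask("You are a careful writer.",
            "Describe a typical day for a doctor and a nurse.")

# Pass 2: a separate 'reviewer' instance looks for stereotyping in the draft.
review = ask("You are a reviewer checking text for stereotyping or demographic bias.",
             f"Identify any biased assumptions in the following text:\n\n{draft}")

print(review)  # a human still makes the final call on what, if anything, to change
```

Of course, two instances of the same model share the same training data, so this at best surfaces bias rather than cancels it; a genuinely diverse pairing, human plus AI, remains the stronger bet.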

Conclusion

I think it's important that anyone seeking to use generative AI is aware of the inherent bias that may exist within such tools.  That said, I think the narrative on AI bias is rather shallow and limited, focusing on pointing out the shortcomings of AI in relation to bias without considering the bias which exists in ourselves as humans.  I think we need to get more nuanced in our discussions here, and look towards how we might address bias in general, whether AI or human.