Is Gen AI Dangerous?

I recently saw a webinar being advertised with “Is GenAI dangerous” as the title.  It is an attention-grabbing headline, however I don’t think the question is particularly fair.  Is a hammer dangerous?  In the hands of a criminal, I would say it is, and in the hands of an amateur DIYer it might also be dangerous, both to the person wielding it and to others through the things the amateur might build or install.  Are humans dangerous, or is air dangerous?  Again, with questions quite so broad the answer will almost always be “yes”, qualified with “in certain circumstances or in the hands of certain people”.  This got me wondering about the dangers of generative AI and some hopefully better questions we might ask in relation to generative AI use in schools.

Bias

The danger of bias in generative AI solutions is clearly documented, and I have evidenced it myself in simple demonstrations.  More recently, however, we have also seen the challenges which arise where companies seek to manage bias, and where this management results in equally unwanted outputs.  Maybe we need to accept bias in AI in much the same way that we accept some level of unconscious bias in human beings.  If this is the case then I think the questions we need to ask ourselves are:

  1. How do we build awareness of bias both in AI and in human decision-making and creation?
  2. How do we seek to address bias?  In generative AI solutions, I think the key here is simply prompt engineering: avoiding broad or vague prompts in favour of more specific and detailed ones.

Inaccuracy

I don’t like the term “hallucinations”, the commonly used term for when AI solutions return incorrect information, preferring to call it an error or inaccuracy.  We know that humans are prone to mistakes, so this is yet another similarity between humans and AI solutions.  Again, if we accept that there will also be some errors in AI-based outputs, we find ourselves asking what I feel are better questions, such as:

  1. How do we build awareness of possible errors in AI content?
  2. How do we build the necessary critical thinking and problem-solving skills to ensure students and teachers can question and check content being provided by AI solutions?

Plagiarism

The issue of students using AI-generated content and submitting it as their own is often discussed in education circles, however I note there are lots of benefits in students using AI solutions, particularly for students who experience language or learning barriers.  I also note a recent survey which suggested lots of students are using generative AI solutions anyway, independent of anything their school may or may not have said.  So again, if we accept that some use of AI will occur, and that for some this might represent dishonest practice while for many it will be a way to level the playing field, what questions could we ask:

  1. How do we build awareness in students and staff as to what is acceptable and what is not acceptable in using AI solutions?
  2. How do we explore or record how students have used AI in their work so we can assess their approach to problems and their thinking processes?

Over-reliance

There is also the concern that, due to the existence of generative AI solutions, we may start to use them too frequently and become over-reliant on them, weakening our ability to create or do tasks without the aid of generative AI.  For me, this is like the old calculator argument: we still need to be able to do basic maths even though calculators are available everywhere.  I can see the need for some basic fundamental learning, but with generative AI being so widely available shouldn’t we seek to maximise the benefits which it provides?  So again, what are the questions we may need to ask:

  1. How do we build awareness of the risk of over-reliance?
  2. How do we ensure we maximise the benefit of AI solutions while retaining the benefits of our own human thinking, human emotion, etc?   It’s about seeking to find a balance.

Conclusion

In considering better questions to ask, I think the first question is always one about building awareness, so maybe the “is GenAI dangerous” webinar may be useful if it seeks to build relevant awareness of the risks.  We can’t spot a problem if we are not aware of the potential for such a problem to exist.  The challenge though is the questions we ask post-awareness, the questions which try to drive us forward: how we might deal with bias where we identify it, how we might ensure people are critical and questioning such that they spot errors, how we evidence student thinking and processes in using AI, and how we maximise both human and AI benefits.

In considering generative AI I think there is some irony here, in that my view is that we need to ask better questions than “Is GenAI dangerous”.  In seeking to use generative AI and to realise its potential in schools and colleges, prompt engineering, which is basically asking the right questions, is key.  So maybe in seeking to assess the benefits and risks of GenAI we need to start by asking better questions.

OABMG Conference

I was lucky enough to be invited to speak at the Oxfordshire Academies Business Managers Group (OABMG) annual conference earlier in the week, where I was speaking on AI in education and the possible impact and implications for school business managers.  It was a lovely event and I really enjoyed Sarah Furness, the keynote speaker; however, sadly I had to leave following my session in order to catch a train, one of a number of trains needed to get me to and from the event.

Be brave

Sarah was both insightful and entertaining and, to be honest, I could likely write a whole blog post just on the stories she shared, so let me just summarise my key takeaways from her presentation.  Her key message, which resonated with me, was the need to be brave.  This aligns with the values of my school, and is also so very important where we have technology advancing at such a pace but with regulation lagging so far behind.  We have no choice but to be brave, especially given both students and staff are already experimenting with the use of AI.  We need to be brave in engaging, brave in experimenting, and brave in accepting where things don’t go quite as planned, while learning from these experiences.  The need for sharing, asking difficult questions and accepting challenge also aligned with my thinking, and again looking to AI in education, if we are to find our way with AI in schools I think this all rings very true indeed.  We need to be sharing our thoughts, and both challenging and accepting challenge from others, if we are to move forward.  Sarah’s talk was about leadership, drawing on her context as a military leader and pilot; maybe this will be key in the use of AI in schools, the need for effective, brave leaders who value and encourage diversity, sharing and challenge.

AI in education

Going into my presentation my key aim was to discuss AI in education and some possible uses for school business leaders.  I don’t have all of the answers and, to be honest, I don’t feel anyone has all the answers when it comes to AI and education, as AI is advancing at a rapid pace while education has changed little and is under both funding and workload pressures.  That said, as I shared in my presentation, “The smartest person in the room is the room”.  This David Weinberger quote is one of my favourites and one I often use, as it highlights the need to discuss and share; in doing so we hopefully engage others to think about the issue, in this case AI in schools, and collectively our thinking, our ideas and our experience are enhanced.

Now you can view my presentation slides here if you are interested.   

At the end of my presentation, a couple of questions were raised which I would like to just pick up on, namely school engagement in AI in education, policy and also regulation.  

School Engagement in AI

I would like to draw attention to the article in the Express which highlighted that 54% of the students they surveyed were using AI in relation to their homework.  The key thing here is that students are using AI independently of whether schools have considered or talked about AI.  And it isn’t just students; you will also likely have staff, both teaching and support staff, who are using AI.  The AI genie is out of the bottle and attempts to block it will inevitably be futile, so, in my opinion, it is key that we engage with the use of AI, that we talk with students and staff about AI, and that schools experiment and share.  But the fact AI is already here isn’t the only reason to use it in education.  We talk about the need to support individual students, differentiation, English as a second language and also SEND barriers to learning; all of these can be addressed to some extent through the use of AI tools.  Now I will note here that the use of AI tools may also increase some challenges, such as that of digital divides, which is why a key part of my presentation talked about the risks and challenges first: we need to use AI, but only from a position of awareness of the risks and challenges.

Policies

Linked to the above, I think it is very important that schools put in place an AI policy if they haven’t already done so.  This allows the school to set out its guardrails in relation to the use of AI in the school.  There is a brilliant template for this, created by Mark Anderson and Laura Knight, which can be found here.  Looking to the future, I suspect the AI policy might eventually be absorbed into the IT acceptable use and/or academic integrity policies; however for now, while AI use in schools is so new, I think having it as a standalone policy makes sense.

Regulation

There will need to be some form of regulation in relation to AI tools, including their use in education; however we have already seen that the technology is developing very fast while regulation lags far behind and is slow to adapt.  As such, I think we should hope for and support some form of regulation to protect people, including our staff and students, and their data, but I don’t believe we can wait for this to happen.  AI is already here and students and staff are likely using it.  We can’t stop this, so I think we need to run with it, to try and shape its use and hopefully, in doing so, shape the regulation which follows.  This will mean making risk versus benefit decisions, but seldom do we see anything beneficial without any risks.

Conclusion

The OABMG conference was enjoyable even though my visit was brief.  It was good to share some thoughts on AI in education and I hope those in attendance found the session useful.  My two key thoughts from the event are the need to be brave, remembering we learn most from our mistakes, and the need in this ever-busy and complex world to share, as collectively we are all better for it.  I think these are two things I will try to do more actively in future.

Digital Standards

I have been lucky enough to be able to see some of the draft versions of the Department for Education’s (DfE) digital standards and to provide feedback.  In some ways I am really keen on them, but in other ways there are aspects of them which I am not so happy about.  That said, I think we should accept them as they are, and recognise that they are a step forward.

For quite some years there has been a void in relation to guidance on technology for schools.  Some schools have navigated this void well, using networks such as the Association of Network Managers in Education (ANME) for help; however, other schools have not been so fortunate.  Technologies were bought without plans for replacement, while other technologies were bought which didn’t meet future needs, or which would have worked well except for basic infrastructure which wasn’t there.  Years ago in Abu Dhabi, I discussed the need for a strong foundation in relation to EdTech before we got to teacher and student confidence and the eventual embedding of technology in teaching and learning.  The DfE standards are not the single solution, but they at least provide some guidance and seek to fill this void.

The issue however is that schools and school contexts vary so much across England and across the wider UK that for any single set of standards to fit, it needs to be very broad, to the point where the resultant standards may become less useful, especially for those schools without the relevant experience, skills and focus in relation to technology.  In trying to be more useful, the standards are a bit more specific, which means they don’t fit all contexts or all viewpoints in relation to how technology in schools should look.

If we accept the DfE standards as being imperfect in their inability to cover every school context and eventuality, then we can make positive use of them.  As an imperfect instrument, we can take what we can from the standards while identifying where some things don’t fit.  One example is Cat 6A cabling; I get that we might want to put Cat 6A everywhere in a new building, but in a refit, where cabling runs make Cat 6A more difficult, why can’t Cat 5e be good enough?  Isn’t the biggest pinch point likely to be our internet bandwidth rather than having 10Gbps to desktops?  Maybe we can save money on cabling to spend more on bandwidth?  Maybe we might install some Cat 6A for Wireless Access Points (WAPs) but Cat 5e elsewhere?
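The cabling trade-off is easy to sanity-check with some back-of-envelope arithmetic.  The figures below, the size of the internet connection and the number of concurrent users, are my illustrative assumptions rather than anything from the standards themselves:

```python
# Back-of-envelope check: is desktop cabling or internet bandwidth the pinch point?
# All figures are assumptions for illustration only.
internet_bandwidth_mbps = 1_000   # assumed 1Gbps school internet connection
concurrent_users = 500            # assumed number of devices online at once

cat5e_link_mbps = 1_000           # Cat 5e comfortably carries 1Gbps to a desktop
cat6a_link_mbps = 10_000          # Cat 6A supports 10Gbps

per_user_share = internet_bandwidth_mbps / concurrent_users
print(f"Internet share per user: {per_user_share} Mbps")                  # 2.0 Mbps
print(f"Cat 5e headroom over that share: {cat5e_link_mbps / per_user_share:.0f}x")
print(f"Cat 6A headroom over that share: {cat6a_link_mbps / per_user_share:.0f}x")
```

Even on these generous assumptions, a Cat 5e link offers hundreds of times the per-user internet share, which is the sense in which bandwidth, rather than desktop cabling, is likely to be the bottleneck.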

I don’t think the standards can ever be perfect and black and white, but when is this world ever perfect or black and white, and more importantly when does this ever happen in the messy world of education?  Simply raising concerns about their imperfections or highlighting issues doesn’t move technology in schools forward.  So, if we accept that the standards are imperfect, we might just be able to use them to do what they are meant for, which is to help and provide some guidance.  We might then be able to move technology forward in more and more schools.

Some DigCit resources

Following on from my blog in relation to Internet Safety Day, I thought I would share some of the actual presentations I have used recently with students when discussing different parts of internet safety.

Now all the presentations are on the short side as they are designed to provoke thought and further follow-up discussion, with each presentation designed around a 5 to 10 minute assembly.

I hope the presentations are useful or at least provide some ideas.   I am also open to any thoughts or ideas for other topics or areas which should be included in future presentations.

This session revolves around a tweet from a parody HRH Prince William account which was picked up by some UK radio broadcasters as fact, where there was no evidence to support the figures quoted.  The session also looks at the possible impact of generative AI in relation to fake images or video.

This session is very much about asking the students if they feel comfortable with their technology use and then discussing ways that a balance might be achieved.  It is also important to discuss how “screen time” is an overly simplistic measure and that not all screen time is equal.

This session focuses on binary arguments and how two opposite viewpoints can actually both be true or both be false. Some discussion of why people might seek to exploit binary arguments, social media algorithms and echo chambers is also included.

This session focuses on some examples of social engineering and how human habits can be used against us by malicious individuals. The key message is the increasing “sophistication” of attacks and therefore the need to be more vigilant and careful.

This session looks at data breaches and how the stolen data is leaked online.  It may be worth getting the students to use HaveIBeenPwned, if possible, to see whether data on them has already been leaked online.  The key closing point is that as we do more online we need to be aware of the resulting increase in risk.
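For the HaveIBeenPwned activity, it may reassure students (and staff) to explain that the Pwned Passwords service uses a k-anonymity model: only the first five characters of the password’s SHA-1 hash are ever sent to the service, with the matching suffixes returned for checking locally.  A minimal sketch of the local half of that check, with no network call:

```python
import hashlib

def hibp_prefix_suffix(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 hash into the 5-character prefix sent to
    HaveIBeenPwned and the suffix which never leaves the local machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_prefix_suffix("password")
print(prefix)       # 5BAA6 -- only this fragment is sent to the service
print(len(suffix))  # 35 -- the rest of the 40-character hash stays local
```

The real check would then query the service’s range endpoint with that prefix and look for the suffix in the response, so the password itself, and even its full hash, is never transmitted.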

The key feature of this session is the predictability of human choices in relation to passwords.  You may wish to use the Michael McIntyre cyber video here, or simply ask students where the capital letter, number and symbol in their passwords might be.
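The predictability point can even be shown in code.  The pattern below is a hypothetical illustration of the classic shape, a capitalised word followed by a few digits and perhaps a symbol, that complexity rules tend to nudge people towards:

```python
import re

def follows_predictable_pattern(password: str) -> bool:
    """True if the password matches the classic 'Word + digits + symbol'
    shape most people produce when told to include a capital letter,
    a number and a symbol.  Illustrative only, not a real strength check."""
    return bool(re.fullmatch(r"[A-Z][a-z]+\d{1,4}[!?.*$#@]?", password))

print(follows_predictable_pattern("Password1!"))                   # True
print(follows_predictable_pattern("correct horse battery staple")) # False
```

Asking students to test their own password habits against a pattern like this usually makes the point far better than a lecture on entropy would.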

This includes reference to an OSINT tool which allows you to identify the date and time of a photo based on the position of shadows within it; this illustrates how even simple things might give away information about us.

It also contains a “pick a number” exercise to illustrate how we can be easily influenced.  As the presenter you would stress the trackers slide and “14” to see if you can then encourage students to select 14 later in the presentation.  If we can be that easily influenced, then what might social media and other individuals be able to do with much, much more data?

This session looks at public good vs. individual privacy and how these two issues may be at opposite ends of a continuum. The key is to show how we need to find a balance between these two extremes.

2024 + 2 months: A review

Ok, so it’s now the end of February; that’s 1/6 of the year gone already and I am left wondering where the time has gone.  Part of me thinks time is flying simply because we are all becoming busier as we do more, try to be more efficient and try to do things better, while part of me also thinks it’s an age thing (though I am not keen to admit to the second option).  I also need to acknowledge taking on quite a number of projects; however, I am finding keeping busy, and the diverse nature of the projects, to be fun and engaging, keeping me in a more positive place than I was back in December.  That said, I also need to ensure I say “No” where appropriate.  So with 2 months of 12 gone, I thought it might be useful to have a quick review against my pledges from the start of the year, so here goes…

Doom Scrolling

I think my doom scrolling on my phone has reduced, which is good, but I still feel I reach for my phone a little too much at the moment.  It’s the old issue of my phone and its notifications providing me with a dopamine fix, leading to checking my phone more often due to an increasing dependency on this fix.  It is now less social media I am looking at and more the likes of WhatsApp; I am not sure if that makes it better screen time or not, however the need to constantly check my phone feels like a bad thing.  That said, for the period from December to the end of February my average daily screen time dropped to just below 2hrs 40mins, which is below the 3hr target I was looking to hit, so this seems to be good progress.  I will however need to continue to review this, especially around my habits.

Fitness

My target was 500km for the year, so with 2 months gone I am happy enough with 95km completed.  Projecting this out, keeping up the current rate would get me above the 500km target by the end of the year.  I have also seen my pace improve and I am now generally sitting at around 6min/km, which is a pace I am happy with.  At this point I still haven’t engaged with any social running such as a parkrun, so this is definitely something I need to look towards; plus I haven’t looked at runs beyond the 5km range, so again this is something for me to look at.
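The projection is simple arithmetic, sketched here with the figures from this post:

```python
km_so_far = 95         # distance run in the first two months
months_elapsed = 2
annual_target_km = 500

projected_km = km_so_far / months_elapsed * 12
print(f"Projected annual distance: {projected_km:.0f} km")   # 570 km
print(f"Ahead of target: {projected_km >= annual_target_km}")  # True
```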

Exploring

So far I haven’t managed any real exploring, aside from a trip up to Sheffield to work with The National College; there wasn’t much exploring done during the couple of days up there due to a focus on filming a number of webinars.  So this is definitely something I need to address, with Easter likely to be the first opportunity, although I suspect Easter will be focused on family rather than exploring, so it may be that any exploring has to be put off until the summer at the earliest.

Happiness

This is always a difficult question to deal with.  At work our perception is likely that we are happier when on holiday, but if we quantify our happiness, the reality is we are often happier when busy, engaged and operating autonomously at work.  I think I am a little happier at the moment than I was in late December, and I suspect a lot of this relates to being busy and engaged in different and diverse projects and events.  But maybe this leads to a question about how I balance this out in terms of my downtime?  I am reading a bit more and spending time with the velociraptor (she’s actually a German Shepherd, but having eaten 3 dining room chairs among other things I think the velociraptor reference is more than fair), plus I am running regularly, and there are a few other things in their early stages which are making me feel a bit better, but I suspect I have a long way to go in finding the balance I want to achieve.  My current book, Lost Connections: Why you’re depressed and how to find hope, is an interesting read which is helping, although I note I am not reading it because I necessarily feel “depressed” but more because the topic, and managing our lives in this busy and technology-driven world, is something that interests me.  We will see how I feel about the book once I am further into it.

Achievements/Contributions

I think, as with 2023, things are already shaping up well with the upcoming ISC digital conference, my work on the ISBA’s Technology Survey, work with the Association of Network Managers in Education (ANME), an event looking at esports within schools, my work with the amazing Digital Futures Group ahead of EduTech Europe 2024, and a number of other things on the cards.  The key in all of these events is the brilliant people they allow me to work with, collaborate with and bounce ideas off, plus the amazing educators I get to network and share with.  Oh, and I can’t not mention the really enjoyable cyber security webinar I did only the other day with Mark Anderson and Abid Patel; it is always great sharing with these two amazing individuals, but I clearly need to continue to work on Mark in relation to moving over to Scotland’s favourite soft drink rather than coffee.  Abid is on the Irn-Bru Extra, which is at least part way there!

Conclusion

I think, two months into 2024, I am still trying to find my new normal, but I am happier with that than I was.  Maybe my new normal isn’t a fixed place but something more fluid, where I am more accepting of this fluidity?  It is still early days for 2024, but I am positive about what lies ahead, and that’s a good place to start from.

As I often find myself saying; onwards and upwards!   

Thinking about thinking (with AI)

Artificial intelligence (AI) is definitely the big talking point in educational circles at the moment.  You just need to look at the various conference programmes and you will almost always find at least one session touching on AI or generative AI.  Now a lot of the discussion is focused on the possible benefits or the risks associated with AI, and less so on the practical applications and the need to experiment.  It was in thinking about the practical side of things, looking at tools like ChatGPT, Diffit, Gemini and Bing Image Creator among others, that I got thinking about how AI might link to metacognition.

Learning about learning

The idea of learning about learning, about metacognition, has been around for quite some time.  The thinking is that if we educate students about how they learn and get them thinking about their learning preferences (eek, I almost said learning styles there!) then they can make informed decisions about their learning, and hopefully become better learners.  It seems to make sense.  But how does this link to AI and generative AI?

Learning with a learning assistant

I think the key issue here is how we see AI in terms of the learning experience.  Is it simply a tool to spark ideas?  Is it a tool to review content?  Is it a tool to surface information?  I would suggest it is all of these things and more, and in the case of generative AI it can operate as an assistant to teachers or to students.  It is definitely more than a bit of technology or simply a tool, as I suspect in its use it shapes our thinking and our processes, much as simple tools like the hammer shaped human thinking and processes in the past.  We also need to consider that the process of working with generative AI (GenAI) is often iterative, taking the form of a dialogue between the user and the GenAI solution.  The user submits an initial prompt, to which the GenAI responds.  The user then reviews the response against what they were hoping for and, if they are anything like me, realises they haven’t been specific enough, so they provide further directives to the AI, which in turn returns a new, hopefully better response, and so the dialogue continues until an output which is satisfactory to the user is reached.  Now some of this dialogue can possibly be sped up through the use of various prompt frameworks, such as the PREPARE framework shared by Dan Fitzpatrick; however even then it is still likely to be a dialogue, with Dan also providing a framework for the review and iterative part of this process, his EDIT framework.
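That iterative dialogue can be sketched as a simple loop.  Everything below is a hypothetical stand-in: `generate` is a placeholder for a call to whatever GenAI service is in use, and in reality the “is this good enough?” judgement is made by the human, not by code:

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to a GenAI service."""
    return f"Draft response to: {prompt}"

def looks_good_enough(response: str) -> bool:
    """Placeholder for the human review step; accepts immediately here."""
    return True

def refine_until_satisfied(initial_prompt: str, max_rounds: int = 5) -> str:
    """The prompt -> response -> review -> refined prompt dialogue."""
    prompt = initial_prompt
    response = ""
    for _ in range(max_rounds):
        response = generate(prompt)
        if looks_good_enough(response):
            break
        # Not specific enough: add a further directive and try again.
        prompt = prompt + " Be more specific and detailed."
    return response

print(refine_until_satisfied("Plan a Year 9 history lesson"))
```

The loop structure is the point here: prompt frameworks like PREPARE mainly improve the quality of `initial_prompt`, while the review step is where the human’s critical thinking does its work.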

Meta AI supported cognition?

If we are looking to prepare students to work with generative AI as their always-available assistant, I think we also need to start exploring with students how best to use these tools.  Part of this is about looking at their learning and how their learning processes might be different with AI.  I suppose it’s a bit like if all your learning was done with a partner, with another human being.  Looking at the nature of the interaction, being very much a dialogue, makes this comparison feel all the more apt.  You would need to consider their approach, their emotions, social interaction, etc.  Now an AI doesn’t have emotions or the social side of things, or at least not yet or as we currently know these to exist, but it does have its own approach, its own biases, its own strengths and its own weaknesses.  So if we are using, or encouraging students to use, AI in learning, I think we need to work with students to unpick the processes rather than simply focusing on the tools.  If I am looking for ideas and to be creative, how best do I use AI?  If I am looking to review and improve my work, how best do I use AI?  If I want to use AI for research, how best do I do this?  Is this where Meta AI supported cognition comes in?

Conclusion

In relation to technology use in education I have always said it isn’t about the technology but about what you are seeking to achieve.  With AI it might be using Gen AI to produce better coursework, or to give you a starting point or some new ideas.  But if we think beyond the short-term goals, isn’t it about being able to better use AI to suit our needs as they arise?  And as such, do we then need to spend time with students unpicking the how of their use of Gen AI, understanding the processes, what works and what doesn’t, in order to get better at working with our newly found AI assistant?

Might teaching about Meta AI supported cognition become a thing?

Do I really need a digital strategy?

I have recently been wondering whether there is a need for a digital strategy.  I had always considered it important that every school had a digital strategy; however, more recently I have been wondering, does the need for a digital strategy vary with a school’s technology maturity?  If a school has technology which is reasonably well embedded, is there still the same need for a digital strategy?

Getting started

At the early stages of technology use in a school I believe it is very important for a digital strategy to exist.  You are managing software and hardware requirements, underpinning infrastructure and maybe endpoint devices, as well as training and communication.  And it’s all new to you, as you are only getting started.  There are a lot of moving pieces and a lot of interrelated decisions, plus there is the whole issue of change management and bringing people with you.  A strategy is key in planning all of this, but also in showing people where they are going and charting a path there.  Without the strategy you may have different people pulling in different directions.

Mature technology usage

When, however, you get to mature technology usage and your systems and processes are more embedded, the challenges are different.  Significant change is much more difficult as you are no longer starting from a blank slate.  You are likely moving from a situation where technology is being used and is likely delivering benefits, albeit there may be issues which are leading you to consider change.  Consider moving from iPads to Windows laptops for staff, for example.  Your staff will be aware of the benefits of the iPads given they use them, so availability bias plays its part here: we know with certainty the benefits of our current setup, whereas the benefits of the planned new solution are not as clear or definite, therefore we over-weight our current setup.  In terms of the drawbacks of the iPads, we are aware of these too; however, given we are using iPads, we already have workarounds or simply sub-optimal processes, whereas for the new solution we are only predicting the possible drawbacks, so again we come down in favour of our current setup.  And the same issue arises if you are looking at changing an established MIS system or various other bits of technology.  You also have the issue that different technologies and associated processes might be tightly integrated, meaning any new solution, which would be a different solution, would need to be capable of being equally tightly integrated, where this is seldom guaranteed.

If not technology strategy, what?

Once you have relatively mature technology usage, I suspect that rather than significant change it is more likely to be iterative or evolutionary change.  The technology becomes more transparent rather than having a central focus.  There isn’t the same need for a three-year strategic roadmap, and the ongoing renewal of infrastructure should be a simple matter of operational process.  So, given we accept technology continues to evolve, if it isn’t digital strategy, what is it?  In a recent chat with Ian Yorston he planted the idea of Digital Culture with me, and I think that is exactly what we should be looking to develop: where technology is simply the way we do things around here, supporting the overall objectives and aims of the school.  And culture is something that changes more slowly, based on the stories and the narratives told around school, so we need to be paying more attention to this.  So it’s about having the opportunities to constantly review the narratives around technology use in school, to assess the impact and value, and to iterate and evolve.  In my school this is our IT Management Group, but also working parties such as our AI working party.  It is also feedback processes through SLT and through Heads of Department.  I don’t think we have it 100% nailed by any means, but then again, if things are constantly evolving, maybe this is to be expected.  One of the things I want to do more of is look at how we measure impact and value, and how we know things are working or not.

Conclusion

Maybe a technology strategy is very important to get started, but maybe once technology is embedded it’s all about developing the culture, and softer adjustments and changes around the edges.

And maybe it is easier to show change and impact in the beginning, which makes it easier to demonstrate impact or progress against the strategy.  Once technology is more embedded this isn’t so easy to measure or assess, which means we need to start looking deeper, and this is something I hope to look into over the coming months.

Connectedness and 11 years of blogging

I sit and write this in a hotel in Sheffield ahead of recording some webinars related to the DfE Digital Standards over the next two days.   Today isn't special for me due to what I am about to do, although I am very happy for the opportunity, but due to the fact that 11 years ago, sat on the bed in my villa in Al Ain in the UAE, I set up and published my first blog post.   And yes, there is quite a difference between a villa in the UAE and a hotel in Sheffield;  let's just say I am certainly not as warm as I used to be.

When I started my blog I was very much doing it for me.   It was an act of putting things down in writing which forced me to decompose my thinking, which often led to me challenging my own perspectives and views.    Part of the reason for starting my blog was that I accepted my memory was not as good or as photographic as I thought it was, and therefore writing things down and publishing them online made for a permanent record that I could compare over time.    I will admit, when I started, I never saw myself still posting 11 years later, and I don't think I foresaw finding the process quite as valuable as I now do.

And in writing for myself I have found that there are actually people out there reading my thoughts, even though at many points I thought no one read my musings; but my musings were for me, so this wasn't an issue.    At the recent BETT conference a number of individuals, some I had met in person but a number I hadn't, told me they read my posts.   I was connected to these people through sharing, but possibly more importantly my posts created an opportunity to connect with these people in person.  I was originally going to write "real life" there, but how is a connection made online any less "real" than a connection in person?  Although I would suggest in-person connection has greater value in the non-verbal side of communication, empathy and emotional connection, which are not as possible online.    When I was struggling from a personal point of view I found people reached out and offered support, some being people I knew in person and others being people I knew only online.    I found myself helped by a network built from sharing my thoughts, although again those in-person relationships were that bit stronger than the online-only ones.

When I consider online connectedness I have always considered it to be shallow and in some cases simply an illusion;  I can be online chatting via social media with lots of people but still feel lonely, something I have posted about in the past.   But equally, online connections can spring into real-life connections that otherwise might never have occurred.   I know after BETT I came away happy and energised, against a backdrop of some personal challenges.   Some of this was due to connecting once more with in-person friends, some was due to new in-person connections, but some was also due to online connections suddenly becoming in-person friends and colleagues.    So maybe online connections aren't shallow;  maybe that is too simplistic a categorisation.  Maybe if they remain online, and that is our only and principal connection, they remain shallow, but if the online connection is simply the seed from which an in-person connection grows, then maybe we are all the better for it.   Maybe there is a balance to be sought between in-person and online connections, seeking to maximise the benefits of both.

So, 11 years of blogging;  where has the time gone?   Funnily enough, I can answer that simply by reviewing my posts over the last 11 years.   So, to the future: I will keep blogging, for now at least, and see where things go from there.    For those reading this, I give thanks for giving my musings your valuable time, and if I haven't met you yet, then I look forward to hopefully meeting you in person at some point in the future;  EduTech Europe 2024, or BETT 2025 maybe?

This post was written on Monday 12th Feb, 2024

Safer Internet Day 2024

I thought I would put a post together to coincide with Safer Internet Day, the 6th Feb 2024.    Safer Internet Day represents an opportunity to stop and recognise the importance of online safety; however, it is also important to recognise that our consideration of digital risks shouldn't be confined to a single day but should be constant.

I will be honest and say that I generally feel we do not do enough in schools in relation to digital citizenship, the broader concept which encompasses online safety.   Yes, schools have Safer Internet Day, they have content in their PSHE education programmes and in their KS1, 2 and 3 computing programmes, with more for those students choosing to study computing or IT subjects at A-Level or in vocational qualifications, but it is limited content, and this is against a backdrop of increasing use of digital tools and increasing sharing of data.   We believe basic maths and basic literacy are requirements for all; I believe basic digital citizenship should also be a requirement and a subject in itself.

So, if it was a subject what would the topics be?

I already try to deliver sessions for students throughout the year on a number of digital citizenship topics, including:

Fake News

I think this is a very important subject given the ease with which fake images, and even fake audio and video, can now be created through the use of generative AI.    Recent cases involving fake Taylor Swift videos and fake Joe Biden audio are a case in point. How might we tell the fake from the real?  And what about those individuals who say or do something inappropriate, only to claim they didn't and that the footage or audio is fake?    How do we establish truth in a world where we can no longer believe what we see or hear?

Big Data

We are constantly giving away data, and more than we realise.  And it isn't just about the data we give away, but also the data which might be inferred from it.    Consider where you live, the car you drive and where you shop, for example;   how might this information help to infer something about your wealth or earnings?    What does your weekly shop say about you and your family? And remember, an inference doesn't always need to be right; it just needs to be right more often than it is wrong to have value.    Then there are the organisations willing to pay for your data, or to sell your data on.   Might we get to a point where, through data, some companies know more about us than we know about ourselves, and at that point, what is the potential for us to be influenced or even controlled?
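As a toy illustration of this inference idea (every attribute, weight and threshold below is invented purely for demonstration, and bears no relation to any real data broker's model), a few innocuous facts can be combined into a crude inferred profile:

```python
# Toy sketch: inferring a crude "affluence band" from innocuous attributes.
# All categories, weights and thresholds are invented for illustration only.

POSTCODE_WEIGHTS = {"affluent_area": 3, "average_area": 2, "modest_area": 1}
CAR_WEIGHTS = {"luxury": 3, "mid_range": 2, "budget": 1}
SHOP_WEIGHTS = {"premium": 3, "mainstream": 2, "discount": 1}

def affluence_band(postcode_band: str, car_band: str, shop_band: str) -> str:
    """Combine three innocuous attributes into an inferred income band.

    The point is not accuracy: such a model only needs to be right more
    often than it is wrong to have commercial value.
    """
    score = (POSTCODE_WEIGHTS[postcode_band]
             + CAR_WEIGHTS[car_band]
             + SHOP_WEIGHTS[shop_band])
    if score >= 8:
        return "likely high income"
    if score >= 5:
        return "likely middle income"
    return "likely lower income"

print(affluence_band("affluent_area", "mid_range", "premium"))
# → likely high income
```

Real profiling systems combine hundreds of such signals, but the principle is the same: none of the inputs feels sensitive on its own.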

Binary arguments and echo chambers

The medium used to communicate has an impact on the message, and this is all the more apparent on social media, where things go viral, in agreement or in disagreement, so very quickly.  The medium shapes our views through its algorithms, connecting stories with those likely to engage, either in agreement or in disagreement, thereby deepening divides and encouraging many discussions to descend into binary arguments.    As you engage with social media, it will try to feed you the information you want to hear, which tends to reinforce the views you already have rather than offering alternative viewpoints.   So, in consuming information and news from social media, we need to be conscious of how it works, and therefore how it might shape the news it presents and, eventually, our viewpoints.
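The reinforcing loop described above can be sketched in a few lines. This is an entirely invented toy, not any real platform's algorithm: posts matching topics a user has already engaged with are simply ranked first, so the feed drifts towards what the user already agrees with.

```python
# Toy sketch of engagement-driven feed ranking (invented for illustration,
# not any real platform's algorithm): posts overlapping with previously
# engaged topics rank higher, reinforcing existing views.

def rank_feed(posts, engaged_topics):
    """Sort posts so those matching past engagement come first."""
    def score(post):
        # Count how many of the post's topics the user has engaged with.
        return sum(1 for topic in post["topics"] if topic in engaged_topics)
    return sorted(posts, key=score, reverse=True)

posts = [
    {"title": "Opposing view on policy X", "topics": {"policy_x", "debate"}},
    {"title": "More of what you liked", "topics": {"policy_y", "sport"}},
    {"title": "Sport roundup", "topics": {"sport"}},
]

feed = rank_feed(posts, engaged_topics={"policy_y", "sport"})
print([post["title"] for post in feed])
# The post overlapping most with past engagement appears first;
# the opposing view sinks to the bottom.
```

Each interaction adds to `engaged_topics`, so over time the scoring only gets more self-reinforcing, which is the echo chamber in miniature.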

Balance – Public Good and Personal Privacy

Balance as a concept is something I believe in strongly.   For every advantage there is a corresponding risk or drawback, and in so many decisions we operate on a continuum rather than between polar opposites.  Take public good vs. personal privacy, for example.    We want to be safe, so we expect the police and intelligence services to monitor in search of terrorists and other threats.   Yet we also want our individual privacy, and so to be free from monitoring.    Can we have both?     The answer is no; we need to find some balance between a "reasonable" level of surveillance and monitoring and a "reasonable" level of individual privacy.    Taking the discussion of encryption, the challenge is that weak encryption is weak for all, so monitoring anyone is difficult without putting everyone at risk.   There are solutions here, such as monitoring at the device level, where encrypted communications need to be decrypted for display, however this is difficult as it requires access to the device.    We basically have an imperfect situation, and sometimes in this complex world we need to live with imperfect.

Cyber Security

As we use more digital tools, share more data and generally use technology more, we need to be increasingly conscious of cyber risks and how to remain secure.   This applies to the accounts we use, the data we share and the use of MFA, but also to the devices we own, including keeping laptops and phones updated, and to the increasing number of IoT devices we have, such as smart plugs and voice assistants.   We need to give some consideration to cyber security in every purchase and in each system or service we seek to use.  It may even be necessary to accept that every piece of technology used represents increased risk, so the question then becomes: is the gain from using the service sufficient to outweigh the risk?

Addiction and Being Human

How many times have you seen a major event, such as a New Year's fireworks display, with people all holding up their phones to film it, all experiencing the event through their smartphone screens?   Or have you been on a train or in a restaurant and seen countless people staring at their phones?   Is this the way we want to live, and does this change our experience of life?  Yes, it might give us a nice video of the event which we can go back to in future, but how often do we actually do this?  And if we didn't record the event, would we spend more time interacting with those around us, resulting in something more memorable?    What does being human look like in this technology-enabled, technology-curated and technology-filtered world?

Conclusion

The above are just some of the areas I discuss with students, and I note I don't have all the answers: I spend a little too much time on digital devices, I share more data than I likely need to, etc.  What I do hope to do, however, is build awareness and start a discussion, as this, I believe, is what matters.    We need to be thinking about the challenges and risks and ensure our students, our young people, are aware of them and are making educated decisions.

I hope everyone has an enjoyable Safer Internet Day;  stay safe online!

Digital Divides?

The BETT Show got me once again thinking about the digital divides, and I am very careful to use the plural here as I believe there are many digital divides currently acting on our students.   Now, I have been challenged in the past over the existence of a digital divide (note the singular, which I think is important), with evidence of widespread access to devices being one of the key points of challenge.  One piece of research, for example, suggested as many as 98% of UK 16-17 year olds owned a smartphone.    Based on this data almost all children have access to both a device and the internet, suggesting ubiquitous access and no digital divide.  However, although this may tick off the divides related to access to a device and access to the internet, what about the other divides?

It's not the device that matters!

When looking at school technology strategy, we have long identified that a strategy which simply puts a particular device in staff and student hands doesn't work.    It's not about having the device, although this is an important foundation; it's about considering what it will be used for, how its use will be included in teaching and learning, what support is available (both technical support and subject-related technology-use support), the overall culture of the school in relation to technology use, the confidence of teachers in using technology, etc.    In terms of students and the digital divide, there are similar issues.

Have it, but don’t use it here

One obvious divide for students relates to school technology strategy.   In some schools technology has a key part to play, so 1:1 devices might be available, or class sets, or BYOD might be supported; generally, technology use is encouraged.    Other schools may have far more limited technology and may ban the use of mobile devices.   All of a sudden our ubiquitous access to devices and the internet isn't nearly so ubiquitous if students aren't allowed to use their devices and no devices are provided while in school.   Those students who are encouraged to use technology in school, across their lessons, benefit from lots of learning opportunities in relation to technology, while for those without, these opportunities don't exist.

Supportive networks

For some students, use in school provides teaching and support in relation to technology through advice from teachers, from support staff such as IT staff, and from peers who, like them, are using technology within the school.    This support helps, and ongoing use also helps, as it allows students to build confidence in the use of technology, which then supports experimentation with new technology or new functionality within existing platforms.    But this support isn't uniformly available, with some students receiving far more than others.   And the issue of support extends beyond the walls of the school to home, where some students will benefit from engaged parents willing to discuss technology use and its benefits and risks, while other students may be left to their own devices, which may devolve into doom-scrolling social media apps.

Digital Citizenship

And in some schools there will be robust discussion of social media apps and the broader issue of digital citizenship. Students will therefore be more aware of the risks and challenges associated with social media, including issues around big data, influence, bias and echo chambers, etc. This will be in addition to the meagre amount of discussion which may be supported in PSHE lessons or within the computing curriculum, which might be all some students receive. Plus, where there is robust discussion, there is a greater chance for students to ask questions or seek support.

Maybe you need more than a phone

We also need to recognise that the smartphone isn't always the best tool; sometimes we need a bigger screen, a keyboard and a mouse.   So, although ubiquitous access to a smartphone is a good start, it isn't the solution.    A study looking at device access for home-schooled students in the UK found that slightly more than half of students had to share a device with others in the household, for example.    Again, we have some students who benefit from their own device, which they can personalise, use and build confidence with, and other students who do not have this benefit.

And then there’s the new tech; GenAI

So, from the above I hope I have highlighted some of the divides impacting our students, and this is now further compounded by new technology such as GenAI.   In some schools this is being discussed and students are being encouraged to learn about and use GenAI solutions, but in other schools GenAI is out of bounds and banned, or the students simply don't have access to the basic technology to properly explore it.    Those students learning about AI are likely to be more confident and familiar with the GenAI solutions they encounter as they exit school and either continue their studies or enter the world of work, whereas those who have been deprived of the opportunity will face a steeper learning curve.

Conclusion

For me there are definite digital divides, and I feel current developments around GenAI are only going to widen them.    Access to a device and the internet, the ubiquitous smartphone, is a good start, but it is akin to giving devices to teachers with no professional development or support.   They might get some use out of the devices, but never what is truly possible.  And looking at students and the smartphone, I suspect what they get out of their devices will be a lot of YouTube and TikTok content rather than something more meaningful.

We very much need to seek to address the digital divides and for me the place we need to start is with the basic building blocks in terms of infrastructure and devices in schools.   Only once this is reasonably consistent across similar types of schools can we then move on to tackle other digital divides.

References

UK: children owning mobile phones by age 2023, Statista

Over half of home-schooled children in the UK have only shared access to computers, Institute for Social and Economic Research (ISER), University of Essex