AI and AI and AI

Is AI a danger to education? This is a question I have recently explored, trying to present a balanced viewpoint. The question has an issue, however, in that it asks about AI as if AI were a single, simple item such as a hammer or a screwdriver. The term AI covers a broad range of solutions, and as soon as you look at that breadth the question becomes difficult to answer, and in need of more explanation and context. In effect, the question is akin to asking whether vehicles are bad for the environment without defining vehicles; is a bicycle, for example, bad for the environment?

[Narrow] AI

Although some may associate recent discussions of AI with ChatGPT and Bard, AI solutions have been around for a while, with most of us using some of them regularly. As I write this my word processor highlights spelling and grammar errors, as well as making suggestions for corrections. The other day when using Amazon, I browsed through the list of “recommended for you” items which the platform had identified for me based on my browsing and previous purchases. This morning I used Google to search for some content, and used Google Maps to identify the likely travel time for an event I am attending in the week ahead. Also, when I sat down at my computer this morning, I used biometrics to sign in, and used functionality in MS Teams to blur my background during a couple of calls. These are all examples of AI. Are we worried about these uses? No, not really, as we have been using them for a while now and they are part of normal life. I do note, however, that as with most things there are some risks and drawbacks, but I will leave those for a possible future post.

The examples I give above are all very narrow-focus AI solutions. The AI has been designed for a very specific purpose within a very narrow domain, such as correcting spelling and grammar, identifying probable travel time, or identifying the subject on a Teams call and then blurring everything which isn't the subject. The benefits are therefore narrow to the specific purpose of the AI, as are the drawbacks and risks. But it is still AI.
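As a toy illustration of just how narrow such a solution can be, consider spelling correction. The sketch below is a hypothetical, minimal approach using Python's standard library to match a misspelt word against a tiny word list; real word processors combine far larger dictionaries with statistical models of language in context, but the narrowness is the same: this code can suggest spellings and do nothing else.

```python
from difflib import get_close_matches

# A tiny word list; a real spell-checker would use a full dictionary
# plus a model of which words are likely in the surrounding context.
WORD_LIST = ["spelling", "grammar", "suggestion", "correction", "sentence"]

def suggest(word: str, words: list[str] = WORD_LIST) -> list[str]:
    """Return up to three dictionary words closest to the given word."""
    return get_close_matches(word.lower(), words, n=3, cutoff=0.6)

print(suggest("speling"))  # a misspelling matched back to "spelling"
```

However good it is at this one task, asking such a tool to plan travel time or blur a video background is meaningless: the benefit, like the design, stops at the domain boundary.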

[Generative] AI

Large language model development equally isn't new. We might consider the ELIZA chatbot, dating back to 1966, as the earliest example, or if not, Watson, dating to 2011. Either way, large language models have been around in one form or another for some time; however ChatGPT, in my view, was a major step forward, both in its capabilities and in being freely available for use. The key difference between narrow AI and generative AI is that generative AI can be used for more general purposes. You could use ChatGPT to produce a summary of a piece of text, to translate a piece of text, to create some webpage HTML, to generate a marketing campaign, and for many other purposes across different domains, with the only common factor being that it produces text output from text-based prompts. DALL-E and Midjourney do the same, taking text prompts but producing images, with similar solutions available for audio, video, programming code and much more.

Generative AI as it exists now, however, doesn't understand the outputs it produces. It doesn't understand the context of what it produces and, when it doesn't know the answer, it may simply make something up or present incorrect information. It has its drawbacks, and it is still relatively narrow in being limited to taking text-based prompts and responding based on the data it has been trained on. It may be considered more “intelligent” than the narrow-focus AI solutions mentioned above, but it is way short of human-level intelligence, although it will outperform human intelligence in some areas. It is more akin to dog-like intelligence in its limited ability to perform simple repeated actions on request: taking a prompt, wading through the materials it has been trained on, and providing an output, be this text, an image, a video, code, etc.

A [General] I

So far we have looked at AI as it exists now, in narrow-focus AI and generative AI; however, in the future we will likely have AI solutions which are closer to human intelligence and can be used more generally across domains and purposes. This conjures up images of Commander Data from Star Trek, R2-D2 from Star Wars, HAL from 2001 and the Terminator. In each case the AI solutions are portrayed as able to “think” to some extent, making their own decisions and controlling their own actions. The imagery alone highlights the perceived challenges in relation to Artificial General Intelligence (AGI) and the tendency to view it as good or potentially evil. How far into the future we will need to look for AGI is unclear, with some thinking the accelerating pace of AI means it is sooner than we would like, while others believe it is further off. My sense is that AGI is still some time away: we don't truly understand how our own human intelligence works, and therefore, if we assume AI solutions are largely modelled on us as humans, it is unlikely we can create an intelligence to match our own general intelligence. Others posit that as we create more complex AI solutions, these solutions will help in improving AI, which would then allow it to surpass human capabilities and even create super-intelligent AI solutions. Cue the Terminator and Skynet. Again, I suspect that when we get to the generation of AGI things will not be as simple as they seem, with not all AGIs being equal. I suspect the “general” may see some AGIs designed to operate generally within a given domain, such as health and medicine AGIs, or education AGIs, etc.

Conclusion

Artificial intelligence covers a wide range of solutions, with my broad discussion of narrow AI, generative AI and AGI touching on only three broad categories where others exist. It is therefore difficult to discuss AI in its totality, certainly not with much certainty. Maybe we need to be a little more careful in our discussions in defining the types of AI we are referring to, and this goes for my own writing as well, where I have equally been discussing AI in its most general form.

Despite this, my viewpoint remains the same: AI solutions are here to stay, and as discussed earlier have actually been around for quite a while. We need to accept this and seek to make the best of the situation, considering carefully how and when to use AI, including generative AI, as well as considering the risks and drawbacks. As to AGI, and the eventual takeover of the world by our AI overlords, I suspect human intelligence will doom the world before this happens. I also suspect AI development for the foreseeable future will see AI solutions remain narrow and short of the near-human intelligence of AGI. As such we definitely need to consider the implications, risks and dangers of using such AI solutions, but we also need to consider the positive potential.

AI: A threat to the education status quo?

My original blog post on AI was meant to be a single post; however, the more I scribbled thoughts down, the more I realised there was to consider. And so this is the fourth of my series of posts on AI. Having looked at whether AI is a threat to education in post one, at some benefits of AI in post two, and at some of the risks and challenges around AI in post three, this post will continue to explore some of the ways in which AI might be considered a threat to the formal education system as it currently exists across the world.

What are we assessing?

In the last post I started considering how AI challenges the current education system, looking at the fears regarding the use of AI-based solutions, like ChatGPT, by students to “cheat”. This concept of cheating is based on the current education system, where students submit work to teachers, where the work is their own and is used by the teachers to assess and confirm understanding. So the use of AI to create work which the student presents as their own seems like cheating and dishonesty. But what if the student only uses the AI as a starting point, modifying and refining the content before submission; is this OK? What degree of refining is enough for the work to be considered as belonging to the student, and what degree is not enough and therefore represents cheating? When is AI a tool, fairly used by a student in proving their understanding and learning?

I think it is at this point we need to ask why we are asking students to complete coursework. For me it is a way to check their understanding and learning of taught content. It is one method but not the only one, although it is the method education has generally accepted as the current proxy for student understanding, whether it be GCSE coursework, A-Level coursework or a degree dissertation. The uncomfortable truth is that this easy and scalable method of assessment isn't as appropriate in an age of AI. I will admit, however, that I am not sure what the alternative is, where such an alternative needs to be fair and also scalable to students the world over. When thinking of its scalability I always think: what if life was found on Mars and we had to scale our GCSE coursework and exams to encompass these new lifeforms? It would simply be a case of translating the requirements, sticking them on a rocket and sending them to Mars. As I said, the current setup is very scalable.

And then there is the question: if students can use the tools available to them, including AI, to reach an acceptable assessable outcome, is this not good enough? If the assessments we create make it easy for a student to achieve without any understanding of the topic or domain being assessed, simply through the use of AI, then maybe we need to rethink the assessments we are setting in the first place.

Social Contact

Social contact is another area where there are various concerns around AI. It may be that in using AI for our studies, our work and even, through virtual friends, for companionship, we may see ourselves interacting with human beings less and less, where social contact is a key part of what it means to be human. For education, if students find themselves learning through personalised AI, learning in their own time, what is the point of school? And if there is no school, with students learning where and when they like, where will students learn social skills and the skills needed to live and interact with other humans? Will we be drawn to our screens and our devices? Looking around at the people on the train I sit on as I write this, I don't feel we are that far from this scenario already. So, what is the solution? For me, in education we need to ensure a balance between technology and humanity. If students are to do more learning via screens and personalised AI teachers, and where they may converse with a virtual AI friend, we also need to find opportunities for social interactions, for play, for fun, but also for arguments and debates: simply more opportunities for socialisation. And maybe this is the future for schools and colleges, as the places for socialisation and developing social skills.

Conclusion

AI is here now and here to stay, and as a result of it we need to ask fundamental questions about education as it currently stands.   What are we trying to achieve?   Is the factory model of batches of students taught the same programme still appropriate?     How do we assess learning in a world of AI and actually what should we be assessing?

AI will keep progressing, and if we don't ask questions of our current educational system ourselves, AI will be the threat the Times article suggested it will be, as AI will force the questions upon us. And if education has changed little in over 100 years, I can only imagine how disruptive the sudden forced changes may be. But if we are proactive, it may be that AI is also an opportunity: an opportunity to challenge and reassess the current model of education and to find something more suited to the years ahead, years which will invariably involve more and more AI solutions.

Dangers of AI in education

I am now onto the third post in my series following the Times article, “AI is clear and present danger to education”. In post one I provided some general thoughts (see here), while in post two I focused on some of the potential positives associated with AI (see here); now I would like to give some thought to the potential negatives. I may not cover all the issues identified in the article, but I hope to address the key issues as I see them.

The need for guardrails around AI

One of the challenges with technology innovation is the speed with which it progresses. This speed, driven by the wish of companies to innovate, is so quick that often the potential implications aren't fully explored and considered. Did we know about the potential for social media to be used to promote fake news or influence political viewpoints, for example? From a technology company's point of view the resultant consequences may be seen as collateral damage in the bid to innovate and progress, whereas others may see this more as a case of companies seeking profit at any cost. One look at the current situation with social media shows how we can end up with negative consequences which we may wish we could reverse. But sadly, once the genie is out of the bottle it is difficult or near impossible to put back, and it seems clear from social media that companies' ability and will to police their own actions is limited. We do, however, need to stop and remember the positives of social media, such as the ability to share information and news at a local level in real time, connectedness to friends and family irrespective of geographic limitations, leisure and entertainment value, and a number of other benefits.

So, with a negative focus, the concern here in relation to the need for AI “guardrails” sounds reasonably well founded; however, who will provide these guardrails, and if it is government, for example, won't this simply result in tech companies moving to those countries with fewer guardrails in place? Companies are unlikely to want to slow down by adhering to government guardrails where this may result in them ceding advantage to their competitors. And in a connected world it is all the more difficult to apply local restrictions, especially as it is often so easy for end users to simply bypass them. Also, if it is government, are governments necessarily up to date, skilled and impartial enough to make the right decisions? There is also the issue of the speed with which legislation and “guardrails” can be created, as the related political processes are slow, especially when compared with the advancement of technology, so by the time any laws are near to being passed the issues they seek to address may already have evolved into something new. To be honest, the discussion of guardrails goes beyond education and is applicable to all sectors which AI will impact upon, likely most if not all sectors of business, public services, charities, etc.

Cheating

There has been much discussion of how students might make use of AI solutions to cheat, with risks to the validity of coursework being particularly notable. There is clearly a threat here if we continue to rely on students submitting coursework which they have developed on their own over a period of time. How do we know it is truly the student's own work? The only answer I can see is teacher professional judgement and the questioning of their students, but this approach isn't scalable. How can we ensure that teachers across different schools and countries question students in the same way, and make the same efforts to confirm the origin of student work? The moderation and standardisation processes used by exam boards to check that teacher marking is consistent across schools won't work here. We will also need to wrestle with the question of what it means for submitted work to be the student's “own” and “original” work. Every year students submit assessments, more and more gets written online, and now AI adds to the mix; with this growing wealth of text, images, etc., the risk of copying, both purposeful and accidental, continues to increase. The recent court cases involving Ed Sheeran are, for me, an indication of this. When writing and creating were limited to the few, plagiarism was easy to deal with, but in a world where creativity is “democratised”, as Dan Fitzpatrick has suggested will occur through the use of AI, things are not so simple.
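To illustrate why automated checks only go so far, here is a minimal, hypothetical sketch of the kind of surface-level similarity measure (Jaccard overlap of word sets) that naive plagiarism checks build on. It flags shared wording between two texts, but a high score proves nothing about authorship, and AI-generated text need not resemble any existing source at all:

```python
def jaccard_similarity(text_a: str, text_b: str) -> float:
    """Crude similarity: shared distinct words / total distinct words."""
    words_a = set(text_a.lower().split())
    words_b = set(text_b.lower().split())
    if not (words_a or words_b):
        return 0.0
    return len(words_a & words_b) / len(words_a | words_b)

reference = "the industrial revolution transformed education"
submission = "education was transformed by the industrial revolution"

# High word overlap, yet this tells us nothing about who wrote the text.
print(f"similarity: {jaccard_similarity(reference, submission):.2f}")
```

With the two example texts the score is 5/7, roughly 0.71, despite the sentences being written independently; meanwhile a genuinely copied idea, reworded by an AI, could score near zero. This is the gap teacher judgement currently fills, and why it is so hard to automate at scale.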

Conclusion

The motives of tech companies for creating AI solutions may not always be in the best interests of users. They are, after all, seeking to make money, and in the iterate-and-improve model there will be unintended consequences. Yet the involvement of government to moderate and manage this innovation isn't without its own consequences, including where some governments' own motives may be questionable.

In looking at education, the scalable coursework assessment model has worked for a long period of time; however, AI now casts it into question. But was its adoption about being the right way to measure student learning and understanding, or simply the easiest method to do this reliably at scale?

Maybe the key reason for AI being a threat is the fact that, if we accept it is unavoidable, it requires us to question and critique the approaches we have relied on for years, for decades and even for centuries.

The benefits of AI to education

This is the second of a series of posts prompted by the Times article titled “AI is clear and present danger to education”. The first part of the series can be read here and focussed on some initial thoughts in relation to the headline. In this post I would like to focus on some of the possible benefits that AI might bring to education the world over, before getting to the risks mentioned in the Times article in subsequent posts.

Some benefits of AI

One benefit is the potential for AI to help with the teacher workload challenge by automating and assisting in some of the more routine tasks. In my first post I identified the workload issue, or as some would categorise it, crisis, as a challenge and threat to education in much the same way that AI is being categorised as a threat. Having spent over 20 years working in education I have seen many things added to a teacher's role and responsibilities but scarily few tasks or requirements ever removed. Now, AI won't remove things, but it should help make them easier. Creating lesson plans, course outlines and lesson resources, writing parental reports, dealing with emails and many other tasks can now be completed quicker through the use of AI. I am being careful here in saying that such tasks will be done “quicker” rather than done by AI, as my view is that AI is a tool, and it is the professionalism of the teacher which will check and refine content produced by AI before its use. Given the risks of bias within AI and of incorrect information being presented, the need for human checking will remain for some time; however, a human with the aid of AI will be able to get things done quicker than without, either allowing more to be done or allowing more focus to be put on what matters rather than the more mundane tasks an AI can help with. And in terms of “what matters”, I would see this freeing up more time for teachers to focus on their students and their learning.

The potential for AI to engage more students the world over in high-quality learning is also worthy of note. I have long looked at the data teachers are requested to gather, which is often gathered once and used once, and been concerned by the wealth of data and how little is actually done with it. Most of the useful data in relation to learning in classrooms is never actually recorded; it is the day-to-day, minute-to-minute interactions of the teacher which shape how the teacher approaches their teaching and the learning. But an online learning platform with AI can gather this data and more. It can look at the delay between a question and an answer for each student. It can look at mouse movements, the time taken for correct answers versus wrong answers, and the time of day, and all of this for every student using the platform. Combined with appropriate AI it can direct students to appropriate content to meet their needs, providing 1:1 advice and support, much in the way a teacher can. AI can provide personalised 1:1 teaching and learning at a scale not currently possible. Through AI-based platforms, students the world over can access personalised learning even where the education system in their home country may be lacking, although I note this relies on access to technology and those required to support it. It may be that AI will draw focus to the digital divide, and possibly widen it for those without access or without an understanding of how AI might be used. It may also be that AI will create educated individuals in countries and areas where conventional schooling has been lacking. As I think about this, Sugata Mitra's “hole in the wall” experiment springs immediately to mind, albeit now with AI providing a personalised tutor to all those engaging with the technology. I suspect with AI, Sugata's experiment would have seen even more success in terms of student learning.
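The adaptation described above can be sketched very simply. The following is a hypothetical, minimal rule-based version, with hand-written thresholds and made-up signal names (correctness and response delay per question) standing in for what a real platform would learn from trained models over data from many students:

```python
from dataclasses import dataclass

@dataclass
class Attempt:
    correct: bool
    response_seconds: float  # delay between question shown and answer given

def next_step(attempts: list[Attempt]) -> str:
    """Pick the next activity for a student from their recent attempts.

    Hand-written thresholds here stand in for what a real platform
    would learn from data gathered across many students.
    """
    if not attempts:
        return "diagnostic"
    accuracy = sum(a.correct for a in attempts) / len(attempts)
    avg_delay = sum(a.response_seconds for a in attempts) / len(attempts)
    if accuracy < 0.5:
        return "revisit-prerequisites"      # struggling: go back a step
    if accuracy > 0.9 and avg_delay < 10:
        return "advance-to-harder-content"  # fluent and fast: move on
    return "more-practice"                  # otherwise consolidate

# A student answering quickly and correctly is moved on to harder content.
history = [Attempt(True, 7.0) for _ in range(10)]
print(next_step(history))  # prints "advance-to-harder-content"
```

Even this toy version makes the point: the platform never forgets an interaction, and the same decision logic can run for every student simultaneously, which is what makes personalisation at scale plausible in a way a single teacher cannot match.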

Conclusion

The issue of AI is not a binary one of AI as threat or saviour. The idea of AI as a threat also has its issues in terms of popular media; just think of the Terminator or HAL and you can see that perception may tend towards the negative, and that's maybe a bit of an understatement. The reality is that AI, like many other technology tools, will provide benefits but also risks and threats. There will be those who use it carefully and responsibly, those who use it carelessly and those who use it maliciously.

But I can say the same about the humble hammer.

References:

Sugata Mitra’s Hole in the Wall Experiment (2017), Revise Sociology

AI ‘is clear and present danger to education’ (May 2023), The Times

AI is an opportunity for education

Reading the “AI is clear and present danger to education” article in The Times the other day conjured up images of Harrison Ford, political intrigue and the risk of the collapse of government. OK, so I very much enjoyed the various Jack Ryan movies, particularly those starring Harrison Ford, hence the imagery; however, the article's focus was on concerns in relation to Artificial Intelligence (AI) and its potential impact on education, with the article citing head teachers as saying that AI “is the greatest threat to education”.

The headline paints a nice simple picture; however, as I sat down to write this blog piece it was clear to me that things aren't that simple. And as I wrote, it became clear in my mind that this issue is complex indeed and that a single post wouldn't allow me to do it any justice. This, therefore, is the first of a series of posts discussing AI and the danger it may present, as well as the opportunities it may also present.

Is AI a threat to education?

Yes, but this answer is focussed purely on the risks and negative impact. The question “can AI benefit education?” would also result, in my view, in an affirmative response. Is a hammer a threat or something of benefit? It depends on who is wielding it and for what purpose; the hammer is but a tool, although I note that AI is a far more powerful and flexible tool, for good or for ill.

We also need to ask what we mean by education. Do we mean education in its broadest sense, such as when a parent models behaviour for their child, or in the sense of the organisations and constructs of the formal education systems the world over? My reading of the article leads me to believe that the threat is to the current education system, its processes and practices. This system and its practices have long had critics of their fitness for purpose in the modern world, with the late Ken Robinson being one of them. His “Changing Education Paradigms” talk dates back to 2010, 13 years ago. So maybe a threat to current education practices may be a good, and possibly overdue, outcome. After all, little has changed in how the education system works globally in the last 100 years. Maybe AI is a much-needed catalyst for educational change.

I also note the article didn't apply a time frame to this threat. I have seen a post on social media recently suggesting a 50% risk of AI causing a catastrophe resulting in the loss of most human life, which also didn't provide a time frame. Looking far enough into the future you will always be able to get to a point where, between now and then, a 50% risk occurs; however, thinking about global warming, war, political divides, etc., I suspect we will reach a point where there is a 50% chance of human intelligence leading to such a catastrophe before the same risk in relation to AI is reached, assuming we aren't already there.

We also need to acknowledge there are other threats to education, including the challenge of providing access to education for all students across the world, workload issues as the education sector continues to seek to improve by adding more requirements and tasks to a teacher's role each year, and the challenge of teacher shortages. The solution to these issues is unlikely to involve maintaining the current status quo, so maybe these issues should also be seen as threats to the current education system. AI can be viewed as a threat, but it is far from the only one.

Conclusion

AI has the potential to be a threat to current education systems and processes, but maybe a catalyst for change has been needed for some time. That said, AI could have a negative impact on education; however, I would suggest it could also have a positive impact too. The likelihood, in my view, is that we get a bit of both. AI is here now and is not going away. If strict restrictions are put in place, either people will bypass these or the companies creating AI solutions will simply move to jurisdictions where the restrictions are less strict. AI solutions will continue to be created, continue to advance and continue to be used. My view therefore is that we need to treat AI as yet another technology tool, albeit one of the most significant in history, embracing its use and shaping it to have the positive impact we wish to see, while remaining aware of the risks and seeking to mitigate these as much as possible.

So maybe the newspaper article's title should have been: AI is clear and present danger and opportunity for education.

Sadly I don’t think the above makes for quite as snappy a headline.

References

AI ‘is clear and present danger to education’ (May 2023), The Times

Good enough?

In the world of education, it is easy to become obsessed with the pursuit of perfection. Teachers and students alike strive for excellence, academic, pastoral and otherwise, pushing themselves to achieve the best results possible and constantly seeking to improve processes, knowledge and skills. I look back on over 20 years of working in schools and see all the things that have been added for teachers, school leaders and support staff to do. These have all been added for good reason, to improve education or to address risks or dangers, but they have largely all been additions: additional systems, additional processes, additional statutory requirements, additional school requirements, inspection requirements, compliance requirements, etc. We cannot continue to add infinitely. Also, in this relentless pursuit of perfection, it is all too easy to overlook the value of “good enough” and the negative effects that perfectionism can have.

The concept of “perfect” is a subjective one, and what one person considers perfect may not be the same for another. The problem with striving for perfection in education is that it can lead to unrealistic expectations, which in turn can lead to feelings of failure, anxiety and stress. It can also increase workload, where workload is already a considerable issue impacting educators the world over. We can become so fixated on getting everything right that we lose sight of the bigger picture, and of what really matters and is most important. And what is most important is equally subjective: is it academic achievement, developing character, soft skills, sportsmanship, preparing students for future life, supporting student wellbeing, or one of the many other things which schools are involved in?

I believe the culture of constant addition is doomed to fail us; it is simply unsustainable. We do not have the resources, and this is already clear given ongoing discussions regarding workload in schools. As such we need to look towards what is most important and prioritise. We need to look towards “doing less”, which is one of the principles I have shared with my team in looking to identify the tasks and activities we do that add little value or provide little impact, seeking to cease these or spend less time on them. Now, this is a difficult process, as anything which has been added has been added for a reason; however, not all reasons are equal, and the impact and value of all tasks and activities are also not equal. And this is what is hard in comparing tasks and identifying which are worthwhile to continue and which can be ceased or reduced, while acknowledging that ceasing any task will have a negative impact: remember, we started a task for a positive reason, so ceasing or reducing time on it can only reverse this. But we need to start to reverse the culture of addition before we reach a tipping point, before the workload crisis goes beyond where it already is.

In terms of the difficult task of prioritisation I always come back to values. A school's values should help guide it in identifying what is important and what adds value, therefore helping to identify the things it might be possible to cease doing. And if not ceasing things, values should help in identifying priorities and the allocation of resources, so rather than stopping something, we may simply do less of it. These are the difficult discussions which need to happen: identifying how to divide up the limited resources available, and which areas or tasks cannot be done, should not be done, or will see fewer resources to make way for other things.

In schools and colleges we want to do the best for our students, but maybe in seeking to do so we need to recognise that best does not mean perfect, as this simply isn't possible; the resources, the staff, the time, etc. will never be sufficient for perfection. Therefore, do we need to become comfortable with “good enough”? As a manager of an IT support function I feel this is the right thing to do, although equally, as an educator, I am uncomfortable with it from a student and learning point of view, where I would want to deliver the best possible learning experience. But maybe the discomfort is unavoidable, and it is better to work with good enough than to try to be perfect across too many areas of education, the pastoral, academic, wellbeing, health, fitness, etc., such that we fall significantly short of even good across all of them.

The above is a bit of a rambling chain of thoughts, but in terms of sharing my thoughts, concerns and ideas, hopefully it is Good Enough!

100+ years of exam halls and paper exams

And so, the exam season is in full flow, with students across the world once again sitting in rows in exam halls, which are often simply school sports halls, with pen and paper to complete their end-of-course GCSE and A-Level exams.   Looking at those halls, the setup is very much the same as for exams from 100 years ago or more, albeit education is now more accessible to the masses and exam halls now contain posters about “mobile devices” and how these are prohibited.    How is it possible that the exams process has changed so little?

Let’s consider the wider world.  I asked ChatGPT for the significant technology advancements of the last 100 years and it came up with the below:

Computing and Information Technology:

The development of electronic computers and the birth of modern computing including the emergence of the internet and the World Wide Web, revolutionizing communication, information sharing, and commerce.

Transportation:

The rise of commercial aviation, making air travel accessible to millions and facilitating global connectivity along with the development of high-speed trains and advanced railway systems, enhancing transportation efficiency and connectivity.   Also, the proliferation of automobiles and the continuous improvement of electric vehicles and autonomous driving technologies.

Medicine and Healthcare:

The discovery and widespread use of antibiotics, dramatically reducing mortality rates from bacterial infections along with the development of vaccines against various diseases, leading to the eradication of smallpox and the control of many others.   Additionally, advancements in medical imaging technologies, such as X-rays, MRI, and CT scans, enabling non-invasive diagnosis and improved treatment planning plus progress in genetic research and biotechnology, including the mapping of the human genome and the development of gene therapies.

Space Exploration:

The first human-made object in space, the launch of Sputnik 1 in 1957, and subsequent manned space missions, culminating in the moon landing in 1969.    The establishment of space agencies like NASA, ESA, and others, leading to significant advancements in space technology, satellite communications, and planetary exploration.   And more recently the development of reusable rockets, such as SpaceX’s Falcon 9, reducing the cost of space travel and opening up opportunities for commercial space exploration.

Energy and Sustainability:

The expansion of renewable energy sources, including solar and wind power, as alternatives to fossil fuels plus improvements in energy storage technologies, such as lithium-ion batteries, facilitating the growth of electric vehicles and renewable energy integration.   This combined with a greater focus on sustainability and environmental awareness, driving innovations in energy-efficient buildings, green technologies, and eco-friendly practices.

Communication and Connectivity:

The evolution of telecommunications, from landline telephones to mobile phones, and the subsequent development of smartphones with advanced features and internet connectivity.   Also, the introduction of social media platforms, changing the way people connect, share information, and communicate globally and the advancement of wireless communication technologies, such as 4G and 5G, enabling faster data transfer, enhanced mobile connectivity, and the Internet of Things (IoT).

Conclusion

A lot has changed over the last 100 years, with much of the above occurring in perhaps the last 10 to 20 years, yet in education we are still focussed on terminal exams just as we were over 100 years ago.   We still take students in batches based on their date of birth and make them sit the same exam at the same time.    These exams are still provided as a paper document, completed with pen or pencil while sat in rows and columns in sports halls in near utter silence.  The papers are then gathered up and sent away to be marked, with results not available for almost three months.

The above might have been OK 100 years ago, but with the modern technology now available to us, surely we should have made some progress.    I suspect that, although there have been those who have suggested change, there hasn’t been a catalyst to drive it forward.   My current hope is that recent advancements in Artificial Intelligence (AI), and the recent discussion regarding its use and potential, may be the catalyst we need.   Here’s to not still using the same exam processes 10 years from now, never mind 100!

An AI divide?

Artificial Intelligence (AI) is the big talking point at the moment, with all its many potential benefits along with some risks and challenges.   One challenge which doesn’t seem to be discussed as often, however, is that of digital divides, where AI might represent yet another divide between the haves and have-nots, and the cans and cannots.

Digital Divides

The term digital divide refers to gaps between people or communities who have access to and use of digital technologies such as computers, smartphones, and the internet, and those who do not. This gap can be attributed to a variety of factors, including socioeconomic status, geographic location, age, race, and education level.

Before considering AI as an additional divide, it is worth noting that several different types of digital divide already exist. Some examples include:

Access divide: This refers to differences in physical access to digital technologies, such as lack of broadband internet access in certain areas, lack of availability of computers or smartphones, or lack of access to digital skills training.

Usage divide: This refers to differences in how people use digital technologies, such as differences in the types of devices people use, how often they use them, and what they use them for.

Skills divide: This refers to differences in digital literacy and skills, such as the ability to use digital technologies effectively and safely, the ability to access and evaluate online information, and the ability to create and share digital content.

Content divide: This refers to differences in the availability and quality of digital content, such as differences in access to online educational resources, news and information, and cultural and entertainment content.

Economic divide: This refers to differences in the economic benefits and opportunities that digital technologies can provide, such as differences in access to online job opportunities, e-commerce, and digital financial services.

The AI Divide

Artificial Intelligence represents a potential additional divide, although the issues may sit under the divides above: access to AI solutions, the relevant skills and understanding to make appropriate use of AI, and the resources to do so.     Personally, I present AI as a new, additional divide rather than one contained within the above, due to what I see as the wide-ranging potential impact AI can have on the world as it is now.   In my area, that of education, I feel this is particularly relevant.  Aside from student access to technology, skills, etc., some schools will seek to explore the use of AI solutions, whereas in other cases there may be a drive to block, filter or control access.

Considering the divide that AI may create, I can see issues for those who do not have access to AI or the skills to use it.   Those who do may become more efficient, using AI to carry out more mundane tasks or to provide a basic starting point for a task rather than having to start from scratch.   The likes of the 30mins challenge, which shows how much more might be possible through the use of AI tools, illustrates this nicely.   From a creativity point of view, AI might, as Dan Fitzpatrick has said, “democratise creativity”, meaning those who can and do use AI may have greater potential for creative outputs than those who do not or cannot.   And those are but two areas where AI use and understanding may create a divide; I suspect there are many others.

Conclusion

We wish for equitable treatment for all; however, the ongoing discussion of digital divides highlights that, although progress may be being made, we aren’t there yet.   The increasing discussion and use of Artificial Intelligence adds yet another factor which can create a digital divide and therefore negatively impact equity.   We need to be conscious of this in the same way as we are conscious of the other challenges of AI, including bias, attribution, accuracy, etc.

Cyber: What is a sophisticated attack?

Looking at news reports or alerts relating to cyber incidents in schools, they often refer to “sophisticated” attacks.   This got me wondering what a sophisticated attack, as opposed to a simple attack, might look like.   Are the attacks quite as sophisticated as is suggested?

I decided to make use of AI, asking ChatGPT for its view on what might represent a sophisticated attack, and it suggested the below:

Advanced Persistent Threat (APT) attacks – These are long-term targeted attacks that involve a high degree of planning and coordination, often carried out by state-sponsored actors or well-funded criminal organizations.

Zero-day exploits – These are attacks that target previously unknown vulnerabilities in software or hardware, which can allow attackers to bypass security measures and gain unauthorized access to systems.

Social engineering attacks – These attacks involve the use of psychological manipulation to trick users into divulging sensitive information or performing actions that could compromise security.

Ransomware attacks – These are attacks that involve the use of malware to encrypt data and demand payment in exchange for the decryption key, often targeting critical systems such as hospitals or government agencies.

Distributed Denial of Service (DDoS) attacks – These attacks involve overwhelming a target system with a flood of traffic from multiple sources, effectively rendering it unusable.

Considering the above, I would suggest an APT attack is likely to be very sophisticated, being largely nation-state in nature; however, these types of attack are more likely to be targeted against national infrastructure, key research, government offices, etc., rather than schools.     I would suggest the creation of zero-day exploits is also very sophisticated, in terms of experimenting and identifying the exploit in the first place; however, once exploits are shared, the picture is less clear.  Some exploits are likely to be sophisticated and require significant work and expertise, while others may be much simpler in nature and require little more than following the shared instructions for exploiting the identified vulnerability.

Social engineering attacks are likely the most common type impacting schools and, as with the discussion above regarding zero-day exploits, some social engineering attacks may simply involve a phishing email, whereas others may involve more sophisticated reconnaissance and intelligence gathering, followed by the creation of cloned websites and a spoofed email.   As such, the level of sophistication can vary.

In terms of ransomware, again the level of sophistication varies, with some ransomware simply encrypting whatever data it can reach, while other variants might exfiltrate data or seek out and encrypt backups.  There is also the issue of how the ransomware gets delivered, whether via a social engineering attack or a zero-day exploit, with the delivery method also having variable sophistication.    And the same can be said for DDoS, in terms of the number of hosts leveraged in the attack, how the hosts were compromised and controlled, etc.

From the above it seems clear that each attack method spans a range of possible sophistication, and also that a single attack might use multiple attack types: a combination of social engineering leading to the ability to leverage a zero-day exploit and then deliver ransomware, for example.    So maybe the reports and warnings of “sophisticated” attacks are justified.

But what if we look at the situation through a different lens and perspective.    Let’s consider cyber attacks as less targeted and more of a general attack across a mass of organisations or schools.   An attacker might start with a phishing email, as I mentioned above, sending this out to a large number of organisations using email addresses gathered from the dark web.    The 2020 Phishing Benchmark Global Report suggested that on average 13% of users will click on a phishing email, so this could lead to attempted credential compromise or delivery of malware.     Defensive measures such as MFA and EDR may protect some users here, so the actual successful compromise rate is lower than the 13% of all users who received and fell for the email.    Let’s assume 75% of the attempted compromises are blocked; this leaves just over 3% of all recipients with credentials compromised or malware delivered.    At this point the cyber criminal is now focussed on this 3%.    They might check the permission levels of all the compromised accounts to see if they have got any admin accounts, or look for those particular organisations which represent juicier targets, which I would suggest includes schools and colleges.    They might also now try a Business Email Compromise (BEC) attack from the compromised accounts, seeking to gain access to an admin account.    The process could be iterative, with each step relatively simple: a phishing email sent to many; an automated BEC attack from the accounts compromised in the first round; a further BEC attack from the new accounts gained in the second round; and so on until admin credentials are compromised and ransomware delivered, for example, or until an interesting or high-value target is identified among the compromised credentials.

Looked at this way, the attack is a simple iteration, but viewed from the point of view of the organisation which eventually suffers a ransomware infection it seems complex and “sophisticated”.   The ransomware was installed and encrypted data, as a result of admin credentials compromised via a BEC attack, which in turn can be traced back to another BEC attack and an originating phishing email.   As a process of four or more steps it seems fair to consider it “sophisticated”.    But consider that the attacker began with 100,000 targets, whittling this down with each successive step to the one or small number of eventual victims.    At each step, organisations where users were suspicious, where defensive measures were successful, or which were simply lucky managed to avoid the attack, but probability suggests at least a few organisations would remain and become victims.    On this view it may be possible to consider the attacks as less surgical and “sophisticated” and more brute-force and a matter of probability.
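The funnel described above can be sketched as a few lines of arithmetic. This is purely illustrative: the 13% click rate comes from the 2020 Phishing Benchmark Global Report cited above, the 75% block rate is the assumption made in the text, and the number of BEC rounds and the per-round survival rate are hypothetical figures chosen only to show how quickly a mass campaign narrows to a handful of victims.

```python
# Illustrative phishing-to-ransomware funnel, using the rough figures
# from the post. All rates are estimates or assumptions, not real data.
recipients = 100_000       # initial mass phishing campaign
click_rate = 0.13          # avg. users who fall for a phishing email (report figure)
blocked = 0.75             # assumed share of attempts stopped by MFA, EDR, etc.

compromised = recipients * click_rate * (1 - blocked)
print(f"Accounts compromised after the initial email: {compromised:.0f}")

# Hypothetical follow-on BEC rounds: each round works the compromised
# pool, with most attempts failing or being spotted, until only a few
# eventual victims remain.
survival_per_round = 0.02  # assumed fraction surviving each BEC round
pool = compromised
for round_no in (2, 3, 4):
    pool *= survival_per_round
    print(f"Pool remaining after round {round_no}: {pool:.2f}")
```

Each individual step is trivial, yet a victim four rounds deep sees a multi-stage chain of compromise; the "sophistication" is an artefact of the funnel, not of any single technique.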

Conclusion

I suspect the description “sophisticated” can be considered appropriate given the multiple steps which may have been involved; however, I think it wrongly gives an impression of the level of cyber skill and expertise involved.   Now, I acknowledge that in some cases the attacks will have been complex and involved a high level of skill; however, I would suggest that in most cases this isn’t true.  The attacks involved multiple simple attacks made against a large number of organisations at once, iterating down to a small number of eventual victims through a process of elimination.

Another way to look at this situation is to consider how “sophisticated” may suit the narrative of leaders and marketing staff, so it may be more about shaping the perception of the attack than an actual assessment of the attack itself.   And thinking about it, maybe this is the truth of the matter: who is going to own up to being subject to a “simple” attack?  So maybe it’s no wonder that most, if not all, reported incidents are described as “sophisticated”. If this is true, then describing attacks as “sophisticated” in general press releases or alerts, although possibly appropriate, doesn’t actually tell anyone about the nature of the attack.

References:

2020 Phishing Benchmark Global Report (2020), Terranova Security & Microsoft.

Some wellbeing thoughts

Following on from my post of a week or so ago, in which I shared that I was feeling a little low, I thought I would share some thoughts on wellbeing.  I note I am currently in a slightly better place than I was, having re-established some positive habits such as reading and running, plus being a bit more conscious of my mood and seeking to better manage it.

So what is wellbeing?    I think it is key to establish what it is, as it is multifaceted, involving taking care of your physical, emotional and mental health.  I found a diagram which talked of deep health, including physical, emotional, mental, environmental, relational and existential elements, which might be a useful model.  But the key, no matter the model used, is that the elements are all inter-related.   I remember reading about an experiment where researchers asked their subjects to hold a pencil in their mouth, with some asked to hold the pencil lengthwise, thereby forcing a smile, and others endwise, forcing a kind of frown.   When asked how they felt, those who held the pencil lengthwise, which forced their mouth into the shape of a smile, gave more positive responses than those who held it endwise.   Also, when I think about my running: if I am not in a good place mentally or emotionally, I struggle and tend to run slower, while when I have a good run I generally feel better.   So physical events can impact on emotions rather than it always being the other way around, and vice versa; e.g. you feel good so you are more inclined to smile, or you smile or laugh and feel better.   Physical, emotional and mental are inter-related.   And herein lies the challenge: wellbeing involves a number of inter-related facets, so managing your own wellbeing isn’t easy.

My recent challenges highlight this.   If your mood isn’t in a good place you are likely to feel less happy and emotionally drained, which means you are less inclined to smile, which reinforces feeling emotionally drained.  Being drained, you are then less likely to engage in physical exercise, so you become less active and healthy, with this in turn likely to result in more emotional negativity.  Basically, it’s a negative spiral.

A positive spiral is also possible, where you get into a habit of physical exercise, which makes you feel more emotionally positive and balanced, leading to more smiles and laughs, which in turn make you feel better.   You are also more likely to engage with other people and social contact, with this again leading to more emotional positivity.

The above positive and negative examples are, however, extremes, and the reality is we spend a lot of our time in a delicate balance.   You might have the physical exercise bit sorted, with regular runs which make you feel good and healthy, but due to limited time you are not challenging yourself mentally, through reading for example, which takes a negative emotional toll as you are aware of the lack of reading.   So you allocate more time to reading, but then find you are spending less time with family or on exercise, so you feel better for the intellectual challenge but worse off for the reduction in social contact and exercise.    We want our wellbeing to be stable, albeit positive, but the reality is it is a constant rollercoaster in need of monitoring and management.

Ideally you hope to have positive wellbeing, but the reality is that your wellbeing will fluctuate with your efforts, your successes and failures, your interactions with others, and local and even national events, among other factors. You will have occasional negative spirals and positive ones. The reality is far less even than we would like, as I have tried to indicate in the below diagram:

So what are some of the things we might consider in seeking to manage our own wellbeing?

  • Exercise regularly: Exercise is one of the best ways to improve your physical and mental health. Regular exercise can help reduce stress, boost your mood, and improve your overall health.
  • Eat a balanced and healthy diet: Eating a balanced and nutritious diet can help you maintain a healthy weight, reduce your risk of chronic diseases, and improve your energy levels.  Note: Balance includes some enjoyable food and drink where I count my Irn Bru as part of this equation.
  • Get enough sleep: Getting enough sleep is essential for your mental and physical health. Aim for at least seven to eight hours of sleep each night.
  • Manage stress: Stress can have a negative impact on your physical and mental health. We should also note that challenge, or good stress, exists and is an important part of our wellbeing, given the need to feel successful.    We therefore need to seek out challenge and things which push us to achieve, while finding healthy ways to manage negative stress, such as meditation, yoga, or deep breathing, or whatever works for you personally.
  • Connect with others: Social support is crucial for maintaining good mental health. Spend time with family and friends, join a club or group, or volunteer in your community.
  • Practice self-care: Self-care involves doing things that make you feel good, such as taking a warm bath, reading a book, or going for a walk. Make time for self-care activities each day.
  • Seek help when needed: If you’re struggling with your mental health, don’t hesitate to seek professional help. Talk to your doctor, a mental health professional, or a trusted friend or family member.

Remember, managing personal wellbeing is a process that requires consistent effort and self-awareness. By taking care of your physical, emotional, and mental health, you can lead a happier and more fulfilling life but I think we also need to accept that things are not always positive and that we therefore need to manage the negative when it arises.

Wellbeing, like so many things in life, is messy and can’t be distilled into a simple list.