2023 Exam Results: A prediction

And so exam results day once again approaches and I would like to share a psychic prediction: that the newspapers will be filled with headlines about how A-Level results have fallen compared with last year.

Ok, so it isn't so much psychic as based on what we know about the UK exams system. We know that each year the grade boundaries are adjusted, and that the pre-pandemic trend was for grades generally to increase year on year. These ever-increasing grades weren't necessarily the result of improving educational standards or brighter students, although both of these may or may not be the case; they were the result of a decision taken when setting grade boundaries. With the students' exam scores available, the setting of the grade boundaries decided how many students would get an A*, an A, etc, and therefore the headline results. It's a bit like the old goal seek lessons I used to teach in relation to spreadsheets: using Excel, I could ask what input values I would need in order to attain a given result. So, looking at exam results, what grade boundaries would I need to set in order to maintain the ever-increasing grades while also avoiding anything that looked like grade inflation or other manipulation of the results? I note that within this general increase across all subjects, some subjects showed more improvement than others, and some showed dips, but summed across all subjects the results tended to show improvement year on year.
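
To make the goal seek analogy concrete, here is a minimal sketch in Python. The raw marks and target shares are entirely made up, and this is not how exam boards actually set boundaries; it simply shows how a boundary can be chosen to deliver a target proportion of top grades.

```python
def boundary_for_target(scores, target_share):
    """Return the lowest mark boundary at which the share of students at or
    above it does not exceed the target share of top grades."""
    for boundary in range(0, 101):
        share = sum(score >= boundary for score in scores) / len(scores)
        if share <= target_share:
            return boundary
    return 100

# Hypothetical raw marks for a small cohort.
scores = [38, 45, 52, 55, 61, 64, 67, 70, 72, 75, 78, 81, 84, 90]

print(boundary_for_target(scores, target_share=0.25))  # boundary giving roughly 25% top grades
print(boundary_for_target(scores, target_share=0.15))  # a stricter target pushes the boundary up
```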

And then we hit the pandemic, teacher assessed grades and the outcry about how an algorithm was adjusting teacher awarded grades into the final grades students achieved. Students and parents were rightly outraged and this system of adjustment was dropped. But how is this much different from the adjustment of the grade boundaries mentioned above? The answer is quite simply that teachers, and often students and parents, were aware of the teacher assessed grades and therefore could see, and quantify, the adjustment when compared against the awarded grade. With the pre-pandemic exams, teachers, students and parents don't have visibility of what a student's grade might have been before adjustments were made to the grade boundaries. They simply see the adjusted score and adjusted final grade. I note that a large part of the outrage was about how the grade adjustment appeared to impact some schools, areas or other demographics of students more than others; however, I would suggest this is also the case when the grade boundaries are set or adjusted, albeit the impact is less obvious, transparent or well known.

So, we now head into the exam results following the period of teacher assessed grades, with students back doing in-person exams. Looking at this from an exam board level, and reading the press as it was after the 2022 exam results, we know that a larger than normal increase was reported over the teacher assessed grade years, with this being put down to teacher assessed grades versus the normal terminal exams. As such, I would predict that the grade boundaries will be set in such a way as to make a correction. I predict the boundaries will therefore be set to push exam results downwards, although it is unclear by how much. It may be that the results are reduced only slightly to avoid too much negative press, or it may be that a more significant correction is enforced, on the basis that it can easily be explained by the previous teacher assessed grades plus the lack of proper exam experience among the students who sat their A-Levels this time; remember, these students missed out on GCSE exams due to the pandemic.

Conclusion

My prediction is that the exam results stats will be lower than last year, but not because students necessarily did worse; rather because of a decision that the results should be worse given last year's apparently more generous results, plus the fact these particular students have less exam experience than pre-pandemic cohorts. I suspect my prediction is all but guaranteed, but an interesting question from all of this has to be: is this system fair? I believe the answer is no, although I am not sure I can currently identify a fairer system. But I think in seeking a better system, the first step is to acknowledge that the current system isn't necessarily fair.

And one final thought, to those students getting their results: all I can say is very well done! This was the culmination of years' worth of study and effort, during a period of great upheaval the world over, unlike anything in my or your lifetime to date. No matter the grades, you did well to get through it. The grades, whatever they are, do not define you; your effort, your resilience and what you decide to do next, your journey, is what really matters. Well done and all the very best for the future!

AI, bias and education

Lots has been written about the risks and challenges in relation to artificial intelligence solutions, including the risk of bias. Rather less has been written that specifically explores these risks in relation to the use of artificial intelligence within education. As such, I would like to share some thoughts on this, starting specifically with the risk of bias and how it might impact on education, teachers and students.

Bias in AI systems

AI systems are generally provided with training data which is then used by the system in generating its output. The quality of this training data therefore has a significant impact on the usefulness of the resulting AI solution. If we provide the system with biased training data, such as an unrepresentative amount of training data relating to a specific event, group or other category, this will result in biased output. An easy example of this is the poor ability of AI-based facial recognition systems to identify people of colour. This likely relates to the fact that these solutions were created by largely western, white individuals who therefore used training data containing an unrepresentative number of western, white faces. The challenge, however, is that humans tend to be biased, albeit often subconsciously, so it is almost guaranteed that some bias will be intrinsic in the training data provided, and that this bias may be difficult for us to identify.
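
As a toy illustration of the facial recognition example, the sketch below tunes an acceptance threshold on synthetic "match" scores dominated by one group and then measures the false reject rate per group. The numbers are invented and no real recognition model is involved; the point is simply that a threshold tuned on unrepresentative data serves the under-represented group poorly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "genuine match" scores: the over-represented group dominates the training data.
genuine_a = rng.normal(0.80, 0.05, 950)   # over-represented group
genuine_b = rng.normal(0.70, 0.05, 50)    # under-represented group
training_scores = np.concatenate([genuine_a, genuine_b])

# Pick the threshold that accepts 99% of the genuine pairs seen in training.
threshold = np.quantile(training_scores, 0.01)

# The under-represented group is falsely rejected far more often at that threshold.
print("false reject rate, group A:", np.mean(genuine_a < threshold))
print("false reject rate, group B:", np.mean(genuine_b < threshold))
```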

So what might the impact be in relation to education?

Recommendation Systems

One of the areas where AI has been used for some time is in recommendation systems such as Google Search or the "you might like" features on shopping sites like Amazon. We will likely see similar systems in education which recommend subjects or topics for students to study, or which may even recommend future study paths from secondary into FE and then onwards into HE. But what if these solutions include bias? I would suspect a gender bias would be the most likely to occur in the first instance, as the AI solution tries to mirror the real-world training data it will have been provided, where the real world itself continues to be biased, advantaging males over females. This would also cause a significant problem in relation to how AI systems might respond to individuals who identify as non-binary, given there would be little training data relating to non-binary individuals. What suggestions would it provide when the vast majority of its data relates to males or females only?

Learning Systems

Expanding on recommendation systems, we will also have learning systems which gather data on students as they interact with learning material, providing real-time feedback and support, plus guiding students through learning materials specifically selected to meet the needs of the individual student. It will not be obvious how these systems arrive at their output; however, this output might include selecting content based on its difficulty or challenge level, or providing support and advice based on identified needs. What if there is bias in the training data which leads the AI to tend towards providing overly difficult or overly easy content to a specific subset of users? Note, this subset could be as simple as a gender, an ethnicity or users in a specific location, but more likely it will be a complex categorisation that we may not fully understand. The key issue here is that some students would be receiving more or less challenging learning content, or more or less support or advice, as a result of biased decision making within the artificial intelligence solution. How might this impact on students, their learning and their achievement?

Academic stagnation

Again, building on the above, we need to recognise that AI solutions are probability based. They use the training data provided and then use probability-based decision making to identify their outputs and actions. This use of probability means that outputs and decisions tend towards the average and the statistically most likely. In terms of education, this might mean that AI solutions will equally tend to reinforce the average: students in a school where previous cohorts have historically performed below the national average may be supported by AI solutions to achieve similar results, the historical average for the school, even where the ability of the individual student, or of a given year group, is above this national average. Looked at broadly across all education the world over, AI used in teaching and learning may tend to focus on a global average, which may disadvantage those who are capable of more than this. It may lead towards more equitable access to education, but it may also lead to stagnation as all educational efforts tend towards an average.
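
A very small, hypothetical illustration of this pull towards the average: a prediction built only from a school's historical results sits near the historical mean, so a genuinely stronger cohort is under-predicted. The numbers below are invented purely for illustration.

```python
import numpy as np

# Hypothetical historical results for one school, below a notional national average of 60.
historical_scores = np.array([52, 54, 55, 53, 56, 54])
current_cohort_ability = 63   # imagine this year's students are genuinely stronger

# A predictor minimising average error over the history alone simply predicts the historical
# mean, so the stronger cohort is pulled down towards past performance.
prediction = historical_scores.mean()
print(f"predicted: {prediction:.1f}, actual ability: {current_cohort_ability}")
```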

Divergence

We touched briefly on this earlier, but it also relates to stagnation and a tendency towards the average. AI solutions are provided with training data and make decisions based on this, so there is a tendency towards an average; but what if students diverge from this average? The lack of data specifically relating to these individuals will mean the AI tends towards the probable, providing advice or directing students according to how the "average" student might perform, which may be inappropriate for these divergent students. Consider an AI-based learning platform selecting content and providing advice based on the "average" student, but where the student using the system is neurodivergent. Is the content and advice likely to be appropriate? What might the impact be on the student, on their learning and on their mental health, when they are presented with inappropriate learning pathways, support and advice?

Reinforcing Bias

Where AI solutions are generating the learning content themselves based on individual students' needs, we also need to be conscious of how this might result in the reinforcement of stereotypes and bias. What if the AI solution has to create an image of a criminal, a nurse, a childminder or a lawyer? Is there the potential for the images the AI presents to reinforce gender, ethnic or other biases which already exist, and therefore which are highly likely to exist in the training data?

Conclusion

It is clearly right to consider the above risks. We need to be conscious of them so that we can try to mitigate them, by carefully reviewing the training data being used and through ongoing review of AI performance. We also need to consider whether, in some circumstances, it may be necessary to have separate AI solutions, with separate training data, for use in certain situations. Although these risks need to be considered, we also need to remember that in the absence of AI solutions in education, it has been humans who have made these decisions. And humans aren't devoid of bias; we just happen to be largely unconscious of it. It is easier to identify bias, or other incorrect or irrational behaviours, in others, including in AI systems, than it is to identify it in ourselves. We therefore need to be careful to avoid holding AI up to standards that we ourselves have never been able to meet.

I wonder whether in seeking to address bias in AI solutions the first thing we may need to do is step back and acknowledge the extent of our own human bias both individually and collectively.

Defining AI

This week I want to continue the discussion of Artificial Intelligence, posing the difficult question of what AI, by definition, actually is.  

The artificial element of artificial intelligence is reasonably clear, in that the intelligence is artificially rather than biologically created. Programmers developed the software code which creates the artificial intelligence solution. AI doesn't arise out of biological evolutionary processes, although it might be possible to suggest that the ongoing development of AI solutions is itself evolutionary.

But what about “intelligence”?  

What is intelligence? There are differing definitions. A Google search yields a definition from Oxford Languages which refers to "the ability to acquire and apply knowledge and skills". It would appear clear that an AI solution can acquire knowledge in the form of the data it ingests and in the statistical processing which allows it to infer new knowledge. We have also seen robotic AI solutions which have learned physical skills such as the ability to walk. So, from this definition, it appears that these solutions may show intelligence. That said, does the AI comprehend the meaning of the text it outputs in response to a prompt? Does it feel a sense of success, and are feelings and emotions a part of intelligence? Does it "acquire" this knowledge, or is it simply fed it by its designers and users? And does it choose what to acquire and what outcomes it wants, or does it only do as it's programmed?

Evolutionary intelligence?

Another definition of intelligence, with a more evolutionary bias, states that "Intelligence can be defined as the ability to solve complex problems or make decisions with outcomes benefiting the actor". This links to Darwinism and survival of the fittest in the benefit towards the actor. It may be that current AI solutions can solve complex problems, such as identifying patterns and anomalies in huge data sets; however, it is also possible to show where AI solutions fail at simple tasks we humans find easy, such as object recognition and spatial awareness. As to the actions of the AI benefiting the actor, if we assume the actor is the AI itself, I am not sure we can evidence this. How does the AI benefit from completing the task it is set? I suppose we could argue that the AI is completing a task for a user and that the user is the actor receiving benefit, or we could suggest that by benefiting the user, the AI as actor is more likely to continue to see use and development, which could be considered an act of self-preservation. But is the AI conscious of benefit? Does it even need to be conscious of benefit? Is it conscious of a need for self-preservation? Then again, are we humans conscious of our own need for self-preservation, or of the personal gains which may motivate us towards seemingly selfless acts?

Mimicry

The issue here, for me, is that I am not sure we are clear on what we mean by artificial intelligence, in that the term intelligence is unclear and may mean different things to different people. I suspect the term AI is adopted because AI solutions are able to mimic average human behaviours, such as responding to an email based on its content, analysing data and suggesting findings, or creating a piece of artwork based on the work of others. We just substitute "mimic some human behaviours" for "intelligence". In each case the AI solution may be quicker than we humans, or may produce better outputs, based on the averaging of all the training data it has been exposed to. In each case, and due to the training data, the outputs may be subject to inaccuracy and bias; and maybe that supports the use of the term intelligence, in that the inaccuracy and bias we display as humans is so clearly mimicked by the AI we create.

A task focus

Looking at the definition of "artificial intelligence" in its entirety, Oxford Reference refers to "the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages". This definition, of tasks normally requiring humans, seems to fit, given that without AI it is humans who respond to emails, create artwork, etc. So maybe AI is simply a system which can mimic humans in terms of its ability to complete a given task and produce a favourable output.

Conclusion

I think it is important we acknowledge the vagueness of AI as a term. Then again, artificial intelligence is simply one type of intelligence alongside biologically developed intelligence, and if we struggle to create a consistently adopted definition of intelligence itself, it is of little surprise that our definition of AI is no less vague. But maybe this is all semantics, and the focus should simply be on developing solutions which can carry out tasks previously limited to humans, and by extension human intelligence.

Considering human intelligence one last time, we need to remember that a child may show intelligence in speaking its first words or learning to stand, while an adult explaining chaos theory or performing an orchestral piece will also be showing intelligence. That's a fairly large range of intelligences. It is likely that with AI the range of intelligences will be equally broad, with our current AI solutions, including generative AI, sitting near the infant end of the continuum.

Before finishing, I also need to raise the challenge of mimicry of human efforts to complete tasks, where AI may mimic our behaviours all too well. It shows bias, much like humans do. It states with confidence facts that are either untrue or have limited supporting evidence, much like humans do. It is subject to external influence through its inputs and training data, again much like humans, and it creates "original" works based on the works of others but without a clear ability to reference all that it has learned and based its outputs on, again much like we humans. This all represents a challenge, where I see people trying to hold AI solutions to a standard that we humans would find difficult or even impossible to achieve.

For now, I think we need to accept the vague definition of AI. For me, it is a system which can complete tasks which would normally require some form of human intelligence, where inherently the system also tends to mimic some of the drawbacks of the human intelligence it seeks to copy. It's not perfect, but it will do for now.

References:

Definition of "intelligence", Oxford Languages (via Google): https://www.google.com/search?q=definition+intelligence

"Artificial intelligence", Oxford Reference

"Q&A – What Is Intelligence?", Johns Hopkins Medicine (hopkinsmedicine.org)

What does the future for schools and AI look like (The risks and challenges)?

My last post looked at the future of schools now that we have widespread use of generative AI, taking a generally positive viewpoint. This post reverses that, looking at the challenges and risks and taking a more pessimistic stance on AI in education.

Personalised Learning, for those with access

AI does have great potential in schools and in education; however, I feel it will highlight the already existing digital divide. Not all students have access to a device with which to use AI tools. Not all schools allow generative AI such as ChatGPT to be accessed, plus schools have varying degrees of IT infrastructure and support. Additionally, some schools will be more forward looking and will already be talking to staff and students about AI and generative AI, while others have yet to broach or even consider the subject. As such, the ability to access, to understand and to use AI positively will be varied rather than uniform for all individuals. AI might therefore serve to widen the digital technology gap which already exists, with those with access to technology, infrastructure and support benefitting from the personalised learning of AI, while those without languish further and further behind.

Lacking diversity

We also need to consider how AI works and the training data it has been provided. Much of the AI development will have happened in the western world, where technology staff are still more often English speaking, male and white. This creates a bias in the resulting AI solutions, and this bias has been widely reported. Fundamentally, our current generative AI uses its training data to generate the output it creates, with this training data largely coming from the internet itself and with the process based on statistical modelling. This results in AI outputs which tend towards an average or statistically probable output based on the available training inputs. How does this then impact on those who stray from this statistically average person or response? A perfect example of this lies in an email I saw shared on social media (see on twitter here), where an individual responded indicating they would prefer a human rather than an AI-generated response. It turns out the email was human generated, and the original sender proceeded to indicate "I'm just Autistic".

What might the broader impact of AI, trained to output based on an average person or response, be on those who are neurodiverse and therefore differ from the average or norm? I will admit this issue isn't new and can be seen in the Google search engine's AI-based responses. It tends towards the common answers, those that in probability are most likely to be favoured by users in general. The less common or more divergent answers, opinions and views are buried further down the search responses, likely on later pages where few people ever go. But remember, your search engine provides a list of links to look at, whereas generative AI tends to provide a single response to consider in the first instance. So the issue may be more evident where generative AI is in use.

A collapsing model / Homogeneity

Another challenge in relation to AI is its need for more and more training data in order to improve. The issue is that generative AI will become increasingly responsible for the content being published online, which in turn is commonly used for the training of AI solutions. We will have a situation where AI solutions generate content, which is then ingested as training material by AI, leading to more content. And as AI starts to learn from itself, and given generative AI's tendency to move towards an average response, the AI models may weaken and fail. It's a bit like genetics, where a limited gene pool leads to limited diversity and a lack of ability to adapt to environmental change. This in turn could only deepen the issue of AI solutions lacking the diversity needed to support a diverse user base.
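
The gene pool analogy can be shown with a toy experiment: repeatedly fit a very simple model (just a mean and a spread) to data sampled from the previous generation's fitted model. Because each generation trains on generated rather than real data, the fitted spread tends to shrink over time, i.e. diversity is lost. This is only a sketch of the idea, not a claim about any particular generative model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Start with a small sample of "real world" data.
data = rng.normal(loc=0.0, scale=1.0, size=20)

for generation in range(1, 51):
    mu_hat, sigma_hat = data.mean(), data.std()      # "train" a simple model on the current data
    data = rng.normal(mu_hat, sigma_hat, size=20)    # the next generation learns from generated data
    if generation % 10 == 0:
        print(f"generation {generation}: fitted spread = {sigma_hat:.2f}")
```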

Blackbox AI

The black box nature of AI solutions is yet another issue which could be considered a risk. AI solutions are generally black box solutions, so we cannot see how they arrive at their final output. This means that we may be blind to bias, inaccuracies, etc which exist within an AI solution as the result of the training data it has had access to. AI platforms may be constantly reviewing student activity, and from this may categorise students in ways we don't understand, then responding with learning content influenced by bias intrinsic to the training data. From the point of view of schools, where a student's learning, future learning and future life are at stake, it represents a concern and a risk if we are unable to understand why a particular student was provided a certain learning path over other options. What if the AI, through bias, identifies a student as lower ability and therefore proceeds to offer low-challenge content?

Convenience and laziness

As a benefit, AI solutions like ChatGPT can make things easier, as we can easily get a generative AI solution to draft an email or a document; it is simply more convenient, faster and requires less effort on our part, but the risk is that we become lazy as a result. There is already a bit of a panic about students simply using generative AI to create their coursework and homework for them. We may also become overly reliant on these solutions for our answers and less able to think for ourselves, and we may become less likely to stop and question the responses we receive. And guess what, this isn't new either. We already see it in social media, where I recently saw a post based on an article which referenced a piece of research. On social media some individuals jumped on the content of the article and what it said about the findings of the research, but upon further inspection the research made no such findings. Convenience in accepting the article's summary of the findings had overtaken proper checking of the source material. And with AI solutions becoming more common, and even supporting the creation of video, we likely need to be more questioning now than we have ever been in the past. But maybe there is an opportunity here, if the convenience frees up time which is then used to allow us to be more questioning and critical; I suspect this is me being hopeful.

Data Protection

The DfE guidance states that schools should not be providing personal data to AI solutions. This is due to the risk in relation to AI and data protection. If the AI learns from its inputs, then it might be possible to provide prompts which surface these inputs; so if we entered student data such as exam results, it might be possible for someone else to write a prompt which would result in the AI providing them with this data, even if they have no legal right to access it. There is also a similar risk, if the prompts and data we provide form part of the overall training data, that a data breach at the AI vendor would result in the data being leaked to the dark web or being otherwise used by criminals.
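
One practical mitigation is to strip or pseudonymise personal details before any text leaves the school for an external AI tool. The sketch below is a deliberately simple, hypothetical example; the record, names and replace logic are invented, and a real system would need far more robust redaction.

```python
import re

def pseudonymise(text, names):
    """Replace each known name with a neutral placeholder before the text leaves the school."""
    for i, name in enumerate(names, start=1):
        text = re.sub(re.escape(name), f"Student {i}", text)
    return text

note = "Jane Smith scored 42 in the mock exam and may need extra support."
print(pseudonymise(note, names=["Jane Smith"]))
# "Student 1 scored 42 in the mock exam and may need extra support."
```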

We also need to consider the long-term safety of our students. If an AI vendor has large amounts of data on students, is there potential for the vendor to share or use the data in a way that is not in line with our expectations as educational establishments? What if the data is sold to an advertising company to help better target students in relation to marketing campaigns, even providing individualised targeted adverts based on data gained as they worked through AI learning content? What if the data is used by governments to target members of society who don't fit their blueprint for the ideal citizen? I am thinking about Orwell's 1984 here, which may be a bit of a stretch, but if we are providing student data to AI solutions, or putting students in front of AI solutions we expect them to use, how are we making sure their data is protected?

Conclusion

I have tried to avoid the “beware AI will take over the world” and/or “AI will kill us all” message.   I have tried to focus on education and some of the things we need to consider in our schools.   The reality is that AI is here today and will only get better, that it has many potential advantages however there are also risks and concerns we need to be conscious of.   We cannot however be so worried that we sit, discuss and legislate, as by the time we have done this, AI solutions will have already moved on.  

For me, we need to engage with AI solutions in schools, seek to shape the positive use, while being aware and conscious of the risks and challenges that exist.

What does the future for schools and AI look like?

I have previously written about the future and cyber security for schools, so I thought it might be equally useful to consider Artificial Intelligence (AI) and schools and what the future might look like given we now have all of these generative AI tools available at our fingertips and the fingertips of our students.

Personalised Learning (for students and staff)

This, for me, is the key advantage: an AI solution, gathering data on a student's every interaction with online learning content, can then provide individualised feedback to that student. The current classroom model of a teacher and a number of students means that each student only gets a fraction of the available teacher time, no matter what strategies are employed. But with an AI solution, each student would get the full attention of their own online AI-based tutor. Khan Academy's Khanmigo gives a taste of what this might look like. Now, the likelihood is this will first impact the core subjects such as Maths, English and Science, the subjects for which there are already a large number of learning platforms with inbuilt content available, albeit without the AI personal tutor element. After this, I suspect we will see growth into the other subject areas, although at a slower rate.

And why should this personalised experience be limited to students? Couldn't it also provide personalised professional development content, curate research materials on pedagogy based on your interests, link you up online with colleagues in other schools for support and ideas, etc, providing regularly updated recommendations to help your professional development journey?

A personalised learning experience may also free up some time within the curriculum, plus free up teachers, to focus on the things which AI is not able to address.    This might therefore allow for more discussion as to the impact of technology on our lives, looking at digital citizenship.   It might provide time to consider and discuss human characteristics and issues such as wellbeing, mental health, equality and diversity and resilience to name but a few.  

Personalised AI-based learning will also enable real-time feedback to parents, giving a much more detailed and regularly updated report on a student's progress, their strengths and areas for development. In turn this will help with teacher workload, as it will reduce the need for the regular writing of reports which are sent home, thereby freeing teachers up to focus on teaching and learning.

Personalised Learning for students with special educational needs

Linked to the above is the potential for students to receive additional AI-based support where they have special educational needs. By looking at the data associated with students' interactions with a learning platform, an AI solution might be able to highlight possible learning needs at an earlier stage than a teacher may be able to; this is simply due to the AI's ability to focus on each individual student plus the wide variety of data it would have access to. Upon identification, an AI platform might then be able to provide appropriate advice, guidance and additional support in line with the student's needs. And this would be available to every student. This is one significant advantage of AI within education: the ability to scale up personalised 1:1 learning content and support.
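
As a deliberately simplified sketch of the idea, the snippet below flags any student whose error rate on a topic is well above the group average, as a prompt for a teacher to look closer. The data and threshold are made up; real platforms would draw on far richer information.

```python
# Hypothetical attempt data for one topic.
attempts = {
    "student_a": {"questions": 40, "errors": 6},
    "student_b": {"questions": 38, "errors": 21},
    "student_c": {"questions": 42, "errors": 8},
}

rates = {s: d["errors"] / d["questions"] for s, d in attempts.items()}
group_mean = sum(rates.values()) / len(rates)

for student, rate in rates.items():
    if rate > 1.5 * group_mean:   # crude threshold, purely for illustration
        print(f"{student}: error rate {rate:.0%} vs group average {group_mean:.0%} - worth a closer look")
```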

Creativity

We have long talked about creativity; I remember it being one of the 4 Cs, although there are now more than 4 Cs. The issue has, in my experience, been the difficulty in convincing students of their creativity. They might have ideas but find it difficult to translate these into the reality of a piece of written text, an image or graphic, an animation, video or some other output. Through AI tools, the power of creativity is now easily in every student's hands. Not sure what to include in a script? ChatGPT can help. Need an image but not that good with artwork? Midjourney can help. And the same goes for video content, audio, music, programming code and many other areas. Through the use of AI tools every student can exercise the creativity of their imagination. As I heard Dan Fitzpatrick describe it, AI will "democratise creativity".

Questioning

I think one area which AI will help us build in relation to education is the art of questioning. Generative AI by its very nature requires questioning or prompting. AI also allows for the creation of realistic images, audio or video, which therefore requires us to question more often what we read, see or hear, especially when accessed via social media and the internet. I note that our conventional media is equally guilty of simplistic reporting and presenting a biased picture; it is just that social media does it with a greater volume of content and 24/7, unlike scheduled national news broadcasts and daily newspaper prints. Questioning, being inquisitive and constructively critical, debating, and navigating complex and confusing problems will become increasingly important, therefore schools will need to spend more time working on this with their students.

What I haven't mentioned

There are a couple of AI benefits which I haven't mentioned above, largely because they are already here and have been for some time; however, in the interest of completeness I will mention them briefly. Firstly, tools to help students where English is not their primary language. There are already tools to help with translation of text, such as Google Translate, and also with translation of spoken content, such as displaying subtitles in a student's native language as a teacher works through a PowerPoint slide deck.

Another area is grading and marking; we have long had tools to allow the automatic marking of multiple-choice tests, however increasingly we have seen the ability to mark written responses against marking criteria or a rubric. This will only continue, with further opportunities for AI-based automation being identified on an ongoing basis to help teachers and students.
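
As a rough sketch of how rubric-based marking with a generative AI tool might be wired together: the rubric, question and answer below are invented, and ask_model() is a hypothetical placeholder for whichever AI service a school adopts, so no specific vendor API is implied. Any marks suggested this way would still need human moderation.

```python
RUBRIC = """Band 3 (7-9 marks): clear argument, accurate evidence, evaluates both sides.
Band 2 (4-6 marks): some argument, partial evidence, limited evaluation.
Band 1 (1-3 marks): mostly descriptive, little relevant evidence."""

def build_marking_prompt(question: str, answer: str, rubric: str = RUBRIC) -> str:
    """Assemble a single prompt asking a model to mark an answer against the rubric."""
    return (
        "Mark the following answer against the rubric.\n"
        f"Question: {question}\n"
        f"Rubric:\n{rubric}\n"
        f"Answer: {answer}\n"
        "Reply with a band, a mark out of 9 and one sentence of feedback."
    )

def ask_model(prompt: str) -> str:
    # Placeholder: swap in a call to the school's chosen AI service here.
    raise NotImplementedError

print(build_marking_prompt("Explain one cause of the First World War.",
                           "The assassination of Archduke Franz Ferdinand triggered the war."))
# suggested_mark = ask_model(...)  # would return the model's suggested band, mark and feedback
```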

Conclusion

There are likely many more ways AI will impact on education, especially if you start to look beyond the 5 to 10 year mark and consider more general AI as opposed to the narrower AI and generative AI we have now. At this point I don't feel confident enough to even propose what education might look like, as it may be indistinguishable from what we see as education today. That said, education also tends to be slow to change, and any significant change would require everyone to get on board: students, parents, teachers, schools, government, inspection regimes, exam boards, employers and many more. As such, I suspect there may be an amount of "kicking and screaming" as educational change comes to pass. With two such significant catalysts for change in the pandemic and now the sudden ease of access to generative AI, I feel that change is all but inevitable.

There are some pitfalls and challenges in relation to AI and education; however, I will pick these up in my next post. For now though, let's conclude on the fact that AI is here now, and will only get better. We, those in schools, need to shape the solutions and their use while campaigning for appropriate guidance, frameworks and regulation of what, how and when AI should be used in schools. We cannot, however, wait for the regulation to occur, as by the time it does the technology will already have moved on.

Ethical AI

I have seen lots of posts talking about the "ethical" use of AI, and I have shared a few short posts on it myself; however, I feel it is about time for a slightly longer post, having had a bit longer to think about the subject.

Ethics

Firstly, let's just take a definition for ethical:

“Ethical comes from the Greek ethos “moral character” and describes a person or behavior as right in the moral sense – truthful, fair, and honest. Sometimes the word is used for people who follow the moral standards of their profession”

Vocabulary.com (as at 14/07/2023)

So for something to be ethical it needs to be truthful, fair and honest. At first glance this all seems pretty simple: who would want things to be false, unfair or dishonest? Well, the first thing to accept is that we will have some bad actors using AI who are not bothered about ethics, simply looking for money, to cause disruption, etc; cyber criminals using AI tools, for example. But we will get to these people in a later post, just to keep things simpler for now. Let's assume for a minute that everyone wants to be ethical and to behave appropriately.

Ethics according to whom

This is where we hit our first big problem, in my eyes. Let's take truthfulness in the first instance. In some domains this might be simple to achieve, identifying what is demonstrably true (e.g. basic maths), however in our ever more complex and nuanced world it is becoming increasingly difficult to show what is true. I knew the truths that there were 9 planets in the solar system and that there was research underpinning learning styles (VAK) theory, however in both cases these truths have since proven to be incorrect. The truth changed. And that's before we get to politics, religion and football, where everyone's own truth may be slightly different, with everyone able to provide the evidence which, for them, supports their own truth. So what is the truth and who will be responsible for defining it? And how will an AI scraping the internet for training data be able to tell the difference between yesterday's truth and today's new truth, my truth and your conflicting truth? Fairness and honesty suffer similar issues.

But this isn't new

The above isn't, however, limited to technology and AI; it is a societal issue. When having a discussion, when presenting or sharing ideas, when purchasing rather than stealing, we need to make decisions based on what is ethical, right, truthful, fair and honest, and there may be differing views on what is right, truthful and fair. How do we deal with this in our day to day lives? Surely we must deal with it, as otherwise life would grind to a halt.

The answer, in my opinion, is that we have laws that govern us, plus we also have our own moral compass. We have defamation, equality and discrimination law, for example. But we also have our own moral compass, built from our experience and upbringing; my mother often told me to "do unto others as you expect others to do unto you", for example. For most, this seems to work.

A world of AI and ethics

In a world of generative AI we still have the same laws regarding defamation, etc, so these seem to provide the same guidance in relation to ethics as they provide to our wider existence in society, and we also have the same old inner compass. I, as a user of AI, should take care that content I share is truthful, fair and honest. The vendor providing the AI tool should do what they can to ensure that their tool is as truthful, fair and honest as possible, accepting that this is difficult given changing truths and differing views on fairness and honesty. So, in the face of these difficulties, we fall back on the laws and our inner ethical compass.

I suspect the key issues here relate to vendors of AI solutions and how they make sure their platforms are fair, truthful and honest in the face of use across national borders, with differing beliefs on fairness and differing laws, and in the face of content being posted online with either accidental inaccuracy or purposeful malicious intent. But this challenge is no different from the very same challenge faced by social media.

The second challenge is the interface between AI and the user.    If a user uses an AI solution to create an image and submits it to a competition, where the image is identified to have clear commonalities with a piece of copyrighted work, who is responsible for the copyright infringement?   Is it the AI vendor for including the content in their training data or is it the user for presenting the content as original?    I suspect I will post a bit further on this discussion shortly, however I previously talked briefly about it in my article in EdExec magazine here.

Conclusion

The above two issues are the main ones I see in relation to AI and ethics; other than this, I am not sure what it is we are discussing. And even here, is it about ethics at all, or simply about establishing who is responsible for content coming out of generative AI solutions? How responsible are the owners of the AI solution, and what responsibility does an AI user have for the content they create and use?

Ethics sounds like a good discussion point: let's talk ethics and AI. But does it mean much? It's too broad, much as the discussion of AI itself is a little on the broad side. If discussing ethics and AI, why not discuss technology and ethics, or even just ethics itself?

I think we need to stop talking about ethics in general, in relation to AI, and get to the specifics of our concerns. Is it concern about bias, error, misuse, etc? And in what way is this concerning? Is it that bias in training data sets will lead to homogenised responses which would therefore be unfair and discriminatory against minority groups? And if this is the issue, how is it addressed through the framework of current discrimination law, and what shortfalls exist in a world of generative AI?

Let's drop the "ethics" soundbite in our discussion of AI. Of course we want its use to be fair, truthful and honest, much as we want everything else in life to be fair, truthful and honest, but what specifically does this mean, and how can we adjust the current frameworks in relation to fairness, truthfulness and honesty to encompass AI?

A [broad] review of 2022/23

And so the 2022/23 academic year draws to a close, so I thought I would share some brief thoughts and reflections on the completion of yet another year. This is the first of two posts I will share in terms of reflection, this one focussing on broad reflections, while the future post will focus much more on some of my more personal reflections.

So what are my main takeaways from the last academic year?

Technology

This year I managed to get around and visit a couple of different subjects and see how others were using technology in their lessons. It was great to see how technology was embedded and almost transparent in its use, but also heartening to see teachers deciding to use non-tech solutions in their lessons where this better met the needs of the students and the lesson content. Technology is a brilliant tool, but equally we need to reach a balance in its use and be willing to use or not use technology as appropriate. My sense post-pandemic was that there was a real danger of a rubber band effect on technology use, with technology usage quickly regressing to pre-pandemic levels; however, this does not quite seem to be the case. That being said, looking outside of schools to exam boards and other centralised education functions, they have moved very little, and I suspect this will be the most significant challenge for education. Schools themselves are able to move forward and progress in their use of technology, however for education to move as a whole will require bodies such as exam boards, government education departments, inspection regimes, etc to move forward with their adoption of technology. It will also require them to ensure that their staff, including those who visit schools to carry out moderation or inspection, or to provide support or consultancy, all have a reasonable level of technology skills. My experience to date suggests both the technology adoption and the technology skills are currently lacking.

Cyber

Linked to the above, I have seen exam bodies providing software for use in educational establishments where the software required local admin privileges. I have also seen hardware sold where the operating software only supported an outdated version of a networking protocol rather than the newer, more secure version. No update was available, with the only solution the vendor could suggest being to purchase their newer, more advanced and unsurprisingly more expensive hardware option. In order for schools to be better protected against the increasing cyber threats of the world we live in, we need to ensure we do the basics, which includes limiting the permissions provided and using "least privilege" as standard, as well as ensuring updates are available and that the newest protocols and standards are used. Those organisations and companies providing software and hardware to schools need to ensure that cyber security is baked into their solutions by design, and where it is not, these solutions should simply be prevented from use in our schools and colleges. Until we address the issue of EdTech software and hardware being designed with security in mind, both in terms of current issues and in terms of future risks across the anticipated lifespan of the solution, education will continue to be an easy target.

Generative AI

Generative AI has really hit the news, particularly in the last 6 months. I have already written a fair amount on generative AI, however my main takeaway from the year is that generative AI is here and will only get better. As has been said a few times, the current AI solutions available are the worst they will ever be. We therefore need to shape its use by experimenting and identifying how it can help teachers, students and the wider school community. Equally, we must ensure that those using AI understand the risks and implications of its use. This can be done in a pragmatic way focussed on the present, however we must also look to the future and how AI might significantly change the world of education. Will teaching and learning look the same in 5 or 10 years' time? What about assessment and exams? Will changes finally allow greater time to work with students on digital citizenship, along with health and wellbeing in a digital world? There are lots of questions we can now seek to ask as we explore the art of the possible in a world where generative AI is available. These are interesting times.

The negative world

Reflecting back on the academic year and on the wider world, there has been a lot of press in relation to the things which aren't working as we wish they would, some of which have had an incalculable impact on those involved, including leading to loss of life. The cost of living crisis, the war in Ukraine, the fuel cost crisis and many other negative events have flooded our TV news and our social media. My concern here is that these negative events might blot out any of the good that may have been achieved. The availability bias might come into play as all that comes easily to mind is negative, leading to increasing issues with mental health. I worry that the news, including TV, social media, etc, not only reports events but also shapes future events, and if this is the case, and the current news is more often negative than positive, what might the net result be for the future? How do we achieve balance in the world, in our countries, our towns, our families and in our own lives when the prevailing message is that things are getting worse? It was, however, heartening to hear Ty Goddard speaking at the LGfL conference about the need for hope, faith and leadership. How do we lead people, including staff and students, through this period where things feel so bleak, and how do we seek to cultivate the hope and faith that may be so important going forward?

Conclusion

The summer period is an opportunity for many to reset and recharge. For myself and my team, it is an opportunity to get lots of the technology-related upgrades, updates and other development work done while things are a little quieter; I note we have summer holiday courses here throughout the summer, so things are never fully quiet. It is also a key milestone and an opportunity to reflect, to look forwards and to plan for the next academic year. As I think about the implications of AI, I think the end of the academic year also represents an opportunity for us to look inwards; looking back and looking forwards in time is useful, but sometimes we need to be introspective and look at ourselves as humans, as individuals, considering health, wellbeing, resilience, etc. We are more than a list of achievements, struggles, plans and targets.

For me the year has had its challenges, such as cyber risk and the generally negative context of the world, with these being an ongoing grind. It has also had its positives, in seeing the work of my team in supporting technology use being realised in classrooms, with teachers confidently using technology and with the technology being almost transparent in its use. And the year has also seen AI gain prominence, offering such potential and opportunity for the future. It has been another busy, challenging but also rewarding year.

So 2022/23 has ended.  Now we wait and prepare for 2023/24!

If… The ISA/ISC Conference

I was involved in the ISA/ISC Digital Conference a couple of weeks back and thought I would share some of my headline thoughts following the event.    I note it was a busy day, including actually being involved in leading a couple of the sessions, so the below represent the standout points for me based on the sessions I managed to see.

For me, there were three main themes which stood out:

  1. Challenge the status quo and overly simplistic language and imagery
  2. Awareness of reductive reasoning and binary arguments
  3. A focus on humanity and wellbeing in a world of increasing technology use.

Challenge the status quo.

There is a clear need to challenge the status quo and particularly some of the language and visuals used when discussing technology. Laura Knight did a good job of drawing attention to this, and I will admit I have been as guilty as many others of using some of the loose, simplistic language and visuals to which she referred. The picture of the smiling student wearing a VR headset, the use of The Matrix style graphics in relation to AI (note: definitely guilty of this!) and the discussion of the printing press as an analogy for the change now presented by AI, among others. All simplistic and not really representative of the situation we now find ourselves in due to AI. These simplistic images and equally simplistic language only go to strengthen false perceptions about technology. One of my favourites is the use of the dark hooded, and normally male, figure when discussing cyber crime; not exactly representative of the organised cyber crime we see today. We need to do better.

Reductive reasoning and binary arguments       

I have long argued against binary arguments, as the world is seldom simple enough to be modelled with a black and white, good and bad, right and wrong argument. The world is inherently messy and operates in many shades of grey, existing between the black and white of any two extreme positions. The issue here is that extreme positions are suited to the world of social media, where content has to be short and anything which stimulates a response is a good thing. The message and medium are entangled. This is something we need to be aware of, especially as it continues to encourage echo chambers and division rather than the critical discussion and reasoning we really need.

As to reductive reasoning, I get that we often want to simplify things for people. This might be through presenting a simple model or a "50 ways" or "5 ways", such as 50 prompts for use of AI or 5 cyber security basics. And again, I am guilty of this, having created a framework of basics in relation to cyber security. But again, the world is seldom that simple, and although the model or list of ways makes things easier for the reader or audience, it all too often oversimplifies the issue being discussed. Cyber security is more nuanced than 5 basic mitigation measures, and AI prompt craft is way more nuanced than 50 prompts. The challenge here is the balance between convenience and ease of use on one hand, and mirroring the complexity of the world we live in on the other. Too basic makes things seem too easy and therefore not representative of our world, while trying to model the real world fully will likely result in a model too complex for people to understand or to be useful. The balance lies somewhere between these points.

Wellbeing and Humanity

As technology plays an increasing part in our lives, it is likely that wellbeing will become more important. Being human, and the traits of being human, will become more important. We will need to consider how we support wellbeing in the face of the tsunami of digital content, both positive and negative; how we will manage the uncertainty of cyber risk; and how we will best use the AI solutions which could provide support and counselling but which might also be designed to influence, manipulate and deceive. The question of what it means to be human, and how human intelligences differ from synthetic intelligences, will become all the more important. We need to make use of AI to do the tasks which it is good at, while identifying what human intelligence is better at and suited for. Ethical use of AI also plays a part here, as we will want to know when we are dealing with an AI rather than a human, and whether we are happy for AI to play a part in key life decisions such as those related to health and finance. In the face of upsetting online content, are we happy with AI making the decisions to filter content on the grounds of wellbeing, where we know this might lead to bias in the content we are provided, or where such a solution might be controlled by an outside agency to their own ends?

Conclusion

The conference's title was "if", and this is rather apt as I feel we find ourselves at a junction. We had the catalyst of the pandemic, which propelled us forward in relation to technology use in schools, and we are now presented with AI, potentially an even more significant catalyst. What might the future hold? The key, in my view, is that we need to consider this; we need to consider how the next 5 or more years might look, and how we might shape this future to be the positive outcome we would like, or even need, it to be. How things look in future will be directly influenced by the decisions we take now. If we do X then this may lead to Y. We need to grab hold of technology and seek to shape its use, and through doing so shape our futures.

What does the future of cyber look like for schools?

The question of this post is not an easy one to answer. On one hand, if I show an optimistic viewpoint, I may be seen as downplaying the issues and the challenges which impact schools. On the other hand, if I am pessimistic, I may be seen as portraying a no-win scenario, a scenario so bleak that it doesn't really bear thinking about. So, I am going to do my best to thread the needle and strike a balance between unrealistic optimism and nihilistic pessimism.

Increasing technology use

Schools are only going to make use of more and more technology as we seek to do more with less. We seek efficiencies, we seek to solve a workload challenge, we seek to continually improve, and in all of this we will continue to make use of more and more technology. And as we use more technology, our technology footprint, our data footprint, the number of integrations and systems used, and our overall risk related to technology use will only increase. I find it difficult to see any other option. My risk when I was younger, using a standalone PC without an internet connection and a limited number of bits of software, was less than today, where I use multiple laptops and desktops, a mobile phone, a home assistant, a smart TV and other devices, complete with many more applications. The direction of travel is undeniable.

Increasing ambient cyber risk

Additionally, the ambient risk of cyber incidents will only continue to grow, whether the result of nation states, either directly or more commonly indirectly, whether due to the script kiddies in our schools or, much more likely, the result of cyber criminal efforts to generate profit. I have attended industry cyber conferences in consecutive years and this has been the message for a number of years, with this again likely to continue. Where there is increasing technology use and the potential for criminal gains, it should be unsurprising that criminals will seek to grow and develop their technology-focussed attacks, and therefore the general risk continues to grow. Regulation and legislation help little here, as technology operates across national borders, so laws and penalties for misuse just see criminal enterprises moving their efforts, resources or even themselves to nations which are more accepting of their activities, or which turn a blind eye. This is also paired with the increasing focus on individual privacy in technology solutions, even where this privacy is also applied to criminals such as those engaged in sharing child sexual abuse material. Sadly, communications technology is either secure or not; it can't be secure for some but not for others.

It’s all doom and gloom?

So, what are the positives in this story?   What balances out this negative picture?   It would be easy, at this point, to see only the negative, to feel hopeless in the face of ever-growing risk and ever-growing compliance requirements.   But we need to identify the benefits of the technology: the connectedness, the convenience, the benefits to creativity and problem solving, and so on.   Today's technology allows me to do far more than I was capable of with my standalone DX2 66MHz PC of years gone by.   I can communicate further and faster, can create content which is more detailed, complex and creative, can solve problems quicker, and much more.

Maybe this is the issue: when discussing cyber we focus too much on the negatives and take our eyes off the positives.   This can be very depressing indeed.   But technology supports, encourages and enables so much of what we can now do, and as with most things in life there is a balance to be struck.   Sadly, the counterbalance in this case is the cyber risk that is created.   Considering that balance, we could easily reduce the risk simply by using less technology, but is this something we are really going to do?

So, what can we “reasonably” do?

This is the crux of the matter in how we can manage the risk, assuming that using less technology isn't an option.   The answer, for me, is to do the basic cyber security tasks: patching, creating and testing backups, managing and limiting user permissions, managing and limiting the data you store and how long you retain it, and developing user awareness of the risks.   There may be a need to prioritise here, as schools may not have the resources to patch every server and every device, however rather than focussing on the ideal and on what we haven't done or cannot do, we need to focus on what we have done; each additional device or server patched is one less vulnerable device and therefore a net reduction in the overall risk.   Every step, no matter how small, is a positive step.   A rough sketch of what one such small, repeatable check might look like follows below.
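As an illustration only, here is a minimal Python sketch of the "test your backups" basic: it flags backup files that have not been refreshed recently.   The folder path and the seven-day threshold are hypothetical assumptions for the example, not a recommendation, and a real school would adapt this to however its backups are actually stored.

# Minimal sketch: flag backup files older than a given threshold.
# The directory and threshold below are illustrative assumptions only.
from pathlib import Path
from datetime import datetime, timedelta

BACKUP_DIR = Path("/srv/backups")      # hypothetical backup location
MAX_AGE = timedelta(days=7)            # hypothetical freshness threshold

def stale_backups(backup_dir: Path, max_age: timedelta) -> list[Path]:
    """Return backup files whose last-modified time exceeds max_age."""
    cutoff = datetime.now() - max_age
    return [
        f for f in backup_dir.iterdir()
        if f.is_file() and datetime.fromtimestamp(f.stat().st_mtime) < cutoff
    ]

if __name__ == "__main__":
    for f in stale_backups(BACKUP_DIR, MAX_AGE):
        print(f"WARNING: backup {f.name} has not been refreshed recently")

Even something this small, run regularly, turns "we think the backups are fine" into something you have actually checked.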

It is also important to acknowledge that, no matter what you do, you will still suffer a cyber incident at some point in the future, so you need to prepare.   Key to this can be running a desktop exercise to check for assumptions or issues in your response plan and to build familiarity with it.   This should not be an IT-only exercise, as a cyber incident is not an IT-only event; it impacts the whole school.   As such, stakeholders from across the school, including leadership, teaching and IT, should all be involved in the exercise and contributing their thoughts and ideas.   The desktop exercise is a useful tool and far less invasive than going around unplugging servers to see what people do!

Conclusion

So back to my initial question: what does the future of cyber look like for schools?   I think we will continue to do more and more with technology tools, being more creative, efficient and interconnected, but this will sadly be balanced with an increasing cyber risk.   But it is a balance, and I think that is my answer: the future of cyber for schools looks like maintaining a balance.   In terms of managing this balance, it will continue to be about doing all we reasonably can with the resources we have, continually reviewing our cyber security posture and approach, and taking the continual little steps to reduce, or at least manage, the risk.

It's not a bleak picture, nor an overly positive one, but I think it is a realistic and pragmatic one!

Note: I avoided the overly simplistic picture of a person in a hoodie as my cyber criminal in this post; as was pointed out to me recently, this stereotypical view and lazy analogy is seldom helpful, including in our discussions of cyber security or cyber crime!

AI and report writing

Workload is a growing concern for teachers in schools and therefore it is important that we seek solutions, with one of these potentially being the use of AI.   One area where AI might help is in writing the reports sent to parents.   These reports, often sent on a termly or even half-termly basis, can take significant time to write, even more so where a teacher has a large number of classes.   Now, before I go any further, let's be clear that what I am talking about is the use of AI to help teachers write reports, not the use of AI to write the reports entirely.   AI is good at some things, such as consistency, objectivity and basic writing, however it lacks the human side of things regarding relationships, perceived effort, motivation, etc, which a teacher brings to the mix.   As with many applications of AI, I think the best results come where AI is combined with a human, maximising the strengths of each.

Feeding AI data

The key for AI report content is the data you provide along with the prompts directed at the Large Language Model.   From a data point of view we might simply lift basic data already gathered and stored in the school's Management Information System (MIS).   This might include a score for effort, for homework, for behaviour, etc, plus a target and a current grade, where this information is already collected.   In my school we have experimented with this, however the results feel a little bland given the relatively limited number of permutations of the grades, plus the limited number of grade options.   Achieving more "personal" and individual reports requires more data, however we need to balance this against the resultant workload it might generate in terms of teachers having to gather and enter that data.

The approach used by www.teachmateai.com seems to provide a suggestion here, in that its report-generating solution asks teachers to input strengths and weaknesses.   Here the number of permutations jumps significantly, as the options entered are limited only by a teacher's imagination as to what constitutes a strength or a weakness.   Equally, the data entry overhead needn't be that significant.   I think back to teaching BTEC qualifications some years ago and charting the achievement of the various grade descriptors so the students could see their progress and the areas they still needed to work on.   A teacher could simply take this data, or other data regarding the themes and topics covered, and enter it as the strengths and weaknesses, along with a couple of more individual comments per student, and the resultant reports would appear reasonably personal to each student.
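To make the idea concrete, here is a minimal sketch of how MIS-style grades plus teacher-entered strengths and weaknesses might be assembled into a prompt for whichever Large Language Model a school has approved.   The field names, the wording of the prompt and the example data are my own illustrative assumptions, not how teachmateai.com or any particular product works.

# Minimal sketch: assemble a report-writing prompt from MIS-style data
# plus teacher-entered strengths and weaknesses. All names are illustrative.

def build_report_prompt(subject: str, effort: str, current_grade: str,
                        target_grade: str, strengths: list[str],
                        weaknesses: list[str]) -> str:
    """Combine basic MIS grades with teacher-entered strengths/weaknesses."""
    return (
        f"Write a short, positive but honest {subject} report for a student.\n"
        f"Effort: {effort}. Current grade: {current_grade}. "
        f"Target grade: {target_grade}.\n"
        f"Strengths: {', '.join(strengths)}.\n"
        f"Areas to develop: {', '.join(weaknesses)}.\n"
        "Refer to the student as STUDENT_NAME so the name can be added later."
    )

prompt = build_report_prompt(
    subject="History",
    effort="good",
    current_grade="B",
    target_grade="A",
    strengths=["thorough written work", "strong source analysis"],
    weaknesses=["reluctant to contribute to class discussion"],
)
print(prompt)

The point is simply that a handful of short, teacher-chosen phrases gives the model far more to work with than a fixed set of effort and attainment grades ever could.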

Data Protection

The DfE identified the risk associated with the creators of AI solutions hoovering up huge amounts of data, so data protection is something we need to consider in this process.   The DfE's own Generative AI in education guidance (March 2023), for example, states:

“Generative AI stores and learns from data inputted. To ensure privacy of individuals, personal and sensitive data should not be entered into generative AI tools. Any data entered should not be identifiable and should be considered released to the internet”

So how do we generate student reports without entering personal data?   I think the key here is ensuring the data provided isn't linked to an identifiable individual.   This aligns with GDPR, where personal data relates to an identifiable living individual.   So if we anonymise the data, say by removing the name of the student before providing the data to an AI, then we have reduced the risk given the actual student is not identifiable.   We can then add the correct name when we receive the response, the report, from the AI, with the full report then including the correct name.   This, for me, feels like the best approach; alternatively, it could be argued that providing a first name only, where first names are often repeated, may also mean that students are not individually identifiable and hence the risk is mitigated.   Either way, it is for schools to consider the risk and make their decision accordingly, making sure to document it.
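A minimal sketch of that anonymise-then-reinsert approach is below.   The generate_report function is a hypothetical placeholder standing in for whichever AI tool is used; the only point being illustrated is that the prompt sent out contains a token rather than the student's name, and the real name is only substituted back once the draft report has been returned.

# Minimal sketch of the anonymise-then-reinsert approach described above.
# generate_report() is a hypothetical stand-in for the school's chosen AI
# tool, not a real API call.

def generate_report(prompt: str) -> str:
    """Placeholder for a call to the school's chosen AI tool."""
    return "STUDENT_NAME demonstrates a solid performance in History..."

def write_report(student_name: str, prompt_without_name: str) -> str:
    # The prompt sent to the AI contains no identifiable personal data;
    # the placeholder token is swapped for the real name only on return.
    draft = generate_report(prompt_without_name)
    return draft.replace("STUDENT_NAME", student_name)

print(write_report("Sam", "Write a History report for STUDENT_NAME ..."))

The design choice here is that the name never leaves the school's own systems; everything the AI sees could, in the DfE's words, be "considered released to the internet" without identifying anyone.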

Example

I suppose the key question, where AI is helping with parental reports, is whether they read well enough to be acceptable to parents, so to that end I would like to provide an example based on data for a fictitious student:

Sam demonstrates a solid performance in his History class. In lessons, he displays reasonably good engagement, and consistently produces work of a satisfactory quality for his grade range. Sam is thorough in completing his tasks and has great ideas. However, he is reluctant to get involved in some activities, which limits the extent of his engagement.

Would this pass your school's standards?   And remember, it would be expected that the above would be read and adjusted by the relevant class teacher before going out.

Conclusion

For me, the use of AI to help with parental report writing seems like an easy win.   If it reduces the amount of time required by teachers to create reports, allowing them to focus on other things, while still providing an appropriate and informative report for parents, then this is a good thing.