AI, bias and education

Much has been written about the risks and challenges of artificial intelligence solutions, including the risk of bias. Far less has been written that specifically explores these risks in relation to the use of artificial intelligence within education. As such, I would like to share some thoughts on this, starting with the risk of bias and how it might impact education, teachers and students.

Bias in AI systems

AI systems are generally provided with training data which is then used by the system in generating its output. The quality of this training data therefore has a significant impact on the usefulness of the resulting AI solution. If we provide the system with biased training data, such as an unrepresentative amount of data relating to a specific event, group or other category, this will result in biased output. An easy example of this is the poor ability of AI-based facial recognition systems to identify people of colour. This likely relates to the fact that these solutions were created by largely western, white individuals who therefore used training data containing an unrepresentative number of western, white faces. The challenge, however, is that humans tend to be biased, albeit often subconsciously, so it is almost guaranteed that some bias will be intrinsic in the training data provided, and that this bias may be difficult for us to identify.
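The effect of unrepresentative training data can be sketched in a few lines of Python. This toy "model" is nothing like a real facial recognition system, and all the data is invented; it simply learns the most common label it saw, which is enough to show how a healthy-looking aggregate accuracy can hide complete failure for an underrepresented group:

```python
# Hypothetical training set: 95 samples from group A, only 5 from group B.
# The imbalance stands in for unrepresentative training data.
train = [("A", 1)] * 95 + [("B", 0)] * 5  # (group, label) pairs

# A naive "model" that always predicts the most common label it saw.
labels = [label for _, label in train]
majority = max(set(labels), key=labels.count)

def predict(_features):
    return majority

# Overall accuracy looks good, but every group-B sample is misclassified.
overall = sum(predict(g) == y for g, y in train) / len(train)
group_b = [(g, y) for g, y in train if g == "B"]
b_accuracy = sum(predict(g) == y for g, y in group_b) / len(group_b)

print(overall)     # 0.95 — looks fine in aggregate
print(b_accuracy)  # 0.0  — the underrepresented group is always wrong
```

A real model is far more sophisticated, but the underlying tendency is the same: the statistically dominant portion of the training data wins.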

So what might the impact be in relation to education?

Recommendation Systems

One of the areas where AI has been used for some time is in recommendation systems such as Google Search or the "you might like" suggestions on shopping sites like Amazon. We will likely see similar systems in education which will recommend subjects or topics for students to study, or may even recommend future study paths from secondary into FE and then onwards into HE. But what if these solutions include bias? I would suspect a gender bias would be the most likely to occur in the first instance, as the AI solution tries to mirror the real-world training data it will have been provided with, where the real world itself continues to be biased, advantaging males over females. This would also cause a significant problem in relation to how AI systems might respond to individuals who identify as non-binary, given there would be little training data relating to non-binary individuals. What suggestions would it provide when the vast majority of the data it holds relates only to males or females?
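A crude sketch makes the point. The enrolment figures below are entirely invented, and the "recommender" is just a frequency count rather than a real recommendation engine, but it shows how "students like you chose..." logic inherits whatever split exists in the historical data:

```python
from collections import Counter

# Invented historical enrolment data: (recorded gender, subject chosen).
history = (
    [("male", "physics")] * 80
    + [("male", "art")] * 20
    + [("female", "physics")] * 5
    + [("female", "art")] * 15
)

def recommend(gender):
    # Recommend the subject most often chosen by students with the same
    # recorded gender — a crude mirror of real-world patterns.
    choices = Counter(subj for g, subj in history if g == gender)
    return choices.most_common(1)[0][0]

print(recommend("male"))    # physics
print(recommend("female"))  # art — the historical split is reproduced
```

Note too that a student recorded outside those two categories has no history at all, so `recommend` fails outright with an `IndexError` — a blunt version of the non-binary data gap described above.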

Learning Systems

Expanding on recommendation systems, we will also have learning systems which gather data on students as they interact with learning material, providing real-time feedback and support, plus guiding students through learning materials specifically selected to meet the needs of the individual student. It will not be obvious how these systems arrive at their output; however, this output might include selecting content based on its difficulty or challenge level, or providing support and advice based on identified needs. What if there is bias in the training data which leads the AI to tend towards providing overly difficult or overly easy content to a specific subset of users? Note, this subset of users could be as simple as a gender, users in a specific location or an ethnicity; however, it is more likely to be a complex categorisation that we may not fully understand. The key issue here is that some students would be receiving more or less challenging learning content, or more or less support or advice, as a result of biased decision making within the artificial intelligence solution. How might this impact on students, their learning and their achievement?

Academic stagnation

Again, building on the above, we need to recognise that AI solutions are probability based. They use the training data provided and then use probability-based decision making to identify their outputs and actions. This use of probability means that outputs and decisions tend towards the average and the statistically most likely. In terms of education, this might mean that AI solutions will equally tend to reinforce the average, so students in a school where previous students have historically achieved below the national average may be supported by AI solutions to achieve similar results, the historical average for the school, even where the ability of the individual student, or even of a given year group, is above this national average. Looked at broadly across all education the world over, AI used in teaching and learning may tend to focus on a global average, which may disadvantage those who are capable of more than this. It may lead towards more equitable access to education, but it may also lead to stagnation as all educational efforts tend towards an average.
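The regression-to-the-average worry can be shown in miniature. The grades below are made up, and no real system is this simple-minded, but a purely history-driven model behaves exactly like this: it predicts the historical mean regardless of who is in front of it:

```python
# Invented historical grades for a school (national average assumed ~60).
history = [48, 52, 50, 47, 53]
school_mean = sum(history) / len(history)

def predict_target(student_ability):
    # A purely history-driven model ignores the individual and
    # regresses to the school's historical mean.
    return school_mean

# Even a strong cohort is targeted at the historical average.
strong_cohort = [70, 75, 68]
targets = [predict_target(a) for a in strong_cohort]
print(targets)  # [50.0, 50.0, 50.0] — everyone pulled toward the average
```

Real adaptive systems do use individual data, of course; the risk described above is that historical averages still carry disproportionate weight in the probability calculation.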

Divergence

We touched briefly on this earlier, but it also relates to stagnation and a tendency towards the average. AI solutions are provided with training data and make decisions based on this, so there is a tendency towards an average; but what if students diverge from this average? The lack of data specifically relating to these individuals will mean the AI will tend towards the probable, providing advice or directing students according to how the "average" student might perform, which may be inappropriate for these divergent students. Consider an AI-based learning platform selecting content and providing advice based on the "average" student, but where the student using the system is neurodivergent. Is the content and advice likely to be appropriate for these students? What might the impact be on the student, on their learning and on their mental health, when presented with inappropriate learning pathways, support and advice?

Reinforcing Bias

Where AI solutions are generating the learning content themselves, based on individual students' needs, we also need to be conscious of how this might result in the reinforcement of stereotypes and bias. What if the AI solution has to create an image of a criminal, a nurse, a childminder or a lawyer? Is there the potential for the images the AI presents to reinforce gender, ethnic or other biases which already exist, and which are therefore highly likely to exist in the training data?

Conclusion

It is clearly right to consider the above risks. We need to be conscious of them so that we can try to mitigate them by carefully reviewing the training data being used, and by ongoing review of AI performance. We also need to consider whether, in some circumstances, it may be necessary to have separate AI solutions, with separate training data, for use in certain situations. Although these risks need to be considered, we also need to remember that in the absence of AI solutions in education, it has been humans who have made these decisions. And humans aren't devoid of bias; we just happen to be largely unconscious of it. It is easier to identify bias, or other incorrect or irrational behaviours, in others, including in AI systems, than it is to identify it in ourselves. We therefore need to be careful to avoid holding AI up to standards that we ourselves have never been able to meet.

I wonder whether in seeking to address bias in AI solutions the first thing we may need to do is step back and acknowledge the extent of our own human bias both individually and collectively.

Defining AI

This week I want to continue the discussion of Artificial Intelligence, posing the difficult question of what AI, by definition, actually is.  

The artificial element of artificial intelligence is reasonably clear, in that the intelligence is artificially rather than biologically created. Programmers develop software code, thereby creating the artificial intelligence solution. AI doesn't arise out of biological evolutionary processes, although it might be possible to suggest that the ongoing development of AI solutions is itself evolutionary.

But what about “intelligence”?  

What is intelligence? There are differing definitions. A Google search yields a definition from Oxford Languages which refers to "the ability to acquire and apply knowledge and skills". It would appear clear that an AI solution can acquire knowledge, in the form of the data it ingests and in the statistical processing which allows it to infer new knowledge. We have also seen robotic AI solutions which have learned physical skills, like the ability to walk. So, from this definition, it appears that these solutions may show intelligence. That said, does the AI comprehend the meaning of the text it outputs in response to a prompt? Does it feel a sense of success, and are feelings and emotions a part of intelligence? And does it "acquire" this knowledge, or is it simply fed it by its designers and users? Does it choose what to acquire and what outcomes it wants, or does it only do as it is programmed?

Evolutionary intelligence?

Another definition of intelligence, with a more evolutionary slant, states that "Intelligence can be defined as the ability to solve complex problems or make decisions with outcomes benefiting the actor". This links to Darwinism and the survival of the fittest in the benefit towards the actor. It may be that current AI solutions can solve complex problems, such as the identification of patterns and anomalies in huge data sets; however, it is also possible to evidence where AI solutions fail at simple tasks we humans find easy, such as object recognition and spatial awareness. As to the actions of the AI benefiting the actor, if we assume the actor is the AI itself, I am not sure we can evidence this. How does the AI benefit from completing the task it is set? I suppose we could argue that the AI is completing a task for a user and that the user is the actor receiving benefit, or we could suggest that by benefiting the user, the AI as actor is more likely to continue to see use and development, which could be considered an act of self-preservation. But is the AI conscious of benefit? Does it even need to be conscious of benefit? Is it conscious of a need for self-preservation? But then again, are we humans conscious of our own need for self-preservation, or of the personal gains which may motivate us towards seemingly selfless acts?

Mimicry

The issue here, for me, is that I am not sure we are clear on what we mean by artificial intelligence, in that the term intelligence is unclear and may mean different things to different people. I suspect the term AI is adopted because AI solutions are able to mimic average human behaviours, such as responding to an email based on its content, analysing data and suggesting findings, or creating a piece of artwork based on the work of others. We simply substitute "intelligence" for "mimics some human behaviours". In each case the AI solution may be quicker than we humans are, or may produce better outputs, based on the averaging of all the training data the AI has been exposed to. In each case, and due to the training data, the outputs may be subject to inaccuracy and bias; and maybe this supports the use of the term intelligence, in that the inaccuracy and bias we display as humans is so clearly mimicked by the AI we create.

A task focus

Looking at the definition of "artificial intelligence" in its entirety, Oxford Reference refers to "the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages". This definition, based on tasks normally requiring humans, seems to fit, given that without AI it falls to humans to respond to emails, create artwork, and so on. So maybe AI is simply a system which can mimic humans in its ability to complete a given task and produce a favourable output.

Conclusion

I think it is important we acknowledge the vagueness of AI as a term. But then again, artificial intelligence is simply one subset of the different types of intelligence, alongside biologically developed intelligence. And if we struggle to create a consistently adopted definition of intelligence, it is of little surprise that our definition of AI is equally vague. But maybe this is all semantics, and the focus should simply be on developing solutions which can carry out tasks previously limited to humans, and by extension human intelligence.

Considering human intelligence one last time, we need to remember that a child may show intelligence in speaking its first words or learning to stand, while an adult explaining chaos theory or performing an orchestral piece will also be showing intelligence. That's a fairly large range of intelligences. And it is likely that with AI the range of intelligences will be equally broad, with our current AI solutions, including generative AI, sitting near the infant end of the continuum.

Before finishing, I also need to raise the challenge in relation to mimicry of human efforts to complete tasks, where AI may mimic our behaviours all too well. It shows bias, a lot like humans do. It also states with confidence facts that are either untrue or have limited supporting evidence, much like humans do. It is subject to exterior influence through its inputs and training data, again much like humans, and it creates "original" works based on the works of others, but without a clear ability to reference all that it has learned and based its outputs on, again exactly like we humans. This all represents a challenge where I see people trying to hold AI solutions to a standard that we humans would find difficult or even impossible to achieve.

For now, I think we need to accept the vague definition of AI, and for me this is a system which can complete tasks which would normally require some form of human intelligence, where inherently this system also tends to mimic some of the drawbacks of the human intelligence it seeks to copy. It's not perfect, but it will do for now.

References:

https://www.google.com/search?q=definition+intelligence

Artificial intelligence – Oxford Reference

Q&A – What Is Intelligence? (hopkinsmedicine.org)

What does the future for schools and AI look like (The risks and challenges)?

My last post looked at the future of schools now that we have widespread use of generative AI, taking a generally positive viewpoint. This post reverses that, looking at the challenges and risks and taking a more pessimistic stance on AI in education.

Personalised Learning, for those with access

AI does have great potential in schools and in education; however, I feel it will highlight the already existing digital divide. Not all students have access to a device with which to use AI tools. Not all schools allow generative AI such as ChatGPT to be accessed, plus schools have varying degrees of IT infrastructure and support. Additionally, some schools will be more forward looking and will already be talking to staff and students about AI and generative AI, while others have yet to broach or even consider the subject. As such, the ability to access, to understand and to use AI positively will be varied rather than uniform for all individuals. AI might therefore serve to widen the digital technology gap which already exists, with those with access to technology, infrastructure and support benefiting from the personalised learning of AI, while those without languish further and further behind.

Lacking diversity

We also need to consider how AI works and the training data it has been provided with. Much AI development has happened in the western world, where technology staff are still more often English speaking, male and white. This creates a bias in the resulting AI solutions, a bias which has been widely reported. Fundamentally, our current generative AI uses its training data to generate the output it creates, with this training data largely coming from the internet itself and with the process based on statistical modelling. This results in AI outputs which tend towards an average or statistically probable output based on the available training inputs. How does this then impact on those who may stray from this statistically average person or response? A perfect example of this lies in an email I saw shared on social media (see on twitter here), where an individual responded indicating they would prefer a human rather than an AI-generated response. It turns out the email was human generated, and the original sender proceeded to indicate "I'm just Autistic".

What might the broader impact of AI, trained to output based on an average person or response, be on those who are neurodiverse and therefore differ from the average or norm? I will admit this issue isn't new and can be seen in the Google search engine's AI-based responses. It tends towards the common answers, those that in all probability are most likely to be favoured by users in general. The less common or more divergent answers, opinions and views are buried further down the search responses, likely on later pages where few people ever go. But remember, your search engine provides a list of links to look at, whereas generative AI tends to provide but a single response to consider in the first instance. So the issue may be more evident where generative AI is in use.

A collapsing model / Homogeneity

Another challenge in relation to AI is its need for more and more training data in order to improve. The issue is that generative AI will become increasingly responsible for the content being published online, which in turn is commonly used for the training of AI solutions. We will have a situation where AI solutions generate content, which is then ingested as training material by AI, leading to more content. And as AI starts to learn from itself, and given generative AI's tendency to move towards an average response, the AI models may weaken and fail. It's a bit like genetics, where a limited gene pool leads to limited diversity and a lack of ability to adapt to environmental change. This in turn could only deepen the issue of AI solutions lacking the diversity needed to support a diverse user base.
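This feedback loop can be simulated crudely. The "model" below just learns a mean and spread and regenerates its own training data, with a small invented bias towards the average standing in for a generative model's preference for probable outputs; the numbers are illustrative, not a claim about any real system:

```python
import random
import statistics

random.seed(1)

# Start from a "human" corpus with genuine diversity (spread of 1.0).
data = [random.gauss(0, 1) for _ in range(1000)]

spreads = []
for generation in range(10):
    # "Train" on the current corpus (here: just fit mean and stdev)...
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    # ...then generate the next corpus from the model, slightly biased
    # toward the average (the 0.9 factor is an assumed bias), and
    # train the next generation on that instead.
    data = [random.gauss(mu, sigma * 0.9) for _ in range(1000)]
    spreads.append(statistics.stdev(data))

print(round(spreads[0], 2), round(spreads[-1], 2))
# the spread (diversity) shrinks generation after generation
```

Each generation trained on the previous generation's output loses a little diversity, and the losses compound, which is the gene-pool analogy in numbers.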

Blackbox AI

The black box nature of AI solutions is yet another issue which could be considered a risk. AI solutions are generally black box solutions, so we cannot see how they arrive at their final output. This means that we may be blind to bias, inaccuracies and other issues which exist within an AI solution as a result of the training data it has had access to. AI platforms may be constantly reviewing student activity, and from this may categorise students in ways we don't understand, then responding with learning content influenced by bias intrinsic to the training data. From the point of view of schools, where a student's learning, future learning and future life are at stake, it represents a concern and a risk if we are unable to understand why a particular student was provided with a certain learning path over other options. What if the AI, through bias, identifies a student as lower ability and therefore proceeds to offer low-challenge content?

Convenience and laziness

As a benefit, AI solutions like ChatGPT can make things easier, as we can easily get a generative AI solution to draft an email or a document. It is simply more convenient, faster and requires less effort on our part, but the risk is that we become lazy through this. There is already a bit of a panic about students simply using generative AI to create their coursework and homework for them. We may also become overly reliant on these solutions for our answers and less able to think for ourselves; we may also become less likely to stop and question the responses we receive. And guess what, this isn't new either. We already see this in social media, where I recently saw a post based on an article which referenced a piece of research. On social media some individuals jumped on the content of the article and what it said about the findings of the research, but upon further inspection the research made no such findings. Convenience in accepting the article's summary of the findings had overtaken proper checking of the source material to confirm the summary was correct. And with AI solutions becoming more common, and even supporting the creation of video, we likely need to be more questioning now than we have ever been in the past. But maybe there is an opportunity here, if the convenience frees up time which is then used to allow us to be more questioning and critical; I suspect this is me being hopeful.

Data Protection

The DfE guidance states that schools should not be providing personal data to AI solutions. This is due to the risk in relation to AI and data protection. If the AI learns from its inputs, then it might be possible to provide prompts which surface these inputs; so if we entered student data such as exam results, it might be possible for someone else to write a prompt which would result in the AI providing them with this data, even if they have no legal right to access it. There is also a similar risk, if the prompts and data we provide form part of the overall training data, that a data breach at the AI vendor would result in the data being leaked to the dark web or otherwise used by criminals.

We also need to consider the long-term safety of our students. If an AI vendor has large amounts of data on students, is there potential for the vendor to share or use the data in a way that is not in line with our expectations as educational establishments? What if the data is sold to an advertising company to help better target students in relation to marketing campaigns, even providing individualised targeted adverts based on data gained as they worked through AI learning content? What if the data is used by governments to target members of society who don't fit their blueprint for the ideal citizen? I am thinking about Orwell's 1984 here, which may be a bit of a stretch, but if we are providing student data to AI solutions, or putting students in front of AI solutions we expect them to use, how are we making sure their data is protected?

Conclusion

I have tried to avoid the “beware AI will take over the world” and/or “AI will kill us all” message.   I have tried to focus on education and some of the things we need to consider in our schools.   The reality is that AI is here today and will only get better, that it has many potential advantages however there are also risks and concerns we need to be conscious of.   We cannot however be so worried that we sit, discuss and legislate, as by the time we have done this, AI solutions will have already moved on.  

For me, we need to engage with AI solutions in schools, seek to shape the positive use, while being aware and conscious of the risks and challenges that exist.

What does the future for schools and AI look like?

I have previously written about the future and cyber security for schools, so I thought it might be equally useful to consider Artificial Intelligence (AI) and schools and what the future might look like given we now have all of these generative AI tools available at our fingertips and the fingertips of our students.

Personalised Learning (for students and staff)

This, for me, is the key advantage: an AI solution, gathering data on a student's every interaction with online learning content, can then provide individualised feedback to that student. The current classroom model of one teacher and a number of students means that each student only gets a fraction of the available teacher time, no matter what strategies are employed. But with an AI solution, each student would get the full attention of their own online AI-based tutor. Khan Academy's Khanmigo gives a taste of what this might look like. Now, the likelihood is this will first impact on the core subjects such as Maths, English and Science, the subjects for which there are already a large number of learning platforms with inbuilt content available, albeit without the AI personal tutor element. After this I suspect we will see its growth into the other subject areas, although at a slower rate.

And why should this personalised experience be limited to students? Couldn't it also provide personalised professional development content, curate research materials on pedagogy based on your interests, link you up online with colleagues in other schools for support and ideas, and so on, providing regularly updated recommendations to help your professional development journey?

A personalised learning experience may also free up some time within the curriculum, plus free up teachers, to focus on the things which AI is not able to address. This might therefore allow for more discussion of the impact of technology on our lives, looking at digital citizenship. It might provide time to consider and discuss human characteristics and issues such as wellbeing, mental health, equality and diversity, and resilience, to name but a few.

Personalised AI-based learning will also enable real-time feedback to parents, giving a much more detailed and regularly updated report on a student's progress, their strengths and their areas for development. In turn this will help with teacher workload, as it will reduce the need for the regular writing of reports to be sent home, thereby freeing teachers up to focus on teaching and learning.

Personalised Learning for students with special educational needs

Linked to the above is the potential for students to receive additional AI-based support where they have special educational needs. By looking at the data associated with students' interactions with a learning platform, an AI solution might be able to highlight possible learning needs at an earlier stage than a teacher could; this is simply due to the AI's ability to focus on each individual student, plus the wide variety of data it would have access to. Upon identification, an AI platform might then be able to provide appropriate advice, guidance and additional support in line with the student's needs. And this would be available to every student. This is one significant advantage of AI within education: the ability to scale up personalised 1:1 learning content and support.

Creativity

We have long talked about creativity; I remember it being one of the 4 C's, although there are now more than 4 C's. The issue has, in my experience, been the difficulty in convincing students of their creativity. They might have ideas but find difficulty translating these into the reality of a piece of written text, an image or graphic, an animation, video or some other output. Through AI tools, the power of creativity is now easily in every student's hands. Not sure what to include in a script? ChatGPT can help. Need an image but not that good with artwork? Then Midjourney can help. And the same goes for video content, audio, music, programming code and many other areas. Through the use of AI tools, every student can exercise the creativity of their imagination. As I heard Dan Fitzpatrick describe it, AI will "democratise creativity".

Questioning

I think one area which AI will help us build in relation to education is the art of questioning. Generative AI, by its very nature, requires questioning or prompting. AI outputs allow for the creation of realistic images, audio or video, which therefore requires us to more often question what we read, see or hear, especially when accessed via social media and the internet. I note that our conventional media is equally guilty of simplistic reporting and presenting a biased picture; it is just that social media does it with a greater volume of content, 24/7, unlike the scheduled national news broadcasts and daily newspaper prints. Questioning, being inquisitive and constructively critical, debating, and navigating complex and confusing problems will become increasingly important, so schools will need to spend more time working on this with their students.

What I haven't mentioned

There are a couple of AI benefits which I haven't mentioned above, largely because they are already here and have been for some time; however, in the interest of completeness I will mention them briefly. Firstly, tools to help students where English is not their primary language. There are already tools to help with translation of text, such as Google Translate, and also with translation of spoken content, such as displaying subtitles in a student's native language as a teacher works through a PowerPoint slide deck.

Another area is grading and marking. We have long had tools to allow the automatic marking of multiple-choice tests; however, increasingly we have seen the ability to mark written responses against marking criteria or a rubric. This will only continue, with further opportunities for AI-based automation being identified on an ongoing basis to help teachers and students.
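As a rough illustration of the idea, and nothing like how real AI marking tools actually work, rubric-based marking can be sketched with simple keyword matching; the rubric and answers below are invented:

```python
# A toy rubric marker — keyword matching only, far cruder than the AI
# marking tools described above, but it shows the shape of the task.
rubric = {
    "describes photosynthesis": ["light", "carbon dioxide", "glucose"],
    "names the organelle": ["chloroplast"],
}

def mark(answer):
    # Award one mark per rubric criterion with at least one keyword hit.
    answer = answer.lower()
    return sum(
        1 for keywords in rubric.values()
        if any(k in answer for k in keywords)
    )

print(mark("Plants use light in the chloroplast to make glucose"))  # 2
print(mark("I am not sure"))                                        # 0
```

Modern tools replace the keyword matching with a language model's judgement, but the structure, criteria in and a per-criterion score out, is the same.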

Conclusion

There are likely many more ways AI will impact on education, especially if you start to look beyond the 5 to 10 year mark and consider more general AI as opposed to the narrower AI and generative AI we have now. At that point I don't feel confident enough to even propose what education might look like, as it may bear little resemblance to what we see as education today. That said, education also tends to be slow to change, and any significant change would require everyone to get on board: students, parents, teachers, schools, government, inspection regimes, exam boards, employers and many more. As such, I suspect there may be an amount of "kicking and screaming" as educational change comes to pass. With two such significant catalysts for change in the pandemic and now the sudden ease of access to generative AI, I feel that change is all but inevitable.

There are some pitfalls and challenges in relation to AI and education; however, I will pick these up in my next post. For now, though, let's conclude on the fact that AI is here now and will only get better. We, those in schools, need to shape these solutions and their use while campaigning for appropriate guidance, frameworks and regulation of what, how and when AI should be used in schools. We cannot, however, wait for the regulation to occur, as by the time it does the technology will already have moved on.

AI and report writing

Workload is a growing concern for teachers in schools, and therefore it is important that we seek solutions, with one of these potentially being the use of AI. One area where AI might help is in the writing of the reports sent to parents. These reports, which are often sent on a termly or even half-termly basis, can take significant time to write, and even more so where a teacher has a large number of classes. Now, before I go any further, let's be clear that what I am talking about is the use of AI to help teachers write the reports, and not the use of AI to fully write the reports. AI is good at some things, such as consistency, objectivity and basic writing; however, it lacks the humanistic side of things regarding relationships, perceived effort, motivation and so on, which a teacher brings to the mix. As with many applications of AI, I think the best results come from AI combined with a human, maximising the strengths of each.

Feeding AI data

The key to AI report content is the data you provide, along with the prompts directed at the Large Language Model. From a data point of view we might simply seek to lift basic data already gathered and stored in the school's Management Information System (MIS). This might include a score for effort, for homework, for behaviour, etc., plus a target and a current grade, where this information is already gathered. In my school we have experimented with this; however, the results feel a little bland given the relatively limited number of permutations of the grades, plus the limited number of grade options. To achieve more "personal" and individual reports requires more data; however, we need to balance this against the resultant workload it might generate in terms of teachers having to gather and enter this data.

The approach used by www.teachmateai.com suggests a way forward here, in that its report-generating solution asks teachers to input strengths and weaknesses. Here the number of permutations jumps significantly, as the options entered are limited only by a teacher's imagination as to what constitutes a strength or a weakness. Equally, the data entry overhead needn't be that significant. I think back to teaching BTEC qualifications some years ago and charting the achievement of the various grade descriptors so the students could see their progress and the areas they still needed to work on. A teacher could simply take this data, or other data regarding the themes and topics covered, and enter it as the strengths and weaknesses, along with a couple of more individual comments per student, and the resultant reports would appear reasonably personal to each student.
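To make the strengths-and-weaknesses approach concrete, here is a minimal sketch of how such a prompt might be assembled before being sent to an LLM. The function name, field names and prompt wording are my own illustrative choices, not how TeachMate AI or any other tool actually works internally:

```python
# Sketch: turn teacher-entered strengths/weaknesses into a report-writing
# prompt. The wording and structure here are illustrative assumptions only.

def build_report_prompt(subject, strengths, weaknesses, extra_comment=""):
    """Assemble a prompt asking an LLM to draft a short parental report."""
    prompt = (
        f"Write a short, positive parental report for a student's {subject} class.\n"
        f"Strengths: {'; '.join(strengths)}.\n"
        f"Areas to develop: {'; '.join(weaknesses)}.\n"
    )
    if extra_comment:
        prompt += f"Teacher note: {extra_comment}\n"
    prompt += "Keep it to roughly 60 words and do not invent details."
    return prompt

prompt = build_report_prompt(
    "History",
    strengths=["thorough written work", "strong source analysis"],
    weaknesses=["reluctant to join class discussion"],
    extra_comment="worked well on the local history project",
)
print(prompt)
```

The point is that even two or three free-text strengths per student multiply the possible report permutations enormously compared with a handful of fixed grade codes, while the data entry itself remains a few short phrases.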

Data Protection

The DfE has identified the risk of AI vendors ingesting huge amounts of data, so data protection is something we need to consider in this process. The DfE's own Generative AI in education (March 2023) guidance, for example, states:

“Generative AI stores and learns from data inputted. To ensure privacy of individuals, personal and sensitive data should not be entered into generative AI tools. Any data entered should not be identifiable and should be considered released to the internet”

So how do we generate student reports without entering personal data? I think the key here is ensuring the data provided isn't linked to an identifiable individual. This aligns with GDPR, where personal data relates to an identifiable living individual. So if we anonymise the data, say by removing the name of the student before providing it to an AI, then we have reduced the risk, given the actual student is not identifiable. We can then add the correct name when we receive the response, the report, from the AI, with the full report then including the correct name. This, for me, feels like the best approach. Alternatively, it could be argued that providing a first name only, where first names are often repeated, may also mean that students are not individually identifiable and hence any risk is mitigated. Either way, it is for schools to consider the risk and make their decision accordingly, making sure to document this.
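The strip-name-then-reinsert workflow described above can be sketched in a few lines. This is a simplified illustration under my own assumptions: `call_llm` is a stand-in for whatever AI service a school uses, and a real implementation would also need to handle pronouns, nicknames and other indirect identifiers:

```python
# Sketch of the anonymise-then-reidentify approach: replace the student's
# name with a placeholder before text leaves the school, restore it after.

PLACEHOLDER = "[STUDENT]"

def anonymise(text, student_name):
    """Replace the student's name with a neutral placeholder."""
    return text.replace(student_name, PLACEHOLDER)

def reidentify(text, student_name):
    """Put the real name back into the AI's response."""
    return text.replace(PLACEHOLDER, student_name)

def generate_report(notes, student_name, call_llm):
    safe_input = anonymise(notes, student_name)   # no name sent externally
    draft = call_llm(safe_input)
    return reidentify(draft, student_name)

# Dummy "LLM" for demonstration, returning a fixed placeholder sentence.
report = generate_report(
    "Sam engages well in lessons.",
    "Sam",
    call_llm=lambda s: f"{PLACEHOLDER} demonstrates solid engagement.",
)
print(report)  # → Sam demonstrates solid engagement.
```

Note that only the placeholder version ever reaches the external service, which is the essence of the risk reduction; the mapping between placeholder and real name never leaves the school's systems.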

Example

I suppose the key question where AI is helping with parental reports is: do they read well enough to be acceptable to parents? To that end, I would like to provide an example based on data for a fictitious student:

Sam demonstrates a solid performance in his History class. In lessons, he displays reasonably good engagement, and consistently produces work of a satisfactory quality for his grade range. Sam is thorough in completing his tasks and has great ideas. However, he is reluctant to get involved in some activities, which limits the extent of his engagement.

Would this pass your school's standards? And remember, it would be expected that the above would be read and adjusted by the relevant class teacher before going out.

Conclusion

For me, the use of AI to help with parental report writing seems like an easy win. If it reduces the amount of time required by teachers to create reports, allowing them to focus on other things, while still providing an appropriate and informative report for parents, then this is a good thing.

AI and AI and AI

Is AI a danger to education is a question I have recently explored, hopefully presenting a balanced viewpoint. This question, however, has an issue in that it asks about AI as if AI were a simple item such as a hammer or a screwdriver. The term AI covers a broad range of solutions, and as soon as you look at the breadth of solutions the question becomes difficult to answer, and in need of more explanation and context. In effect, the question is akin to asking if vehicles are bad for the environment, without defining vehicles; is a bicycle, for example, bad for the environment?

[Narrow] AI

Although some may associate recent discussions of AI with ChatGPT and Bard, AI solutions have been around for a while, with most of us using some of them regularly. As I write this, my word processor highlights spelling and grammar errors, as well as suggesting corrections. The other day when using Amazon, I browsed through the list of "recommended for you" items which the platform had identified based on my browsing and previous purchases. I have used Google this morning to search for some content, plus used Google Maps to identify the likely travel time for an event I am attending in the week ahead. Also, when I sat down at my computer this morning, I used biometrics to sign in, plus functionality in MS Teams to blur my background during a couple of calls. These are all examples of AI. Are we worried about these uses? No, not really, as we have been using them for a while now and they are part of normal life. I do note, however, that as with most things there are some risks and drawbacks, but I will leave those for a possible future post.

The examples I give above are all very narrow-focus AI solutions. The AI has been designed for a very specific purpose within a very narrow domain, such as correcting spelling and grammar, identifying probable travel time, or identifying the subject of a Teams call and blurring everything which isn't the subject. The benefits are therefore narrow to the specific purpose of the AI, as are the drawbacks and risks. But it is still AI.

[Generative] AI

Large language model development equally isn't new. We might consider the ELIZA chatbot the earliest example, dating back to 1966, or if not, Watson, dating to 2011. Either way, large language models have been around in one form or another for some time, however ChatGPT, in my view, was a major step forward, both in its capabilities and in being freely available for use. The key difference between narrow AI and generative AI is that generative AI can be used for more general purposes. You could use ChatGPT to produce a summary of a piece of text, to translate a piece of text, to create some webpage HTML, to generate a marketing campaign, and for many other purposes across different domains, with the only common factor being that it produces text output from text-based prompts. DALL-E and Midjourney do the same, taking text prompts but producing images, with similar solutions available for audio, video, programming code and much more.

Generative AI as it is now, however, doesn't understand the outputs it produces. It doesn't understand the context of what it produces, and when it doesn't know the answer it may simply make one up or present incorrect information. It has its drawbacks and it is still relatively narrow, limited to taking text-based prompts and responding based on the data it has been trained with. It may be considered more "intelligent" than the narrow-focus AI solutions mentioned above, but it is way short of human-level intelligence, although it will outperform human intelligence in some areas. It is more akin to dog-like intelligence in its limited ability to perform simple repeated actions on request: taking a prompt, wading through the materials it has been trained on, and providing an output, be this text, an image, a video, code, etc.

A [General] I

So far, we have looked at AI as it exists now, in narrow-focus AI and generative AI, however in the future we will likely have AI solutions which are closer to human intelligence and can be used more generally across domains and purposes. This conjures up images of Commander Data from Star Trek, R2-D2 from Star Wars, HAL from 2001 and the Terminator. In each case the AI solutions are portrayed as able to "think" to some extent, making their own decisions and controlling their own actions. The imagery alone highlights the perceived challenges in relation to Artificial General Intelligence (AGI) and the tendency to view it as good or potentially evil. How far into the future we will need to look for AGI is unclear, with some thinking the accelerating pace of AI means it is sooner than we would like, while others believe it is further off. My sense is that AGI is still some time away: we don't truly understand how our own human intelligence works, and therefore, if we assume AI solutions are largely modelled on us as humans, it is unlikely we can create an intelligence to match our own general intelligence. Others posit that as we create more complex AI solutions, these solutions will help improve AI, which would then allow it to surpass human capabilities and even create superintelligent AI solutions. Cue the Terminator and Skynet. Now again, I suspect when we get to the generation of AGI, things will not be as simple as they seem, with all AGIs not being equal. I suspect the "general" may see some AGIs designed to operate generally within a given domain, such as health and medicine AGIs, or education AGIs, etc.

Conclusion

Artificial intelligence covers a wide range of solutions, with narrow AI, generative AI and AGI being only three broad categories where others exist. It is therefore difficult to discuss AI in its totality, certainly not with much certainty. Maybe we need to be a little more careful in our discussions in defining the types of AI we are referring to, and this goes for my own writing as well, where I have equally been discussing AI in its most general form.

Despite this, my viewpoint remains the same: AI solutions are here to stay, and as discussed earlier have actually been around for quite a while. We need to accept this and seek to make the best of the situation, considering carefully how and when to use AI, including generative AI, as well as considering the risks and drawbacks. As to AGI, and the eventual takeover of the world by our AI overlords, I suspect human intelligence will doom the world before this happens. I also suspect AI development for the foreseeable future will see AI solutions continue to be narrower and short of the near-human intelligence of AGI. As such, we definitely need to consider the implications, risks and dangers of using such AI solutions, but we also need to consider the positive potential.

AI: A threat to the education status quo?

My original blog post on AI was meant to be a single post, however the more I scribbled thoughts down the more I realised there was to consider. And so, this is the fourth of my series of posts on AI. Having looked at whether AI is a threat to education in post one, at some benefits of AI in post two, and then some of the risks and challenges around AI in post three, this post will continue to explore some of the ways in which AI might be considered a threat to the formal education system as it currently exists across the world.

What are we assessing?

In the last post I started considering how AI challenges the current education system, looking at fears regarding the use of AI-based solutions, like ChatGPT, by students to "cheat". This concept of cheating is based on the current education system, where students submit work to teachers, where the work is their own, to be used by teachers to assess and confirm understanding. So the use of AI to create work which the student presents as their own seems like cheating and dishonesty. But what if the student only uses the AI as a starting point, modifying and refining the content before submission; is this OK? And what degree of refining is enough for the work to be considered the student's own, and what degree is not enough and therefore represents cheating? When is AI a tool, fairly used by a student in proving their understanding and learning?

I think it is at this point we need to ask why we are asking students to complete coursework. For me it is a way to check their understanding and learning of taught content. It is one method, but not the only one, although it is the method education has generally accepted as the current proxy for student understanding, whether it be GCSE coursework, an A-Level or a degree dissertation. The uncomfortable truth is that this easy and scalable method of assessment isn't as appropriate in an age of AI. I will admit, however, that I am not sure what the alternative is, where such an alternative needs to be fair and also scalable to students the world over. When thinking of its scalability I always think: what if life was found on Mars and we had to scale our GCSE coursework and exams to encompass these new lifeforms? It would simply be a case of translating the requirements, sticking them on a rocket and sending them to Mars. As I said, the current setup is very scalable.

And then there is the question of, if my students can use the tools available to them, including AI, to reach an acceptable assessable outcome, is this not good enough?   If the assessments we create make it easy for a student to achieve without having any understanding of the topic or domain they are being assessed on, simply through the use of AI, then maybe we need to rethink the assessments we are setting students in the first place.

Social Contact

Social contact is another area where there are various concerns around AI. It may be that in using AI for our studies, our work and even, through virtual friends, for companionship, we find ourselves interacting with human beings less and less, where social contact is a key part of what it means to be human. For education, if students find themselves learning through personalised AI, in their own time, what is the point of school? And if there is no school, with students learning where and when they like, where will they learn social skills and the skills needed to live with and interact with other humans? Will we be drawn ever further to our screens and devices? Looking around at people on the train I sit on as I write this, I don't feel we are that far from this scenario already. So, what is the solution? For me, in education we need to ensure a balance between technology and humanity. If students are to do more learning via screens and personalised AI teachers, and may converse with a virtual AI friend, we also need to find opportunities for social interaction, for play, for fun, but also for arguments and debates; simply more opportunities for socialisation. And maybe this is the future for schools and colleges: that these are the places for socialisation and developing social skills.

Conclusion

AI is here now and here to stay, and as a result of it we need to ask fundamental questions about education as it currently stands.   What are we trying to achieve?   Is the factory model of batches of students taught the same programme still appropriate?     How do we assess learning in a world of AI and actually what should we be assessing?

AI will keep progressing, and if we don't ask questions of our current educational system ourselves, AI will be the threat the Times article suggested it will be, as AI will force the questions upon us. And if education has changed little in over 100 years, I can only imagine how disruptive the sudden forced changes may be. But if we are proactive, it may be that AI is also an opportunity: an opportunity to challenge and reassess the current model of education to find something more suited for the years ahead, years which will invariably involve more and more AI solutions.

Dangers of AI in education

I am now onto the third post in my AI series following the Times' "AI is clear and present danger to education" article. In post one I provided some general thoughts (see here), while in post two I focused on some of the potential positives associated with AI (see here); now I would like to give some thought to the potential negatives. I may not cover all the issues identified in the article, however I hope to address the key issues as I see them.

The need for guardrails around AI

One of the challenges with technology innovation is the speed with which it progresses. This speed, driven by companies' wish to innovate, is so quick that the potential implications often aren't fully explored and considered. Did we know about the potential for social media to be used to promote fake news or influence political viewpoints, for example? From a technology company's point of view the resultant consequences may be seen as collateral damage in the bid to innovate and progress, whereas others may see this as more a case of companies seeking profit at any cost. One look at the current situation with social media shows how we can end up with negative consequences we may wish we could reverse. But sadly, once the genie is out of the bottle it is difficult or near impossible to put back, plus it seems clear from social media that companies' ability and will to police their own actions is limited. We do, however, need to stop and remember the positives of social media, such as the ability to share information and news at a local level in real time, connectedness to friends and family irrespective of geographic limitations, leisure and entertainment value, and a number of other benefits.

So, with a negative focus, the concern regarding the need for AI "guardrails" sounds reasonably well founded, however who will provide these guardrails? If it is government, for example, won't this simply result in tech companies moving to those countries with fewer guardrails in place? Companies are unlikely to want to slow down by adhering to government guardrails where this may result in them ceding advantage to their competitors. And in a connected world it is all the more difficult to apply local restrictions, especially as it is often so easy for end users to simply bypass them. Also, if it is government, are governments necessarily up to date, skilled and impartial enough to make the right decisions? There is also the issue of the speed with which legislation and "guardrails" can be created, as the related political processes are slow, especially when compared with the advancement of technology, so by the time any laws are near to being passed, the issues they seek to address may already have evolved into something new. To be honest, the discussion of guardrails goes beyond education and is applicable to all sectors which AI will impact, likely most if not all sectors of business, public services, charities, etc.

Cheating

There has been lots of discussion of how students might use AI solutions to cheat, with risks to the validity of coursework being particularly notable. There is clearly a threat here if we continue to rely on students submitting coursework which they have developed on their own over a period of time. How do we know it is truly the student's own work? The only answer I can see is teacher professional judgement and questioning of their students, but this approach isn't scalable. How can we ensure that teachers across different schools and countries question students in the same way, and make the same efforts to confirm the origin of student work? The moderation and standardisation processes used by exam boards to check teacher marking is consistent across schools won't work here. We will also need to wrestle with the question of what it means for submitted work to be the student's "own" and "original" work. Every year students submit assessments, more and more gets written online, and now AI adds to the mix; with this growing wealth of text, images, etc, the risk of copying, both purposeful and accidental, continues to increase. The recent court cases involving Ed Sheeran are, for me, an indication of this. When writing and creating was limited to the few, plagiarism was easy to deal with, but in a world where creativity is "democratised", as Dan Fitzpatrick has suggested will occur through use of AI, things are not so simple.

Conclusion

The motives of tech companies for creating AI solutions may not always be in the best interests of users. They are, after all, seeking to make money, and in the iterate-and-improve model there will be unintended consequences. Yet the involvement of government to moderate and manage this innovation isn't without its own consequences, including where some governments' motives may be questionable.

In looking at education, the scalable coursework assessment model has worked for a long period of time, however AI now casts it into question. But was its adoption about it being the right way to measure student learning and understanding, or simply the easiest method to do this reliably at scale?

Maybe the key reason for AI being a threat is the fact that, if we accept it is unavoidable, it requires us to question and critique the approaches we have relied on for years, for decades and even for centuries.

The benefits of AI to education

This is the second of a series of posts prompted by the Times article titled "AI is clear and present danger to education". The first part of the series can be read here and focussed on some initial thoughts in relation to the headline. In this post I would like to focus on some of the possible benefits that AI might bring to education the world over, before getting to the risks mentioned in the Times article in subsequent posts.

Some benefits of AI

One benefit is the potential for AI to help with the teacher workload challenge by automating and assisting with some of the more routine tasks. In my first post I identified the workload issue, or as some would categorise it, crisis, as a challenge and threat to education in much the same way that AI is being categorised as a threat. Having spent over 20 years working in education I have seen many things added to a teacher's role and responsibilities but scarily few tasks or requirements ever removed. Now AI won't remove things, but it should help make them easier. Creating lesson plans, course outlines and lesson resources, writing parental reports, dealing with emails and many other tasks can now be completed quicker through the use of AI. I am being careful here in saying that such tasks will be done "quicker" rather than done by AI, as my view is that AI is a tool, and it is the professionalism of the teacher which will check and refine content produced by AI before its use. Given the risks of bias within AI and of incorrect information being presented, the need for human checking will remain for some time. However, a human with the aid of AI will be able to get things done quicker than without, either allowing more to be done or allowing more focus on what matters, rather than the more mundane tasks an AI can help with. And in terms of "what matters", I would see this freeing up more time for teachers to focus on their students and their learning.

The potential for AI to engage more students the world over in high-quality learning is also worthy of note. I have long looked at the data teachers are asked to gather, which is often gathered once and used once, and been concerned by the wealth of data and how little is actually done with it. Most of the useful data relating to learning in classrooms is never actually recorded. It is basically the day-to-day, minute-to-minute interactions of the teacher which shape how the teacher approaches their teaching and the learning. But an online learning platform with AI can gather this data and more. It can look at the delay between a question and an answer for each student. It can look at mouse movements, the pattern of correct versus wrong answers, and the time of day, and all of this for every student using the platform. Combined with appropriate AI, it can direct students to appropriate content to meet their needs, providing 1:1 advice and support much in the way a teacher can. AI can provide personalised 1:1 teaching and learning at a scale not currently possible. Through AI-based platforms, students the world over can access personalised learning even where the education system in their home country may be lacking, although I note this relies on access to technology and those required to support it. It may be that AI will draw focus on the digital divide, and possibly widen it for those without access or without understanding of how AI might be used. It may also be that AI will create educated individuals in countries and areas where conventional schooling has been lacking. As I think about this, Sugata Mitra's "hole in the wall" experiment springs immediately to mind, albeit now with AI providing a personalised tutor to all those engaging with the technology. I suspect with AI, Sugata's experiment would have seen even more success in terms of student learning.

Conclusion

The issue of AI is not a binary one of AI as threat or saviour. The idea of AI as a threat also has its issues in terms of popular media; just think of the Terminator or HAL and you can see that perception may tend towards the negative, and that's maybe a bit of an understatement. The reality is that AI, like many other technology tools, will provide its benefits but also its risks and threats. There will be those who use it carefully and responsibly, those who use it carelessly and those who use it maliciously.

But I can say the same about the humble hammer.

References:

Sugata Mitra’s Hole in the Wall Experiment (2017), Revise Sociology

AI ‘is clear and present danger to education’ (May 2023), The Times

AI is an opportunity for education

Reading the "AI is clear and present danger to education" article in The Times the other day conjured up images of Harrison Ford, political intrigue and the risk of the collapse of government. OK, so I very much enjoyed the various Jack Ryan movies, particularly those starring Harrison Ford, hence the imagery. However, the article's focus was on concerns in relation to Artificial Intelligence (AI) and its potential impact on education, with the article citing head teachers as saying that AI "is the greatest threat to education".

The headline paints a nice simple picture, however as I sat down to write this blog piece it was clear to me things aren't that simple. And as I wrote, it became clear in my mind that this issue is complex indeed and that a single post wouldn't do it justice. This, therefore, is the first of a series of posts discussing AI and the danger it may present, as well as the opportunities.

Is AI a threat to education?

Yes, but this answer focuses purely on the risks and negative impact. The question "can AI benefit education?" would also, in my view, result in an affirmative response. Is a hammer a threat or something of benefit? It depends on who is wielding it and for what purpose; the hammer is but a tool, although I note that AI is a far more powerful and flexible tool, for good or for ill.

We also need to ask what we mean by education: education in its broadest sense, such as when a parent models behaviour for their child, or in the sense of the organisations and constructs of the formal education systems the world over. My reading of the article leads me to believe that the threat is to the current education system, processes and practices. This system and its practices have long had critics of their fitness for purpose in the modern world, with the late Ken Robinson being one of them. His Changing Education Paradigms TED talk dates back to 2010, 13 years ago. So maybe a threat to current education practices may be a good, and possibly overdue, outcome. After all, little has changed in how the education system works globally in the last 100 years. Maybe AI is a much-needed catalyst for educational change.

I also note the article didn't apply a time frame to this threat. I recently saw a post on social media suggesting a 50% risk of AI causing a catastrophe resulting in the loss of most human life in the world, which also didn't provide a time frame. Looking far enough into the future you will always be able to get to a point where, between now and then, a 50% risk occurs. However, thinking about global warming, war, political divides, etc, I suspect we will reach a point where there is a 50% chance of human intelligence leading to such a catastrophe before the same risk in relation to AI is reached, assuming we aren't already there.

We also need to acknowledge there are other threats to education, including the challenge of providing access to education for all students across the world, workload issues as the education sector continues to seek to improve by adding more requirements and tasks to a teacher's role each year, and the challenge of teacher shortages. The solution to these issues is unlikely to involve maintaining the current status quo and current education system, so maybe these issues should also be seen as threats to the current education system. AI can be viewed as a threat, but it is far from the only one.

Conclusion

AI has the potential to be a threat to the current education systems and processes, but maybe a catalyst for change has been needed for some time. That said, AI could have a negative impact on education, however I would suggest it could also have a positive impact too. The likelihood, in my view, is that we get a bit of both eventualities, with some positive and some negative impacts, however AI is here now and is not going away. And if strict restrictions are put in place, either people will bypass them or the companies creating AI solutions will simply move to jurisdictions where the restrictions are less strict. AI solutions will continue to be created, continue to advance and continue to be used. My view therefore is that we need to view AI as yet another technology tool, albeit one of the most significant in history, where we need to embrace its use, shaping it to have the positive impact we wish to see, while remaining aware of the risks and mitigating them as much as possible.

So maybe the newspaper article's title should have been: AI is clear and present danger and opportunity for education.

Sadly I don’t think the above makes for quite as snappy a headline.

References

AI is ‘clear and present danger to education’ (May 2023), The Times