The benefits of AI to education

This is the second in a series of posts prompted by The Times article titled “AI is clear and present danger to education”.   The first part of the series can be read here and focussed on some initial thoughts in relation to the headline.   In this post I would like to focus on some of the possible benefits that AI might bring to education the world over, before getting to the risks mentioned in The Times article in subsequent posts.

Some benefits of AI

One benefit is the potential for AI to help with the teacher workload challenge by automating and assisting with some of the more routine tasks.   In my first post I identified the workload issue, or as some would categorise it, crisis, as a challenge and threat to education in much the same way that AI is being categorised as a threat.   Having spent over 20 years working in education I have seen many things added to a teacher's role and responsibilities but scarily few tasks or requirements ever removed.  Now AI won't remove things, but it should help make them easier.    Creating lesson plans, course outlines and lesson resources, writing parental reports, dealing with emails and many other tasks can now be completed more quickly through the use of AI.   I am being careful here in saying that such tasks will be done “more quickly” rather than done by AI, as my view is that AI is a tool and that it is the professionalism of the teacher which will check and refine content produced by AI before its use.   Given the risks of bias within AI, and of incorrect information being presented, the need for human checking will remain for some time. However a human with the aid of AI will be able to get things done more quickly than without, either allowing more to be done or allowing more focus to be put on what matters, rather than on the more mundane tasks an AI can help with.   And in terms of “what matters”, I would see this freeing up more time for teachers to focus on their students and their learning.

The potential for AI to engage more students the world over in high quality learning is also worthy of note.   I have long looked at the data teachers are requested to gather, which is often gathered once and used once, and been concerned by the wealth of data and how little is actually done with it.     Most of the useful data in relation to learning in classrooms is never actually recorded.  It is the day to day, minute to minute interactions of the teacher which shape how the teacher approaches their teaching and the learning.    But an online learning platform with AI can gather this data and more.  It can look at the delay between a question and an answer for each student.  It can look at mouse movements, the time taken for correct answers versus wrong answers, and the time of day, and all of this for every student using the platform.    Combined with appropriate AI it can direct students to appropriate content to meet their needs, providing 1:1 advice and support much in the way a teacher can.    AI can provide personalised 1:1 teaching and learning at a scale not currently possible.   Through AI based platforms students the world over can access personalised learning even where the education system in their home country may be lacking, although I note this relies on access to technology and to those required to support it.    It may be that AI will draw focus on the digital divide, and possibly widen it for those without access or without an understanding of how AI might be used.    It may also be that AI will create educated individuals in countries and areas where conventional schooling has been lacking.   As I think about this, Sugata Mitra's “hole in the wall” experiment springs immediately to mind, albeit now with AI providing a personalised tutor to all those engaging with the technology.   I suspect that with AI, Sugata's experiment would have seen even more success in terms of student learning.
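As a sketch of the kind of interaction data such a platform could record and act on: the event fields, method names and thresholds below are illustrative assumptions of mine, not any real platform's design.

```python
from dataclasses import dataclass, field

@dataclass
class QuestionEvent:
    """One question attempt by one student."""
    student_id: str
    shown_at: float      # seconds since session start
    answered_at: float
    correct: bool

@dataclass
class StudentProfile:
    events: list = field(default_factory=list)

    def record(self, event: QuestionEvent) -> None:
        self.events.append(event)

    def mean_delay(self) -> float:
        """Average thinking time between seeing a question and answering it."""
        if not self.events:
            return 0.0
        return sum(e.answered_at - e.shown_at for e in self.events) / len(self.events)

    def accuracy(self) -> float:
        """Fraction of attempts answered correctly."""
        if not self.events:
            return 0.0
        return sum(e.correct for e in self.events) / len(self.events)

def needs_support(profile: StudentProfile, max_delay: float = 30.0,
                  min_accuracy: float = 0.6) -> bool:
    """Flag a student for easier content or extra guidance.
    The thresholds here are purely illustrative."""
    return profile.mean_delay() > max_delay or profile.accuracy() < min_accuracy
```

The point is simply that, unlike the unrecorded minute to minute classroom interactions, every one of these signals is trivially cheap for a platform to capture for every student at once.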

Conclusion

The issue of AI is not a binary one of AI as threat or saviour.    The idea of AI as a threat also has its issues in terms of popular media;  just think of the Terminator or HAL and you can see that perception may tend towards the negative, and that's maybe a bit of an understatement.   The reality is that AI, like many other technology tools, will provide its benefits but also its risks and threats.   There will be those who use it carefully and responsibly, those who use it carelessly, and those who use it maliciously.

But I can say the same about the humble hammer.

References:

Sugata Mitra’s Hole in the Wall Experiment (2017), Revise Sociology

AI ‘is clear and present danger to education’ (May 2023), The Times

AI is an opportunity for education

Reading the “AI is clear and present danger to education” article in The Times the other day conjured up images of Harrison Ford, political intrigue and the risk of the collapse of government.   Ok, so I very much enjoyed the various Jack Ryan movies, particularly those starring Harrison Ford, hence the imagery. The article's focus, however, was on concerns in relation to Artificial Intelligence (AI) and its potential impact on education, with the article citing head teachers as saying that AI “is the greatest threat to education”.

The headline paints a nice simple picture, however as I sat down to write this blog piece it was clear to me that things aren't that simple.   And as I wrote it became clear in my mind that this issue is complex indeed and that a single post wouldn't allow me to do it any justice.    This, therefore, is the first of a series of posts discussing AI and the danger it may present, as well as the opportunities.

Is AI a threat to education?

Yes, but this focusses purely on the risks and negative impact.   The question “can AI benefit education?” would also, in my view, receive an affirmative response.    Is a hammer a threat or something of benefit?   It depends on who is wielding it and for what purpose; the hammer is but a tool, although I note that AI is a far more powerful and flexible tool, for good or for ill.

We also need to ask what we mean by education.   Is it education in its broadest sense, such as when a parent models behaviour for their child, or in the sense of the organisations and constructs of the formal education systems the world over?   My reading of the article leads me to believe that the threat is to the current education system, its processes and practices.   This system and its practices have long had critics of their fitness for purpose in the modern world, with the late Ken Robinson being one of these.   His “Changing Education Paradigms” TED talk dates back to 2010, some 13 years ago.   So maybe a threat to current education practices may be a good, and possibly overdue, outcome.  After all, little has changed in how the education system works globally in the last 100 years.    Maybe AI is a much needed catalyst for educational change.

I also note the article didn't apply a time frame to this threat.   I recently saw a post on social media suggesting a 50% risk of AI causing a catastrophe resulting in the loss of most human life in the world, which also didn't provide a time frame.   Looking far enough into the future you will always be able to get to a point where, between now and then, a 50% risk occurs. However, thinking about global warming, war, political divides, etc., I suspect we will reach a point where there is a 50% chance of human intelligence leading to such a catastrophe before the same risk in relation to AI is reached, assuming we aren't already there.

We also need to acknowledge that there are other threats to education, including the challenge of providing access to education for all students across the world, workload issues as the education sector continues to seek to improve by adding more requirements and tasks to a teacher's role each year, and the challenge of teacher shortages.    The solution to these issues is unlikely to involve maintaining the current status quo and current education system, so maybe these issues should also be seen as threats to the current education system.     AI can be viewed as a threat, but it is far from the only one.

Conclusion

AI has the potential to be a threat to current education systems and processes, but maybe a catalyst for change has been needed for some time.   That said, while AI could have a negative impact on education, I would suggest it could also have a positive impact too.   The likelihood, in my view, is that we get a bit of both eventualities, some positive and some negative impacts. However AI is here now and is not going away.   If strict restrictions are put in place, either people will bypass them or the companies creating AI solutions will simply move to jurisdictions where the restrictions are less strict.   AI solutions will continue to be created, continue to advance and continue to be used.    My view therefore is that we need to view AI as yet another technology tool, albeit one of the most significant in history, where we need to embrace its use, shaping it to have the positive impact we wish to see, while seeking to remain aware of the risks and to mitigate them as much as possible.

So maybe the newspaper article's title should have been: AI is clear and present danger, and opportunity, for education.

Sadly I don’t think the above makes for quite as snappy a headline.

References

AI ‘is clear and present danger to education’ (May 2023), The Times

An AI divide?

Artificial Intelligence (AI) is the big talking point at the moment, with all its many potential benefits along with some risks and challenges.   One challenge that doesn't seem to have been discussed as often, however, is that of digital divides, where AI might represent yet another divide between the haves and have-nots, and the cans and cannots.

Digital Divides

The term digital divide refers to gaps between people or communities who have access to and use of digital technologies such as computers, smartphones, and the internet, and those who do not. This gap can be attributed to a variety of factors, including socioeconomic status, geographic location, age, race, and education level.

Before considering AI as an additional divide, several different types of digital divide already exist. Some examples include:

Access divide: This refers to differences in physical access to digital technologies, such as lack of broadband internet access in certain areas, lack of availability of computers or smartphones, or lack of access to digital skills training.

Usage divide: This refers to differences in how people use digital technologies, such as differences in the types of devices people use, how often they use them, and what they use them for.

Skills divide: This refers to differences in digital literacy and skills, such as the ability to use digital technologies effectively and safely, the ability to access and evaluate online information, and the ability to create and share digital content.

Content divide: This refers to differences in the availability and quality of digital content, such as differences in access to online educational resources, news and information, and cultural and entertainment content.

Economic divide: This refers to differences in the economic benefits and opportunities that digital technologies can provide, such as differences in access to online job opportunities, e-commerce, and digital financial services.

The AI Divide

Artificial Intelligence represents a potential additional divide, although the issues may sit under the divides above: access to Artificial Intelligence solutions, the relevant skills and understanding to make appropriate use of AI, and the resources to make use of it.     Personally, I present AI as a new additional divide rather than one contained within the above, due to what I see as the wide ranging potential impact which AI can have on the world as it is now.   In my area, that of education, I feel this is particularly relevant.  Aside from student access to technology, skills, etc., there are some schools who will seek to explore the use of AI solutions, whereas in other cases there may be a drive to block, filter or control access.

Considering the divide that AI may create, I can see issues for those who do not have access or do not have the skills to use AI.   Those who do may become more efficient through using AI to carry out more mundane tasks, or to provide a basic starting point for a task rather than having to start from scratch.   The likes of the 30mins challenge, which shows how much more might be possible through the use of AI tools, illustrates this nicely.   From a creativity point of view, AI might, as Dan Fitzpatrick has said, “democratise creativity”, meaning those who can and do use AI may have greater potential for creative outputs than those who do not or cannot use AI.   And those are but two areas where AI use and understanding may create a divide; I suspect there are many others.

Conclusion

We wish there to be equitable treatment for all, however the ongoing discussion of digital divides highlights that, although progress may be being made, we aren't there yet.   The increasing discussion and use of Artificial Intelligence adds yet another factor which can create a digital divide and therefore negatively impact equity.   We need to be conscious of this in the same way as we are conscious of the other challenges of AI, including bias, attribution, accuracy, etc.

Originality

Producing original content is a fundamental aspect of creating meaningful and valuable information for audiences across various mediums. In terms of assessment within schools, colleges and universities, students are expected to produce “original” work to evidence their learning.

So, what does it mean to produce original content? At its core, originality means creating something that is entirely your own. This could be a new idea, a fresh perspective on a familiar topic, or a unique approach to storytelling. Whatever the case may be, originality is about bringing something new and valuable to the table that hasn’t been seen before.

But let's flip that premise: there are a limited number of words available, and these words are shared by all writers for all time, so as people continue to write, the probability of two people writing the same thing can only increase.   It's a bit like buying a lottery ticket: the more tickets you buy, and the longer the period over which you buy them, the more likely you are to hit the winning numbers.   And that analogy may fit in other ways, in that the probability of a winning lottery ticket and that of an exact match of wording and phraseology may be similarly unlikely.    The longer the piece of writing, the less likely a match becomes, whereas for shorter pieces of text the probability is greater.   But either way it isn't impossible!
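To put a rough number on that intuition, here is a toy calculation, under the crude assumption (mine, purely for illustration) that each word is chosen independently and uniformly from a shared working vocabulary. Real writing is nothing like uniform, so this if anything understates how quickly long passages diverge.

```python
def p_exact_match(n_words: int, vocab_size: int = 1000) -> float:
    """Chance that two independent writers produce the same n-word passage,
    assuming (very crudely) each word is drawn uniformly and independently
    from a shared working vocabulary of vocab_size words."""
    return float(vocab_size) ** -n_words

# Even with a tiny 1,000-word vocabulary the odds collapse with length:
# a two-word match is one in a million; a ten-word match is one in 10**30.
```

This matches the lottery analogy above: short phrases will collide all the time, while an exact match over a paragraph is astronomically unlikely, yet never strictly impossible.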

Let's step back for a moment and look at an academic concern, that of plagiarism. Plagiarism is the act of taking someone else's work, ideas, or words and presenting them as your own. It's a form of intellectual theft and can have serious consequences, including invalidating qualifications or exam results for students who are caught.   “Taking someone else's work” and “presenting it as your own”: but if I read something, agree with it, and then present it as my viewpoint, haven't I just taken someone else's work and presented it as my own?    Does writing it in my own words make it original and my own contribution, and at what point?    How many words do I need to change before it becomes my original contribution as opposed to plagiarism?   I note that the plagiarism detection services I have used in the past present a plagiarism score which tries to quantify how similar a piece of work is to other pieces of student work on file.  And if I combine it with readings from other sources, is this better, or just plagiarising from a number of sources?     And what if I get AI to write the first draft of the content, then refine it?    Is this plagiarising from the multiple sources the AI used as training data, or simply plagiarising from the AI, or maybe it isn't plagiarism at all? Considering artwork rather than writing, if I get an AI to produce a self portrait of Van Gogh but painted in the style of Monet, whom have I plagiarised?

I don't believe the concept of originality, or of plagiarism beyond the cut and paste of a paragraph of text, was ever an easy issue in schools, albeit we have treated it as easy in the past.   With AI this issue becomes that bit more complex and difficult to navigate.    We may present our students with the assessment and with a mark scheme, but do we need to start providing more discussion in relation to originality, and to what acceptable use of AI platforms might look like?    I suppose the challenge here is whether we know what this might look like.

But a bigger question may be why we ask for these written assessments to be completed in the first place.   Is the written work a proxy for evidence of learning and understanding, one which is easier, and possibly more reliable, than actually having a discussion with each and every student to check their understanding?   And if we can no longer rely to the same extent on the piece of extended written work, do we need to move to more student/teacher discussions? But if so, how will we address bias and other factors impacting on individual teachers' assessment of students?

Conclusion?

I am not sure the above has presented any answers beyond some of my musings and more questions.    But for now that may be enough, to try to add to the discussion in relation to education and how it may look in the future, given that effective AI solutions are already available to our students.

References

Written with the help of ChatGPT (OpenAI)

Technology and Exam boards: Time to modernise?

I recently received a request from a teacher in relation to getting some software installed on their school device to support them in marking for an exam board.    Now I know this isn't part of their school role, however, having been a standards moderator in the past, I understand the benefits to schools and colleges of having markers or moderators within teaching departments.   I am therefore eager to enable staff by supporting such requests. However, this request involved a piece of software which, according to the exam board, requires admin rights to the laptop both for installation and for the operation of the application.   When the concern regarding cyber security was raised, the exam board's final reply was that the staff member should install the software on a personal rather than a school laptop.   This got me thinking about how technology has changed but how exam boards have been slow to change.   This is all the more evident currently; just look at the advances in Large Language Models (LLMs) with ChatGPT over the last six months.

Traditionally, examination boards have relied on paper-based tests and manual grading systems. However, these methods have several drawbacks, including the potential for errors and delays in results processing.    One way examination boards could modernise is by moving towards computer-based testing, which allows for faster and more accurate grading, as well as the ability to customise exams to the specific needs of each student.  I very much believe that adaptive testing is the way forward, with this also enabling students to take exams in their own time when they are ready, as opposed to at a set time with all other students.   Adaptive testing also supports students taking their tests anywhere, including at home, rather than having to be crammed into a large exam hall where the conditions themselves are not exactly designed for optimum student performance.    Additionally, the results would be available much more quickly, reducing the stress associated with a long wait between the exams and the results being released.   There is also the potential benefit of a reduction in the amount of paper used in exams, the transporting of these papers, etc., which may help make the exam process more environmentally friendly.
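To illustrate the adaptive idea in code: the sketch below steps question difficulty up after a correct answer and down after a wrong one. This is a toy of my own; real computer-adaptive tests use item-response theory to estimate ability, and nothing here reflects any actual exam board's system.

```python
def run_adaptive_test(item_bank, answer_fn, start_difficulty=3, n_items=5):
    """Serve questions whose difficulty tracks the student's performance.

    item_bank:  dict mapping difficulty level (1..5) -> list of questions
    answer_fn:  callable(question) -> bool, True if answered correctly
    Returns (levels visited, final level as a rough ability estimate).
    """
    level = start_difficulty
    history = []
    for _ in range(n_items):
        # Toy selection: first item at this level (a real system
        # would track which items have already been used).
        question = item_bank[level][0]
        correct = answer_fn(question)
        history.append(level)
        # Step up on success, down on failure, clamped to the bank's range.
        level = min(5, level + 1) if correct else max(1, level - 1)
    return history, level
```

A strong student quickly climbs to the hardest items while a struggling one settles at easier ones, which is exactly what lets each student sit the exam "when they are ready" rather than on a fixed, one-size-fits-all paper.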

Another way examination boards can modernise is by utilising artificial intelligence (AI) in the grading process. This appears all the more relevant at the moment with developments in LLMs like ChatGPT.   AI-powered grading systems can quickly and accurately grade exams, allowing for quicker results processing and reducing the potential for errors. AI can also analyse student performance data to provide insights into areas where students may need additional support and guidance.   Now, I note here that the use of AI may introduce new errors to the marking process, however I would suggest that the volume or magnitude of these errors, when compared with human marking, is likely to be lower.  It isn't “the solution” to errors, but definitely a step in the correct direction.

Related to the above, exam boards need to acknowledge the existence of AI and LLMs, and the fact that they will become an increasing part of life and therefore a tool which students will increasingly use in their studies, be it for revision, to help develop critical thought, or for creating coursework or other learning content.   So far only the IB (International Baccalaureate) has really acknowledged ChatGPT and how it sees it impacting on its courses, providing at least some steer for schools on what appropriate or inappropriate usage might look like, and at least some direction for schools and teachers in managing these new technologies.

Moreover, examination boards can use technology to improve exam security. Online proctoring tools can help ensure that students are taking exams in a secure and controlled environment, preventing cheating and other forms of academic dishonesty.    Related to this, I have seen exam boards continuing to send out resources on CDs or USB drives, or requesting student video or audio work in similar formats.   It is about time that they provided appropriate online portals to allow the quick, efficient and secure transfer of such exam and coursework data.

Finally, examination boards can use technology to make their exams more accessible to students with disabilities or special needs. For example, screen readers, text-to-speech software, and other assistive technologies can help students with visual or hearing impairments to take exams on an equal footing with their peers.   This is already happening for a subset of students, however I suspect boards will eventually need to acknowledge that all students are individuals and have differing learning preferences, including in their device use and the online tools they use.  In classrooms teachers support students using a range of tools and techniques, so it is only correct to seek to support the same in the final exams which are, at least for now, viewed as so critical in a student's formal education.   As such, examination boards will need to adapt.

Conclusion

Technology has the potential to revolutionise the examination process, making it more efficient, accurate, and accessible. Examination boards must embrace these technological advancements to ensure that their exams are of the highest quality and that students receive accurate and timely results. By doing so, they can help prepare the next generation of students for success in a rapidly changing digital world.

And at a time when the pace of technology, particularly in relation to Artificial Intelligence solutions, has never been faster, the exam boards will need to significantly increase their agility and their ability to adapt to and embrace change.

AI and Learning Platforms

Software learning platforms which come complete with learning content for students to work through are not new.   I remember an online Maths programme from my days as a university student studying to become a teacher back in the late '90s.   Basically, you worked through content and were then presented with different options as to how you progressed through the programme.    As a learner, the individual modules of content were pretty much fixed, having been written into the software, but the path through the wider programme of learning was up to me.    I was provided options as to how I progressed from one module to the next.   Now, I was never a great fan of this, as each module was presented in a given way and worked through examples in a given way, as it was programmed to do.  If you didn't understand the way it was presented, there was no help or way to progress through that module, although you could move on to further modules in the hope they would provide you with insight which might eventually get you past the issue.    I liked the idea of online programmes and self paced learning, however I had concerns about user motivation, especially when you hit concepts which prove difficult for you to understand, and about the fixed nature of the content materials;   a great teacher adjusts and customises their learning materials and approach to their class and the individual students within it.   As such, the self paced learning aspect was a step forward, but that was about as far as it went.

Fast forward to more recently and little progress had been made, at least as far as I saw it.   Newer learning platforms are capable of gathering much more diagnostic data and analytics, which allow the developers and content writers to adjust and improve their content.   So, the content is better than the content I experienced in the '90s, but generally it still provides largely linear and fixed content, and if the content, its style, etc. don't match your needs, there is little that can be done.   And so, until very recently, I have had a largely negative view of learning platforms which come complete with the vendor's own content which teachers cannot adjust or customise to their own context.   They have their place, for example as a supplement to classroom teaching, or for self paced learning when teachers are absent, but that was it.

That was until recently, when I saw a video of some new developments within the Khan Academy platform, including its new use of the GPT-4 Large Language Model (LLM).    The content in terms of problems set within the platform, and the way they are worked through, still appears very linear and fixed.  So if it is a maths problem it will work through the problem in a specific way;  no change there.   The difference, and the massive leap forward in terms of learning platforms, is the new chatbot style assistant.   It prompts and supports the student using the platform.   It identifies common misconceptions and provides guidance.   It acts as a coach and facilitator, customising its responses to the efforts being made by the student, and this includes providing motivational “well dones” and corresponding emojis.    Watching the demo, it was almost as if there was a teacher sat behind the chatbot rather than an AI solution.    Now, I note that this demo was short and was for the purpose of showing off what is possible in the Khan Academy platform, so it may not be fully representative of how it all looks and feels in real life. However, if the final product is anything close to this, then it is a major shift forward.

Flipped learning is a concept long discussed as a way of freeing teachers from supporting students' practice of learning concepts, however maybe AI solutions like GPT-4, and its use in Khan Academy, will allow us to release teachers from more of the basic learning.    Maybe the AI and learning platform can be used here, allowing teachers to act more as facilitators rather than delivering new learning, and allowing them to focus much more on the higher order skills of creativity, critical thinking and the like.

AI and large language models could potentially facilitate significant shifts in what learning in our schools and colleges looks like, not in the distant future, but in the very near future indeed.

ChatGPT and IT Services

I recently wrote an article for the ANME on ChatGPT and on its benefits but also its risks.   You can read this here.   My view is that AI models like ChatGPT are going to become all the more common, and also more and more accurate, and therefore we need to explore them and identify how they might be positively used within education.   Seeking to block their use is, in my opinion, guaranteed to fail.

Following my post, I saw a reply on Twitter to the article with ChatGPT's view on AI and education.  You can see this here.    It picked up a couple of points which I hadn't included in my piece, and I note that some of my piece actually included content generated by ChatGPT itself.    It wasn't obvious that ChatGPT had a hand in both pieces, which suggests it won't be easy to identify where ChatGPT is used.

All this got me thinking about how ChatGPT might benefit IT Services and IT teams, particularly in schools.   As such, I gave some quick thought to possible use cases, which I have outlined below:

User guides and Help

ChatGPT can be used to create a knowledge base of information that can be easily accessed by IT staff and other school personnel, including simple user and help guides.  This seems like the most obvious and easiest use of ChatGPT;  I have already tried asking it some questions in relation to iPad issues and its responses were clear and accurate.

Creating software and other solutions

Where schools are creating their own internal software solutions, including websites, ChatGPT can help with the basic code building blocks, thereby speeding up development.   It will still require human input to finalise the projects and add that bit of creativity and flair, however ChatGPT can get us part of the way there, saving time and resources.

Policies, processes and procedure documentation

Writing policy and process documentation can quite often be a long and laborious job but ChatGPT and other AI language models can quickly put together a basic document which human staff can then refine and customise to fit the school.

Chatbots

ChatGPT can be used to create a chatbot that can interact with students and staff, answering questions and providing information.   This therefore allows IT support staff to focus on more complex issues or more strategic tasks.

Language Translation

Where schools include non-English speaking students and families, ChatGPT can assist IT support staff in communicating with them by providing translations in real time.

Process automation

A number of the above relate to process automation, where ChatGPT is used to automate common support tasks such as answering frequently asked questions, troubleshooting basic technical issues, and providing instructions for software and hardware.   There are likely other areas where simple processes can be automated through ChatGPT or other AI language models.
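As one sketch of what such automation might look like, a helpdesk triage function could try a canned FAQ answer first and only escalate to a human (or hand off to an LLM, not shown here) when nothing matches. The FAQ entries and keywords below are made up for illustration, not taken from any real service desk.

```python
import re

# Illustrative FAQ entries; a real deployment would load these from the
# school's own knowledge base, with an LLM as a fallback for unmatched queries.
FAQ = {
    ("password", "reset"): "Use the self-service portal to reset your password.",
    ("wifi", "connect"): "Join the 'Student' network and sign in with your school account.",
    ("printer",): "Check you are sending to the correct print queue for your building.",
}

def triage(ticket: str):
    """Return (answer, handled): a canned answer if every keyword of some
    FAQ entry appears in the ticket, otherwise escalate to a human."""
    words = set(re.findall(r"[a-z]+", ticket.lower()))
    for keywords, answer in FAQ.items():
        if set(keywords) <= words:
            return answer, True
    return "Escalated to the IT Services team.", False
```

Even this crude keyword matching shows the shape of the hybrid approach: the routine queries get an instant answer, and only the genuinely novel ones reach a person.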

Conclusion

I think one of the key conclusions I arrive at is not related to the benefit of using ChatGPT, or other AI language models, in itself, but to the potential for ChatGPT and a human user to work together.   This hybrid approach of AI and human is, in my view, the way forward, as the two complement each other.  The AI solution can easily do the basic and repeatable parts of a task, such as creating a user guide, while the human can bring the flair and creativity to make such guides engaging, accessible and usable.    It isn't a case of ChatGPT or humans, or of ChatGPT replacing humans.

I suspect there are many other applications of ChatGPT within an IT Support or IT Services capacity which are yet to be realised and I look forward to finding out more in terms of how AI Language Models can enable IT staff to deliver, enhance and even redefine the services provided to users in schools and colleges, and to the communities they serve.

These are interesting times!

It's only Artificial Intelligence!

Meta released a chatbot for use in the US whose responses are based on internet data.   It wasn't long before the chatbot was being less than positive about Meta's CEO, Mark Zuckerberg.   Overall, a bit of a novelty, but it might also give us a little insight into the Artificial Intelligence and Machine Learning algorithms which underpin an increasing number of the services we use online.

It is highly unlikely that Meta specifically programmed their chatbot to suggest that the CEO did “a terrible job in testifying before congress”, however this is the feedback it provided upon being asked “what did you think of Mark Zuckerberg”.    This response is likely the result of the chatbot analysing data sources on the internet and identifying this response as the most likely to be true, or at least true in the perceptions of those sharing their thoughts online.   So here we see a few problems:

  1. As users, and even as developers, we will not necessarily be able to identify how the response was arrived at.   It's a black box system: we can see the inputs and the outputs but not the process.    This should make us a little nervous as, especially for important decisions, it would be nice to understand how the answer an algorithm provides was arrived at.   Imagine an AI being used to assess mortgage applications: how would you feel if no-one could explain why your application was refused?    From a user's point of view, with a black box system there is also the danger that the service provider does have control over the algorithm and can therefore directly influence and control its output to suit their own needs.  In this case the black box provides a smokescreen for potentially unethical practices.
  2. The chatbot repeats what it sees to be true, or the commonly held belief, based on the data sources it accesses.  Bias could easily be introduced here through the internet sources which the chatbot is given access to, or through the queries it might use in identifying pertinent information.   We should be naturally questioning of a solution which may be inherently biased.   One example of this is the issues surrounding facial recognition, where AI was trained largely on white faces, reflecting the predominant skin colour among those developing the solution.  As such we ended up with AIs which did a poorer job of facial recognition when presented with non-white faces.
  3. Again relating to the repetition of commonly held belief, the chatbot may simply act as an echo chamber for such beliefs, disregarding minority views.    And if a number of chatbots were used together, they might be able to powerfully shape the perceived truth on social media channels through repeated posting.

Some of the above is of concern, but then I start to think about the alternative: a human rather than AI-based system.    Humans are not transparent in their thinking processes either; although we might seek to explain how we arrived at a solution, we rely on subconscious influences and decision-making processes to which we have no access.    Humans, just like an AI-based system, may be biased or may seek to serve their own needs or the needs of their employer.    And humans also tend towards the likeminded, which creates the echo chambers mentioned above.    So maybe AI is no more problematic than a human-based solution.

Is the challenge therefore that AI is technology rather than a human being like us?   Is it maybe that this difference influences our feeling of unease or unhappiness with the risks mentioned above, and that we simply accept similar issues in human-based processes because, after all, we are “only human”?

AI in schools

I recently read an article discussing how AI might be used in schools from 2025 onwards.   This seems like a reasonably logical bit of future prediction but on reflection I quickly came to identify some concerns.

Firstly, AI can cover a very broad range of activities.   Are we talking about AI designed to interpret natural language, such as when your Alexa identifies and then responds to your verbal queries, or a more general AI solution more akin to Commander Data in Star Trek?    There is quite a gulf between these two extremes, with the second likely to be some time off before it is achievable.

If we therefore accept we are looking at specific, focussed AI solutions in schools by 2025, I think they have clearly got the year wrong, as we are already doing it now, in 2022.    We have our spell checker and grammar checker in Word, and we now have transcription tools in Teams and PowerPoint, including the ability to offer real-time, or near real-time, translation of spoken content.  These are all AI, or perhaps machine learning, based solutions being used in schools and colleges by teachers today.   Not 3 years away in 2025, but today.

So, the headline seems on initial inspection to be quite aspirational and inspirational: teachers using artificial intelligence in their classrooms in only 3 years' time.   But on a more detailed look we find it isn't so inspirational, as we are pretty much already there.   Maybe the headline hints at a greater use of AI, or more advanced AIs being used more often and to greater effect, but that's not the way the headline comes across.   Maybe we will use more AI-based platforms, such as learning platforms which direct students through personalised learning programmes, although I have some concerns about this too.  Or maybe there will be greater use of AI and machine learning in the setting and marking of both summative and formative assessments.

I suspect AI use in schools will grow between now and 2025.    I suspect it will become more common in general, so it won't be a school-centric thing.   However, I suspect that a teacher will still be a teacher and the key to teaching and learning; with AI tools, as with current EdTech tools, it will take skilled teachers to wield them as and when appropriate in crafting the best possible learning experience for their students.

AI and Bias

I recently saw an article in the Guardian regarding a call from an Artificial Intelligence expert to cease using AI in the UK due to concerns that AI systems were “infected with biases” and couldn't be trusted (McDonald, 2019).

I too have concerns in relation to bias in AI, particularly in relation to AIs as black box systems, where we are unable to ascertain how an AI might have arrived at a specific decision.    For example, the Guardian article references immigration-related applications of AI, so an AI might decide to approve or reject an immigration application based on the data available to it.    The danger here, in my view, is the potential lack of transparency in the AI's decision-making process.

Despite my concerns, however, I do not advocate banning AI use, as the alternative to AI decision making is human decision making, which is far from free of bias.   In Sway (2020), P. Agarwal states “we are all biased – to a certain degree”, going on to discuss in detail human bias and particularly unconscious bias.   Agarwal also states that “we cannot erase our biases completely” and, in relation to technology, suggests that technology solutions, which therefore include AI, “incorporate the biases from the designers and data engineers” who design them.   As such it doesn't seem fair to hold AIs to a standard, that of being free of bias, which the human designers, users, etc. of such systems are themselves unable to achieve.

For me the critical issue is being aware of the bias which may exist and seeking to mitigate and manage the resultant risks.   We have to accept that bias is unavoidable: unavoidable in us humans, and also unavoidable in the systems and AIs we create.    It is from this need for awareness that my concern regarding the potential lack of transparency arises.

References:

McDonald, H. 2019. AI expert calls for end to UK use of ‘racially biased’ algorithms. [Online]. [Accessed 27 December 2020]. Available from: https://www.theguardian.com/technology/2019/dec/12/ai-end-uk-use-racially-biased-algorithms-noel-sharkey

Agarwal, P (2020). Sway: Unravelling Unconscious Bias. United Kingdom: Bloomsbury Publishing.