An AI divide?

Artificial Intelligence (AI) is the big talking point at the moment, with its many potential benefits along with some risks and challenges.   One challenge, however, that doesn’t seem to be discussed as often is that of digital divides, where AI might represent yet another divide between the haves and have-nots, and the cans and cannots.

Digital Divides

The term digital divide refers to gaps between people or communities who have access to and use of digital technologies such as computers, smartphones, and the internet, and those who do not. This gap can be attributed to a variety of factors, including socioeconomic status, geographic location, age, race, and education level.

Before considering AI as an additional divide, it is worth noting the several types of digital divide that already exist. Some examples include:

Access divide: This refers to differences in physical access to digital technologies, such as lack of broadband internet access in certain areas, lack of availability of computers or smartphones, or lack of access to digital skills training.

Usage divide: This refers to differences in how people use digital technologies, such as differences in the types of devices people use, how often they use them, and what they use them for.

Skills divide: This refers to differences in digital literacy and skills, such as the ability to use digital technologies effectively and safely, the ability to access and evaluate online information, and the ability to create and share digital content.

Content divide: This refers to differences in the availability and quality of digital content, such as differences in access to online educational resources, news and information, and cultural and entertainment content.

Economic divide: This refers to differences in the economic benefits and opportunities that digital technologies can provide, such as differences in access to online job opportunities, e-commerce, and digital financial services.

The AI Divide

Artificial Intelligence represents a potential additional divide, although the issues may sit under the divides above: access to AI solutions, the relevant skills and understanding to make appropriate use of AI, and the resources to do so.   Personally, I present AI as a new, additional divide rather than one contained in the above, due to what I see as the wide-ranging potential impact AI can have on the world as it is now.   In my area, that of education, I feel this is particularly relevant.  Aside from student access to technology, skills, etc., some schools will seek to explore the use of AI solutions, whereas in other cases there may be a drive to block, filter or control access.

Considering the divide that AI may create, I can see issues for those who do not have access or do not have the skills to use AI.   Those who do may become more efficient, using AI to carry out more mundane tasks or to provide a basic starting point for a task rather than having to start from scratch.   The likes of the 30mins challenge, which shows how much more might be possible through the use of AI tools, illustrates this nicely.   From a creativity point of view, AI might, as Dan Fitzpatrick has said, “democratise creativity”, meaning those who can and do use AI may have greater potential for creative output than those who do not or cannot use AI.   And those are but two areas where AI use and understanding may create a divide; I suspect there are many others.

Conclusion

We wish there to be equitable treatment for all; however, the ongoing discussion of digital divides highlights that, although progress may be being made, we aren’t there yet.   The increasing discussion and use of Artificial Intelligence adds yet another factor which can create a digital divide and therefore negatively impact equity.   We need to be conscious of this in the same way as we are conscious of the other challenges of AI, including bias, attribution, accuracy, etc.

Originality

Producing original content is a fundamental aspect of creating meaningful and valuable information for audiences across various mediums. In terms of assessment within schools, colleges and universities, students are expected to produce “original” work to evidence their learning.

So, what does it mean to produce original content? At its core, originality means creating something that is entirely your own. This could be a new idea, a fresh perspective on a familiar topic, or a unique approach to storytelling. Whatever the case may be, originality is about bringing something new and valuable to the table that hasn’t been seen before.

But let’s flip that premise: there are a limited number of words available, and these words are shared with all writers for all time, so as people continue to write, the probability of two people writing the same thing can only increase.   It’s a bit like buying a lottery ticket.   The more tickets you buy, and the longer the period over which you buy them, the more likely you are to hit the winning numbers.   And that analogy may fit in other ways, in that the probabilities of a winning lottery ticket and of an exact match of wording and phraseology may be similarly unlikely.    The longer the piece of writing, the less likely a match; for shorter pieces of text, the probability is greater.   But either way, it isn’t impossible!

Let’s step back for a moment and look at an academic concern: plagiarism. Plagiarism is the act of taking someone else’s work, ideas, or words and presenting them as your own. It’s a form of intellectual theft and can have serious consequences, including invalidating qualifications or exam results for students who are caught.   “Taking someone else’s work” and “presenting it as your own”: but if I read something, agree with it, and then present it as my viewpoint, haven’t I just taken someone else’s work and presented it as my own?    Does writing it in my own words make it original and my own contribution, and at what point?    How many words do I need to change before it becomes my original contribution as opposed to plagiarism?   I note that the plagiarism detection services I have used in the past present a plagiarism score which tries to quantify how similar a piece of work is to other pieces of student work on file.  And if I combine it with readings from other sources, is this better, or just plagiarising from a number of sources?     And what if I get AI to write the first draft of the content, then refine it?    Is this plagiarising from the multiple sources the AI used as training data, or simply plagiarising from the AI, or maybe it isn’t plagiarism at all? Considering artwork rather than writing, if I get an AI to produce a self-portrait of Van Gogh but painted in the style of Monet, who have I plagiarised?
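Plagiarism checkers don’t publish their exact algorithms, but the idea of a similarity score can be illustrated with a crude sketch: compare the overlapping word sequences (shingles) of two texts using Jaccard similarity. The function names and example texts below are purely illustrative, and real services are far more sophisticated than this.

```python
# A crude text-similarity score in the spirit of plagiarism checkers.
# Real services use far more sophisticated methods; this is only a sketch.

def shingles(text: str, n: int = 3) -> set:
    """Return the set of n-word sequences (shingles) in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a: str, b: str, n: int = 3) -> float:
    """Jaccard similarity of the two texts' shingle sets (0.0 to 1.0)."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

original = "the quick brown fox jumps over the lazy dog"
copied = "the quick brown fox jumps over a sleeping dog"
unrelated = "artificial intelligence may reshape school assessment"

print(round(similarity(original, copied), 2))     # high overlap
print(round(similarity(original, unrelated), 2))  # no overlap
```

Even this toy version highlights the problem discussed above: a paraphrase that changes every third word scores low, despite clearly borrowing the ideas.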

I don’t believe the concepts of originality and plagiarism, beyond the plagiarism of a cut-and-paste paragraph, were ever easy issues in schools, albeit we have treated them as easy in the past.   With AI the issue becomes that bit more complex and difficult to traverse.    We may present our students with the assessment and with a mark scheme, but do we need to start providing more discussion in relation to originality, and what acceptable use of AI platforms might look like?    I suppose the challenge here is whether we know what this might look like.

But a bigger question may be why we ask for these written assessments to be completed in the first place.   Is the written work a proxy for evidence of learning and understanding, one which is easier, and possibly more reliable, than actually having a discussion with each and every student to check their understanding?   And if we can no longer rely to the same extent on the piece of extended written work, do we need to move to more student/teacher discussions? If so, how will we address bias and the other factors impacting individual teacher assessment of students?

Conclusion?

I am not sure the above has presented any answers beyond some of my musings and more questions.    But for now that may be enough: to try and add to the discussion in relation to education and how it may look in the future, given that effective AI solutions are already available to our students.

References

Written with the help of ChatGPT (OpenAI)

Technology and Exam boards: Time to modernise?

I recently received a request from a teacher in relation to getting some software installed on their school device to support them in marking for an exam board.    Now, I know this isn’t part of their school role; however, having been a standards moderator in the past, I understand the benefits to schools and colleges of having markers or moderators within teaching departments.   I am therefore eager to enable staff by supporting such requests.   However, this request involved a piece of software which, according to the exam board, requires admin rights to the laptop, both for install and for the operation of the application.   When the concern regarding cyber security was raised, the exam board’s final reply was that the staff member should install the software on a personal rather than school laptop.   This got me thinking about how technology has changed but how exam boards have been slow to change.   This is all the more evident currently; just look at the advances in Large Language Models (LLMs) with ChatGPT over the last six months.

Traditionally, examination boards have relied on paper-based tests and manual grading systems. However, these methods have several drawbacks, including the potential for errors and delays in results processing.    One way examination boards could modernise is by moving towards computer-based testing. Computer-based testing allows for faster and more accurate grading, as well as the ability to customise exams to the specific needs of each student.  I very much believe that adaptive testing is the way forward, with this also enabling students to take exams in their own time, when they are ready, as opposed to at a set time with all other students.   Adaptive testing also supports students taking their tests anywhere, including at home, rather than having to be crammed into a large exam hall where the conditions themselves are not exactly designed for optimum student performance.    Additionally, the results would be available much quicker, reducing the stress associated with a long waiting period between the exams and the results being released.   There is also the potential benefit of a reduction in the amount of paper used in exams, the transporting of these papers, etc., which may help make the exam process more environmentally friendly.
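The adaptive element can be illustrated with a very simple sketch: serve a harder question after a correct answer and an easier one after a mistake. Real adaptive engines use statistical models such as item response theory; the difficulty scale and stepping rule below are invented purely to show the idea.

```python
# A minimal sketch of adaptive question selection: difficulty moves up
# after a correct answer and down after an incorrect one. Real adaptive
# tests use statistical models (item response theory); this only shows the idea.

def next_difficulty(current: int, correct: bool,
                    lowest: int = 1, highest: int = 10) -> int:
    """Step difficulty up or down one level, staying within bounds."""
    step = 1 if correct else -1
    return max(lowest, min(highest, current + step))

def run_test(answers: list, start: int = 5) -> list:
    """Return the sequence of difficulty levels served for a run of answers."""
    level, served = start, []
    for correct in answers:
        served.append(level)
        level = next_difficulty(level, correct)
    return served

# A student who answers correctly three times then slips once:
print(run_test([True, True, True, False]))
```

Because each student’s path is driven by their own answers, no two students need sit the same sequence of questions at the same time, which is what makes the “take it when you are ready” model possible.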

Another way examination boards can modernise is by utilising artificial intelligence (AI) in the grading process. This appears all the more relevant at the moment with developments in LLMs like ChatGPT.   AI-powered grading systems can quickly and accurately grade exams, allowing for quicker results processing and reducing the potential for errors. AI can also analyse student performance data to provide insights into areas where students may need additional support and guidance.   Now, I note here that the use of AI may introduce new errors to the marking process; however, I would suggest that the volume or magnitude of these errors, when compared with human-based marking, is likely to be lower.  It isn’t “the solution” to errors, but definitely a step in the right direction.

Related to the above, exam boards need to acknowledge the existence of AI and LLMs and the fact that they will become an increasing part of life, and therefore a tool which students will increasingly use in their studies, be it for revision, to help in developing critical thought, or for creating coursework or other learning content.   So far only the IB (International Baccalaureate) has really acknowledged ChatGPT and how it sees it impacting its courses, providing at least some steer for schools on what appropriate or inappropriate usage might look like, and at least some direction for schools and teachers in managing these new technologies.

Moreover, examination boards can use technology to improve exam security. Online proctoring tools can help ensure that students are taking exams in a secure and controlled environment, preventing cheating and other forms of academic dishonesty.    Related to this, I have seen exam boards continuing to send out resources on CDs or USB drives, or requesting student video or audio work using similar formats.   It is about time that they provided appropriate online portals to allow the quick, efficient and secure transfer of such exam and coursework data.  

Finally, examination boards can use technology to make their exams more accessible to students with disabilities or special needs. For example, screen readers, text-to-speech software, and other assistive technologies can help students with visual or hearing impairments to take exams on an equal footing with their peers.   This is already happening for a subset of students; however, I suspect boards will eventually need to acknowledge that all students are individuals with differing learning preferences, including in their device use and the online tools they use.  In classrooms, teachers support students using a range of tools and techniques, so it is only right to seek to support the same in the final exams which are, at least for now, viewed as so critical in a student’s formal education.   As such, examination boards will need to adapt.

Conclusion

Technology has the potential to revolutionize the examination process, making it more efficient, accurate, and accessible. Examination boards must embrace these technological advancements to ensure that their exams are of the highest quality and that students receive accurate and timely results. By doing so, they can help prepare the next generation of students for success in a rapidly changing digital world.   

And at a time when the pace of technology, particularly in relation to Artificial Intelligence solutions, has never been faster, the exam boards will need to significantly increase their agility and their ability to adapt to and embrace change.

AI and Learning Platforms

Software learning platforms which come complete with learning content for students to work through are not new.   I remember an online Maths programme from my days as a university student, studying to become a teacher back in the late 90s.   Basically, you worked through content and were then presented with different options as to how you progressed through the programme.    As a learner, the individual modules of content were pretty much fixed, having been written into the software, but the path through the wider programme of learning was up to me; I was provided with options as to how I progressed from one module to the next.   Now, I was never a great fan of this, as each module was presented in a given way and worked through examples in a given way, as it was programmed to do.  If you didn’t understand the way it was presented, there was no help or way to progress through that module, although you could move to further modules in the hope they would provide you with insight which might eventually get you past the issue.    I liked the idea of online programmes and self-paced learning, however I had concerns about user motivation, especially when you hit concepts which prove difficult to understand, and about the fixed nature of the content materials; a great teacher adjusts and customises their learning materials and approach to their class and the individual students within it.   As such, the self-paced learning aspect was a step forward, but that was about as far as it went.

Fast forward to more recently, and little progress had been made, at least as far as I saw it.   Newer learning platforms are capable of gathering much more diagnostic data and analytics, which allow the developers and content writers to adjust and improve their content.   So, the content is better than the content I experienced in the 90s, but generally it still provides largely linear and fixed material, and if the content, its style, etc. don’t match your needs, there is little that can be done.   And so, until very recently, I have had a largely negative view of learning platforms which come complete with the vendor’s own content which teachers cannot adjust or customise for their own context.   They have their place, for example supplementary to classroom teaching, or for self-paced learning when teachers are absent, but that was it.

That was until recently, when I saw a video of some new developments within the Khan Academy platform, including its new use of the GPT-4 Large Language Model (LLM).    The content, in terms of the problems set within the platform and the way they are worked through, still appears very linear and fixed.  So if it is maths problems, it will work through the problem in a specific way; no change there.   The difference, and the massive leap forward in terms of learning platforms, is their new chatbot-style assistant.   It prompts and supports the student using the platform.   It identifies common misconceptions and provides guidance.   It acts as a coach and facilitator, customising its responses to the efforts being made by the student, and this includes providing motivational “well dones” and corresponding emojis.    Watching the demo, it was almost as if there was a teacher sat behind the chatbot rather than an AI solution.    Now, I note that this demo was short and was for the purposes of showing off what is possible in the Khan Academy platform, so it may not be fully representative of how it all looks and feels in real life; however, if the final product is anything close to this, then it is a major shift forward.

Flipped learning is a concept long discussed as a way of releasing teachers from supporting students’ practice of learning concepts, but maybe AI solutions like GPT-4 and its use in Khan Academy will allow us to release teachers from more of the basic learning.    Maybe the AI and learning platform can be used here, allowing teachers to act more as facilitators rather than delivering new learning, and allowing them to focus much more on the higher-order skills of creativity, critical thinking and the like.

AI and large language models could potentially facilitate significant shifts in what learning in our schools and colleges looks like, not in the distant future, but in the very near future indeed.

ChatGPT and IT Services

I recently wrote an article for the ANME on ChatGPT and on its benefits but also its risks.   You can read this here.   My view is that AI models like ChatGPT are going to become all the more common and also more and more accurate, and therefore we need to explore them and identify how they might be positively used within education.   Seeking to block their use is, in my opinion, guaranteed to fail.

Following my post, I saw a reply on Twitter to the article with ChatGPT’s view on AI and education.  You can see this here.    It picked up a couple of points which I hadn’t included in my piece, and I note that some of my piece actually included content generated by ChatGPT itself.    It wasn’t obvious that ChatGPT had a hand in both pieces, which suggests it won’t be easy to identify where ChatGPT is used.

All this got me thinking about how ChatGPT might benefit IT Services and IT teams, particularly in schools.   As such, I gave some quick thought to possible use cases, which I have outlined below:

User guides and Help

ChatGPT can be used to create a knowledge base of information that can be easily accessed by IT staff and other school personnel, including simple user and help guides.  This seems like the most obvious and easiest use of ChatGPT; I have already tried asking it some questions in relation to iPad-related issues, and its responses were clear and accurate.

Creating software and other solutions

Where schools are creating their own internal software solutions, including website solutions, ChatGPT can help with the basic code building blocks, thereby speeding up development.   It will still require human input to finalise the projects and add that bit of creativity and flair; however, ChatGPT can get us part of the way there, saving time and resources.
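One lightweight way to make such code-generation requests consistent and reviewable is to wrap them in a reusable prompt template, so every request to the model states the task, language and constraints in the same shape. The function name, wording and fields below are purely illustrative, not part of any ChatGPT API.

```python
# A reusable prompt template for requesting code building blocks from an
# AI assistant. The wording and fields here are purely illustrative.

def code_prompt(task: str, language: str, constraints: list) -> str:
    """Build a consistent, reviewable prompt for a code-generation request."""
    lines = [
        f"Write a {language} building block for the following task:",
        task,
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    lines.append("Include comments, and note any assumptions you make.")
    return "\n".join(lines)

prompt = code_prompt(
    task="Summarise weekly helpdesk tickets into a short report.",
    language="Python",
    constraints=["standard library only", "no network access"],
)
print(prompt)
```

Keeping the prompt in code like this also leaves a record of exactly what was asked for, which helps when a human later reviews and finalises the generated code.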

Policies, processes and procedure documentation

Writing policy and process documentation can quite often be a long and laborious job but ChatGPT and other AI language models can quickly put together a basic document which human staff can then refine and customise to fit the school.

Chatbots

ChatGPT can be used to create a chatbot that can interact with students and staff, answering questions and providing information.   This therefore allows IT support staff to focus on more complex issues or more strategic tasks.
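The escalation pattern behind such a chatbot can be sketched very simply: answer what it can, and hand everything else to a human. The toy bot below matches keywords only; a real deployment would pass unmatched questions to a language model and draw on a proper knowledge base, and the FAQ entries here are invented for illustration.

```python
# A toy keyword-matched FAQ bot. A real deployment would pass unmatched
# questions to a language model and keep a proper knowledge base; this
# sketch only shows the answer-or-escalate pattern.

FAQ = {
    "password": "You can reset your password at the self-service portal.",
    "wifi": "Connect to the 'School' network and sign in with your username.",
    "printer": "Printers are listed under Devices; pick the nearest room code.",
}

def answer(question: str) -> str:
    """Return a canned answer if a keyword matches, else escalate to a human."""
    q = question.lower()
    for keyword, reply in FAQ.items():
        if keyword in q:
            return reply
    return "I'm not sure - passing this to the IT support team."

print(answer("How do I reset my password?"))
print(answer("My laptop screen is cracked"))
```

The important design choice is the fallback: the bot deflects the routine questions while anything it cannot handle still reaches the IT team, rather than leaving the user stuck.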

Language Translation

Where schools include non-English-speaking students, ChatGPT can be used to assist IT support staff in communicating with non-English-speaking students and families by providing translations in real time.

Process automation

A number of the above relate to process automation, where ChatGPT is used to automate common support tasks such as answering frequently asked questions, troubleshooting basic technical issues, and providing instructions for software and hardware.   There are likely other areas where simple processes can be automated through ChatGPT or other AI language models.
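A simple form of this automation is ticket triage: routing incoming requests to the right queue so routine items bypass the helpdesk inbox. The queues and keywords below are invented for illustration; an AI language model would classify far more flexibly than this keyword sketch.

```python
# A sketch of automated ticket triage: classify incoming requests into
# queues by keyword so routine items are routed automatically. The queue
# names and keywords are illustrative only.

ROUTES = [
    ("accounts", ["password", "login", "account"]),
    ("hardware", ["laptop", "screen", "keyboard", "printer"]),
    ("network", ["wifi", "internet", "connection"]),
]

def triage(ticket: str) -> str:
    """Return the first matching queue, or 'general' for human review."""
    text = ticket.lower()
    for queue, keywords in ROUTES:
        if any(k in text for k in keywords):
            return queue
    return "general"  # anything unmatched goes to a human-reviewed queue

tickets = [
    "Cannot login to my account",
    "The wifi keeps dropping in room 12",
    "Request for new software licence",
]
print([triage(t) for t in tickets])
```

As with the chatbot idea, the value is in the fallback queue: automation handles the obvious cases while ambiguous requests still land with a person.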

Conclusion

I think one of the key conclusions I arrive at is not related to the benefit of using ChatGPT, or other AI language models, in itself, but to the potential for ChatGPT and a human user to work together.   This hybrid approach of AI and human is, in my view, the way forward, as the two complement each other.  The AI solution can easily do the basic and repeatable parts of a task, such as creating a user guide, while the human can bring the flair and creativity to make such guides engaging, accessible and usable.    It isn’t a case of ChatGPT or humans, or of ChatGPT replacing humans.

I suspect there are many other applications of ChatGPT within an IT Support or IT Services capacity which are yet to be realised and I look forward to finding out more in terms of how AI Language Models can enable IT staff to deliver, enhance and even redefine the services provided to users in schools and colleges, and to the communities they serve.

These are interesting times!

It’s only Artificial Intelligence!

Meta released a chatbot for use in the US, with its responses based on internet data.   It wasn’t long before the chatbot was being less than positive about Meta’s CEO, Mark Zuckerberg.   Overall, a bit of a novelty, but it might also give us a little insight into the Artificial Intelligence and Machine Learning algorithms which underpin an increasing number of the services we use online.

It is highly unlikely that Meta specifically programmed their chatbot to suggest that the CEO did “a terrible job in testifying before congress”; however, this is the feedback it provided upon being asked “what did you think of mark Zuckerberg”.    This response is likely the result of the chatbot analysing data sources on the internet and identifying this response as the most likely to be true, or at least true in the perceptions of those sharing their thoughts online.    So here we see a few problems:

  1. As users and even developers, we will not necessarily be able to identify how the response was arrived at.   It’s a black box system; we can see the inputs and the outputs but not the process.    This should make us a little nervous as, especially for important decisions, it would be nice to understand how the answer an algorithm provides was arrived at.   Imagine an AI being used to assess mortgage applications; how would you feel if no one could explain why your application was refused?    From a user point of view, with a black box system there is also the danger that the service provider does have control over the algorithm and can therefore directly influence and control feedback to suit its own needs.  In this case the black box system provides a smokescreen for potentially unethical practices.
  2. The chatbot repeats what it sees to be true, or the commonly held belief, based on the data sources it accesses.  Bias could easily be introduced here through the internet sources which the chatbot is given access to, or through the queries it might use in identifying pertinent information.   We should be naturally questioning of a solution which may be inherently biased.   One example of this is the issues surrounding facial recognition, where AI systems were trained largely on white faces, reflecting the predominant skin colour among those developing the solutions.  As such we ended up with AIs which did a poorer job of facial recognition when presented with faces with darker skin tones.
  3. Again relating to the repetition of commonly held belief, the chatbot may simply act as an echo chamber for such beliefs, disregarding minority views.    And if a number of chatbots were used together, they might be able to powerfully shape the perceived truth on social media channels through repeated posting.

Some of the above is of concern, but then I start to think about the alternative: a human rather than an AI-based system.    Humans are not transparent in their thinking processes; although they might seek to explain how they arrived at a solution, we rely on subconscious influences and decision-making processes to which we have no access.    Humans, just like an AI-based system, may be biased or may seek to serve their own needs or the needs of their employer.    And humans also tend towards the like-minded, which creates the echo chambers mentioned above.    So maybe AI is no more problematic than a human-based solution.

Is the challenge, therefore, that AI is technology rather than a human being like us?   Is it maybe that this difference influences our feeling of unease or unhappiness with the risks mentioned above, and that we simply accept similar issues in human-based processes because, after all, we are “only human”?

AI in schools

I recently read an article discussing how AI might be used in schools from 2025 onwards.   This seems like a reasonably logical bit of future prediction, but on reflection I quickly came to identify some concerns.

Firstly, AI can cover a very broad range of activities.   Is it AI designed to interpret natural language, such as the way your Alexa can identify and then respond to your verbal queries, or are we talking about a more general AI solution, more akin to Commander Data in Star Trek?    There is quite a gulf between these two extremes, with the second likely to be some time off before it is achievable.

If we therefore accept we are looking at specific, focussed AI solutions in schools by 2025, I think they have clearly got the year wrong, as we are already doing it now, in 2022.    We have our spell checker and grammar checker in Word, and we now have transcription tools in Teams and PowerPoint, including the ability to offer real-time, or near real-time, translation of spoken content.  These are all AI, or machine learning, based solutions being used in schools and colleges, being used by teachers today.   Not three years away in 2025, but today.

So, the headline seems on initial inspection to be quite aspirational and inspirational: teachers using artificial intelligence in their classrooms in only three years’ time.   But take a more detailed look and we find it isn’t so inspirational, as we are pretty much already there.   Maybe the headline hints at greater use of AI, or more advanced AIs being used more often and to greater effect, but that’s not the way it comes across.   Maybe we will use more AI-based platforms, such as learning platforms which direct students through personalised learning programmes, although I have some concerns about this too.  Or maybe there will be greater use of AI and machine learning in the setting and marking of both summative and formative assessments.

I suspect AI use in schools will grow between now and 2025.    I suspect it will grow to be more common in general, so it won’t be a school-centric thing.   However, I suspect that a teacher will still be a teacher and the key to teaching and learning, and that the use of AI tools, like the current EdTech tools, will depend on skilled teachers wielding them as and when appropriate in crafting the best possible learning experience for their students.

AI and Bias

I recently saw an article in the Guardian regarding a call from an Artificial Intelligence expert to cease using AI in the UK due to concerns that the algorithms were “infected with biases” and couldn’t be trusted (McDonald, 2019).

I too have concerns in relation to bias in AI, particularly in relation to AIs as black box systems, where we are unable to ascertain how an AI might have arrived at a specific decision.    For example, the Guardian article references immigration-related applications of AI, so an AI might decide to approve or reject an immigration application based on the data available to it.    The danger here, in my view, is the potential lack of transparency in relation to the AI’s decision-making process.

Despite my concerns, however, I do not advocate banning AI use, as the alternative to using AI is to use human decision making, and human decision making is far from free of bias.   In Sway (2020), P. Agarwal states “we are all biased – to a certain degree”, going on to discuss human bias, and particularly unconscious bias, in detail.   Agarwal also states that “we cannot erase our biases completely” and, in relation to technology use, suggests that technology solutions, which therefore include AI, “incorporate the biases from the designers and data engineers” who design them.   As such, it doesn’t seem fair to hold AIs to a standard, that of being free of bias, which the human designers, users, etc. of such systems are themselves unable to achieve.

For me the critical issue is being aware of the bias which may exist and seeking to mitigate and manage the resultant risks.   We have to accept that bias is unavoidable: unavoidable in us humans, and also unavoidable in the systems and AIs we may create.    It is from this need for awareness that my concern regarding the potential lack of transparency arises.

References:

McDonald, H. 2019. AI expert calls for end to UK use of ‘racially biased’ algorithms. [Online]. [Accessed 27 December 2020]. Available from: https://www.theguardian.com/technology/2019/dec/12/ai-end-uk-use-racially-biased-algorithms-noel-sharkey

Agarwal, P (2020). Sway: Unravelling Unconscious Bias. United Kingdom: Bloomsbury Publishing.

Some thoughts on AI in education

A recent post in the TES got me thinking once again about AI in schools.   The post focused on parents’ fears about artificial intelligence use in schools, stating that 77% of parents expressed a concern over a lack of transparency.

Firstly, before I get into my views on AI, let me take some issue with the reporting and with the parental-perception part of the research.   Looking at the research, which you can find here, the question asked of parents focused on the “consequences of the use of AI”.   This feels a little negatively biased to start with.    Under this banner question a series of sub-questions were asked, with the participants asked to respond with either don’t know, fairly concerned/very concerned, or not very concerned/not at all concerned.  Again, the options hint towards negativity and therefore introduce bias.   And finally, the sub-question in relation to transparency, for example, focused on concerns relating to a “lack of transparency”: again a negative implication and further negative bias.     It is also worth noting that the survey had only 1,225 parents contributing, which I think falls very short of a sufficient sample to draw any meaningful and generalisable findings.   Despite all of the above, the TES decided to pick up and report the findings of “parents’ fear about artificial intelligence in schools”, including indicating an “overwhelming majority of parents are concerned”.   I find it somewhat funny that concern about potential bias in relation to AI was reported in an article itself so loaded with its own bias.

So to my views: I have concerns regarding AI use in schools, however I also see much potential.   Funnily enough, the Nesta report to which the TES referred concludes that AI in education “promises much to be excited about.”

Given the negative bias in the TES report, let's therefore start with my positive views as to the potential for AI in education.   AI is very good at identifying patterns, and divergence from patterns, within large data sets.   This makes it ideal for analysing the wealth of school and wider educational data which exists, to help educators, those responsible for educational policy and decision making, school leaders and even the teachers themselves.   Now thoughts may instantly jump to achievement data sets resulting from testing, final exams or teacher-awarded grading, however the opportunities far exceed this area.   Take for example data taken from school Wi-Fi, where students are allowed access, in relation to student movements around the school.   This data might help a school reorganise the school day or restructure the timetable in order to become more efficient and maximise the learning time available.   It might also be used to redesign learning spaces, or to develop spaces for students to rest, take a break and address their wellbeing.   This is but one example of how AI might be used along with school data.
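As a rough illustration of the Wi-Fi idea, the Python sketch below uses entirely made-up log records, period labels and zone names: it counts how many students each zone sees in each period, then flags each zone's busiest period, the sort of crude congestion signal that might feed into timetable or space-redesign decisions.

```python
from collections import Counter

# Hypothetical Wi-Fi association log: (student_id, period, zone).
log = [
    ("s1", "p1", "library"), ("s2", "p1", "library"), ("s3", "p1", "cafe"),
    ("s1", "p2", "cafe"),    ("s2", "p2", "cafe"),    ("s3", "p2", "cafe"),
    ("s1", "p3", "library"), ("s2", "p3", "cafe"),    ("s3", "p3", "cafe"),
]

# Count how many students each zone sees in each period.
occupancy = Counter((period, zone) for _, period, zone in log)

# For each zone, find its busiest period -- a crude congestion signal.
periods = {period for _, period, _ in log}
zones = {zone for _, _, zone in log}
busiest = {zone: max(periods, key=lambda p: occupancy[(p, zone)])
           for zone in zones}
```

With this toy data the cafe peaks in period two, which might prompt staggered breaks; a real system would of course work from the school's actual network logs rather than invented tuples.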

AI can help direct students to appropriate learning materials, using data to identify the areas where students need additional support along with the best support materials to meet these needs.   Some platforms already exist and are exploring this opportunity, including Century, a platform about which I heard very positive stories when recently speaking to students at a school using it.   Platforms like this might prove highly valuable additional resources to complement classroom teaching or to provide a more effective homework platform.   This area and use of AI is likely to continue to grow, with the development of more and more online learning content being key to this.

AI can help with teacher administrative tasks, such as registration conducted via facial recognition or the marking of tests by natural language AIs that can apply given marking criteria to student-submitted work.   We also need to recognise some of the AIs that are already available, including voice recognition and dictation, which is now a feature of the MS Office products.   Google's search facilities, now a standard feature used in schools and classrooms, also quietly use AI, yet we barely bat an eyelid at it.

The negative implications which exist in relation to AI generally apply beyond the educational context, albeit the fact that education concerns our future generations makes things all the more worrying.

AIs need to be taught and to learn, with this done using training data sets.   The worry is that bias in the training data set will result in bias in the AI's decision making.   As a result, an AI which was developed in the UK, and therefore trained using UK-based data, and used successfully in UK schools, may not be appropriate for use in schools in Asia or the Middle East due to its decision making being biased towards a UK context.   That said, this same issue would impact any product or service, or even individuals, where they seek to operate outside their normal context.   We all have inherent biases, and we humans create and train the AIs, so is it realistic to expect an AI without bias?   I suspect part of the issue is a concern in relation to a particular bias being introduced purposefully, however I think it is more likely that bias in AIs will arise accidentally, as it generally does within humans.
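To make the training-data worry concrete, here is a deliberately toy Python sketch; the contexts, subjects and counts are all invented.   A naive "model" that simply learns the majority choice from data dominated by one context ends up serving the minority context badly, with no malice required, just an unbalanced data set.

```python
from collections import Counter, defaultdict

# Hypothetical training data: (student_context, subject_chosen).
# UK records dominate, so anything fitted to the pooled data will
# largely reflect UK patterns.
training = ([("uk", "history")] * 80
            + [("uk", "maths")] * 15
            + [("intl", "maths")] * 5)

# A naive "model": ignore context and predict the overall majority class.
majority = Counter(subject for _, subject in training).most_common(1)[0][0]

# Per-context majorities show what each group's data actually says.
by_context = defaultdict(Counter)
for context, subject in training:
    by_context[context][subject] += 1
per_group = {c: counts.most_common(1)[0][0]
             for c, counts in by_context.items()}
```

The pooled model predicts "history" for everyone, despite every international record pointing at "maths". Real bias in AI is subtler than this, but the mechanism, skewed data in, skewed decisions out, is the same.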

There is a concern that AI decision making based on large data sets may become impossible for humans to explain or understand, as the decision making process could be based on huge amounts of data.   This brings with it the concern that we may lose some of our control.   If a teacher recommends a career track for a student they will be able to explain how they arrived at this, however if an AI was used, the teacher may be able to present the AI's findings but may be unable to explain or understand how these were arrived at.   How many parents would be happy with a suggested career path for their child without any explanation available?

Linked to the above is a concern of “determinism”, where AI might identify an end point and then, through its actions, lead to this occurring.   So those students identified as achieving a C grade in GCSE might be presented with content and learning materials which lead them to achieve exactly that.   This concern is again about a lack of control, however it could be suggested we are deterministic in some of the practices already in wide use in schools.   Take for example the setting of students into ability bands: is this not potentially deterministic, as the students in the top band get the most challenging content, which may enable them to achieve top grades, while the students in the lowest band get easier materials, which means they don't learn the more complex materials and as a result are unable to achieve the top grades?   Also, is there a danger of determinism every time a teacher reports a predicted grade to parents, or where a school uses ALIS or other benchmarking data?

Overall AI is going to find increasing uses in schools.   My gut feeling however is that for the foreseeable future this will be very much in a subtle way, as data analysis systems start to suggest areas to investigate within school data, accessibility tools including dictation and translation support students in class, and AI-driven learning platforms provide personalised learning opportunities beyond the classroom.   These are but a few examples of things already happening now, and these uses of AI are likely to become more common.   Discussion of AI reminds me of a quote in relation to effective technology integration being such that the teacher and learners don't even stop to think about the fact they are using tech; the tech use is transparent.   I think AI use is going to be exactly this, and the AI in Google's search goes some way to prove this: when was the last time, while conducting an online search, that you stopped to think about how Google search works and how AI may be involved?

Future Gazing: Artificial Intelligence (AI)

The phrase “future gazing” has come up recently, so I thought it worth sharing some thoughts on the future of EdTech as I see them.   As such, I intend to share a series of separate posts on different technologies which might have an impact on education in the years ahead.

Artificial Intelligence

This is a big topic in the wider IT world but also increasingly in education.   The challenge is that AI covers a multitude of sins, and the range of applications of the different AIs is substantial.

The holy grail of AI, as I see it, is the general-purpose AI.   I am not going to spend any real time in this area as it is, in my opinion, some way off.   When it does become a reality, there is great potential for it to be used in education to supplement teaching staff as a virtual teacher, a virtual classroom assistant, or a virtual coach or mentor.   As I said however, this is some years off.

More specific-purpose AIs are much more likely to make an appearance in the short term.   An example of this might be a Mathematics AI which students can pose questions to in natural language, and which will then either provide answers or direct students to appropriate learning materials.   This isn't that far off and is being used already on organisations' help pages.   It just hasn't thus far been focused on education.

Another application of AI might be in its ability to recognise the emotions and activities of students.   This is already in trial in China.   Basically, this involves a classroom camera and an AI which analyses the facial expressions of students along with what they are doing.   This information is then fed back to the teacher to inform learning.   The teacher will get information on the students who appear confused or upset, possibly indicating they are struggling with the materials, along with data on which pupils have been busy with the work, which have been raising their hands to ask questions or provide answers, and which have been more disengaged or not participating.   From this the teacher can then decide how to change the learning activities, target questions or revisit concepts.   I suspect this AI could also be expanded to look at teacher questioning and provide feedback and advice on the types of questions being asked, their frequency and who the questions are directed to.   It might also look at the engagement of students throughout the school day to try and identify trends, and develop a structure for the school day which is more in line with the physiological and psychological needs of students.

School data analysis is one area where I think AI is very close to being usable widely in schools.   Schools are already sitting on a wealth of data in terms of student academic data, student demographic data and pastoral data, among others.   AI or machine learning can easily analyse this data and identify patterns which humans may not be able to identify.   At a school level this can easily be applied to summative academic results, identifying how different student groups perform, allowing comparisons across subjects, etc.   However, as we gather more and more formative data, these AIs will then be able to feed back to teachers in relation to areas which students do or do not understand.   They will also be able to identify whether a pattern exists across different teachers, therefore suggesting a change to how a particular topic is taught, or whether it relates to a group of students or to specific related topics.
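The group-comparison idea can be sketched in a few lines of Python.   The subjects, groups, scores and threshold below are all invented for illustration: the function flags any subject where a student group's mean score diverges from the subject's overall mean by more than a set number of points.

```python
from statistics import mean

# Hypothetical summative results: (subject, student_group, score out of 100).
results = [
    ("maths", "A", 72), ("maths", "A", 68),
    ("maths", "B", 50), ("maths", "B", 52),
    ("english", "A", 70), ("english", "A", 74),
    ("english", "B", 71), ("english", "B", 69),
]

def group_gaps(records, threshold=8):
    """Flag (subject, group, gap) where a group's mean score diverges
    from the subject's overall mean by more than `threshold` points."""
    flagged = []
    for subject in {s for s, _, _ in records}:
        overall = mean(x for s, _, x in records if s == subject)
        for group in {g for s, g, _ in records if s == subject}:
            grp = mean(x for s, g, x in records
                       if s == subject and g == group)
            if abs(grp - overall) > threshold:
                flagged.append((subject, group, round(grp - overall, 1)))
    return flagged

print(sorted(group_gaps(results)))
# [('maths', 'A', 9.5), ('maths', 'B', -9.5)]
```

Here the mathematics gap between the two groups gets flagged while English does not; in a real school this might run over exam modules or formative assessment data, the point being only that the pattern-spotting is mechanical once the data is in one place.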

In the wider school there will also be opportunities for the use of AI.   In the dining hall, for example, AI might be able to examine data to identify possible lunch timings to improve efficiency.   Analysis of book titles taken from the library might help provide a window into pupil preferences and interests.   AI may have the ability to examine parents' evenings and parents' meetings to try and streamline these events and ensure everyone gets to see who they need to see with a minimal period of waiting.   Machine learning may be able to examine teacher performance management data and identify opportunities for peer support and peer learning to occur, or to identify cross-school professional development needs.   Facilities use might be analysed to identify when facilities are under-utilised, with a view to making them available to the local community.   Teacher work days might be optimised through AI recommendations resulting from an analysis of our working habits, looking at when we tend to send emails, our timetables, who we commonly meet with, etc.   These are just some of the ways in which AI may make its way into our schools.

Artificial Intelligence is going to make an increasing appearance in schools; I think this is inevitable.   In actual fact, I would say that to some extent AI or machine learning is already in schools, possibly in the school's firewall or mail filtering solutions, or in the network infrastructure.   Going forward however it will become much more visible as it enters more areas of school life.   Or maybe, like all good technology use, it will become more common yet transparent in its use, with users unaware of where AI is providing help, support and guidance.

I think the general-purpose AI, the Data of Star Trek TNG or the HAL of 2001: A Space Odyssey, is some way off.   In the first instance AI will provide hints and tips as well as other low-level recommendations or suggestions.   It is to this, and the possible productivity and efficiency gains that may result, that we should therefore first look.