A recent post in the TES got me thinking once again about AI in schools. The post focused on parents’ fears about artificial intelligence use in schools, stating that 77% of parents expressed concern over a lack of transparency.
Before I get into my views on AI, let me first take issue with the reporting and with the parental-perception part of the research. Looking at the research, which you can find here, the question asked of parents focused on the “consequences of the use of AI”. This feels a little negatively biased to start with. Under this banner question a series of sub-questions were asked, with participants asked to respond with either don’t know, fairly concerned/very concerned, or not very concerned/not at all concerned. Again the options hint towards negativity and therefore introduce bias. And finally the sub-question itself, in relation to transparency for example, focused on concerns relating to a “lack of transparency”: again a negative implication and further negative bias. It is also worth noting that the survey had only 1,225 parents contributing, which I think falls very short of a sufficient sample from which to draw any meaningful and generalisable findings. Despite all of the above, the TES decided to pick up and report the findings of “parents’ fear about artificial intelligence in schools”, including indicating that an “overwhelming majority of parents are concerned”. I find it somewhat ironic that concern about potential bias in relation to AI was reported in an article itself so loaded with its own bias.
So to my views: I have concerns regarding AI use in schools, but I also see much potential. Funnily enough, the Nesta report to which the TES referred concludes that AI in education “promises much to be excited about.”
Given the negative bias in the TES report, let’s therefore start with my positive views as to the potential for AI in education. AI is very good at identifying patterns, and divergence from patterns, within large data sets. This makes it ideal for analysing the wealth of school and wider educational data which exists, helping educators, those responsible for educational policy and decision making, school leaders and even teachers themselves. Thoughts may instantly jump to achievement data sets resulting from testing, final exams or teacher-awarded grading; however, the opportunities far exceed this area. Take for example data from school Wi-Fi, where students are allowed access, relating to student movements around the school. This data might help a school reorganise the school day or restructure the timetable in order to become more efficient and maximise the learning time available. It might also be used to redesign learning spaces or develop spaces for students to rest, take a break and address their wellbeing. This is but one example of how AI might be used along with school data.
AI can help direct students to appropriate learning materials, using data to identify the areas where students need additional support along with the best support materials to meet these needs. Some platforms already exist and are exploring this opportunity, including Century, a platform about which I heard very positive stories when recently speaking to students at a school using it. Platforms like this might prove highly valuable additional resources to complement classroom teaching or to provide a more effective homework platform. This use of AI is likely to continue to grow, with the development of more and more online learning content being key.
AI can help with teacher administrative tasks, such as registration conducted via facial recognition, or the marking of tests by natural-language AIs that can apply given marking criteria to student-submitted work. We also need to recognise some of the AIs that are already available, including voice recognition and dictation, which is now a feature of the MS Office products. Google’s search facilities, now a standard feature used in schools and classrooms, also quietly use AI, yet we barely bat an eyelid at it.
The negative implications of AI generally apply beyond the educational context, albeit that education’s role in teaching our future generations makes things all the more worrying.
AIs need to be taught, and they learn using training data sets. The worry is that bias in the training data set will result in bias in the AI’s decision making. As a result, an AI which was developed in the UK, and therefore trained using UK-based data, and used successfully in UK schools may not be appropriate for use in schools in Asia or the Middle East, its decision making being biased towards a UK context. That said, the same issue would affect any product or service, or even individuals, where they operate outside their normal context. We all have inherent biases, and we humans create and train the AIs, so is it realistic to expect an AI without bias? I suspect part of the issue is a concern that a particular bias might be introduced purposefully; however, I think it is more likely that bias in AIs will arise accidentally, as it generally does within humans.
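A toy sketch, using entirely hypothetical data, shows how directly training-set bias flows through to decisions: a naive "predictor" that simply learns average outcomes from its training records will reproduce whatever skew those records contain.

```python
# A minimal sketch (invented data): a "model" that learns the average
# outcome per category from its training set. Any skew in the training
# data flows straight through to its predictions — no malicious intent
# required, just an unrepresentative sample.

from collections import defaultdict

def train(records):
    """records: list of (category, outcome) pairs; returns per-category means."""
    sums = defaultdict(lambda: [0, 0])
    for cat, outcome in records:
        sums[cat][0] += outcome
        sums[cat][1] += 1
    return {cat: total / n for cat, (total, n) in sums.items()}

# Hypothetical training data from one context, where students in a
# lunchtime club happened to score higher:
context_a_data = [("club", 80), ("club", 85), ("no_club", 60), ("no_club", 65)]
model = train(context_a_data)
print(model)  # the model now "believes" club membership means higher scores
```

Apply this model in a context where club membership carries no such association and its predictions will still reflect the original skew; the same dynamic, at far greater scale and subtlety, is what makes transplanting an AI between contexts risky.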
There is a concern that AI decision making based on large data sets may become impossible for humans to explain or understand, and with it the concern that we may lose some of our control. If a teacher recommends a career track for a student, they will be able to explain how they arrived at it; if an AI was used, the teacher may be able to present the AI’s findings but be unable to explain or understand how these were arrived at. How many parents would be happy with a suggested career path for their child without any explanation available?
Linked to the above is a concern of “determinism”, where AI might identify an end point and then, through its actions, lead to this occurring. So students identified as achieving a C grade in GCSE might be presented with content and learning materials which lead them to achieve exactly that. This concern is again about a lack of control; however, it could be suggested that we are already deterministic in some practices widely used in schools. Take for example the setting of students into ability bands: is this not potentially deterministic, when students in the top band get the most challenging content, which may enable them to achieve top grades, while students in the lowest band get easier materials, meaning they don’t learn the more complex material and as a result are unable to achieve the top grades? And is there a danger of determinism every time a teacher reports a predicted grade to parents, or where a school uses ALIS or other benchmarking data?
Overall, AI is going to find increasing uses in schools. My gut feeling, however, is that for the foreseeable future this will be in a subtle way, as data analysis systems start to suggest areas to investigate within school data, accessibility tools including dictation and translation support students in class, and AI-driven learning platforms provide personalised learning opportunities beyond the classroom. These are but a few examples of things already happening now, and such uses of AI are likely to become more common. Discussion of AI reminds me of a quote about effective technology integration being such that the teacher and learners don’t even stop to think about the fact they are using tech: the tech use is transparent. I think AI use is going to be exactly this, and the AI in Google’s search goes some way to proving it. When was the last time, while conducting an online search, that you stopped to think about how Google search works and how AI may be involved?