Review of 2022/23 in photos

As another academic year begins, I thought I would take a quick look back over the photos I have taken throughout 2022/23 to see what highlights I might pick out. The image below shows some of the highlights:

August 2022 saw me having a family holiday abroad, which was a pleasant way to relax and prepare for the year to come. Following the usual busy first half of the autumn term I found myself visiting Meta's London offices for an online safety event, the first time I had ever visited their offices, before travelling up to Birmingham for the Schools and Academies Show, where Abid Patel presented me with an Irn Bru Xtra just as my supplies of the Bru were running low. Timing is everything! It was a busy couple of days and a lot of travelling, but worthwhile in the end. Later that month I led the South West ANME meeting; I think this was the first ANME meeting I had led. It was enjoyable to contribute to the discussion and to share with other schools from across the South West. It would be nice to see more schools involved, however the geography of the South West makes this challenging.

January saw myself and Ian Stockbridge begin our In Our Humble Opinion (IOHO) podcast after over a year of discussion without getting anything off the ground. Having started the podcast, the Microsoft event in Reading proved an ideal opportunity for Ian to sport his IOHO branded T-shirt. March saw me in London for the BETT event, while also using the opportunity to take a day off in London, including a quick visit to Madame Tussauds for my selfie with a Stormtrooper. May saw a trip up to Leeds to present at an Elementary Technology event alongside Kalam from British Esports, discussing esports and schools. A great event, albeit my journey up to Leeds wasn't short of my usual challenges, with significant train delays. I was then involved in a similar esports session, this time with Tom from British Esports, along with a cyber resiliency session at the ISC Digital event in June. It was great to present, but also to be involved in the organising of an ISC Digital conference, especially given the extended delay between the previous ISC event and this one. Here's hoping that the ISC event once again returns as an annual event.

The academic year ended with the 2nd LGfL event in London and a good opportunity to catch up with some of the ANME team, among many others. Then, as the holiday period began, I took a trip with my wife to London for a few days of relaxing and exploring, including engaging in a bit of Morph hunting. I will admit to finding wandering around London with no plan other than to amble around and have a few drinks very relaxing. The weather was also surprisingly nice, which makes all the difference.

To be honest, the photos above are only a small number of highlights representing a busy academic year. Here's to 2023/24: new challenges, new opportunities and another positive academic year. I wonder what photos I will have to look back on a year from now?

KCSiE: Filtering and Monitoring

I was recently reviewing the new Keeping Children Safe in Education (KCSiE) update, including the main changes which relate to filtering and monitoring. I noted the specific reference to the need to “regularly review their effectiveness” and also the reference to the DfE's Digital Standards in relation to filtering and monitoring, where it mentions that “Checks should be undertaken from both a safeguarding and IT perspective.”

The safeguarding perspective

From a safeguarding point of view, I suspect the key consideration is whether filtering and monitoring, and the associated processes, keep students safe online. So, are the relevant websites or categories blocked, and do relevant staff get alerts and reports which help in identifying unsafe online behaviours at an early stage, whether this is attempting to access blocked sites, or accessing sites which are accessible but considered a risk or indicator, and therefore specifically monitored and reported on?

From a safeguarding perspective it is very much about the processes and how we find out about students accessing content which may be of concern, or attempting to access blocked content. From here it is about what happens next, and whether the holistic process from identification via filtering and monitoring, through reporting, to responding is effective. Are our processes effective?

The IT perspective

From an IT perspective, in my view, it is simply a case of whether the filtering and monitoring works. Now, I note here that no filtering and monitoring solution is foolproof, so I believe it is important to acknowledge that there are unknown risks, including new technologies to bypass filtering, use of bring your own network (BYON), etc. Who would have thought a year ago about the risk of AI solutions being used to create inappropriate content or to allow students to bypass filtering solutions?

Having acknowledged that no solution is perfect, we then get to testing whether our solution works. One tool I have used for this is the checking service from SWGfL, which can be accessed here. It checks against four basic areas to see if filtering is working as it should.

I however wanted to go a little further. To do this I gathered a list of sites which I deemed appropriate for filtering, gathering sites for each of the various categories we had considered. I then put together a simple Python script which would attempt to access each site in turn, before outputting whether it was successful or not to a CSV file for review. The idea was that this script could be executed for different users and on different devices, e.g. on school classroom computers, on school mobile devices, for different student year groups, etc. The resultant response, if it matches our expectations for what should be allowed or blocked, allows us to evidence checking of filtering from an IT perspective, plus allows us to identify where there might be any issues and seek to address them.

You can see the simple script below, where it tests for social media site access; you can simply add further URLs to the list to test them:


import csv
import requests

# URLs which we expect the filtering solution to block
website_urls = [
    "https://www.facebook.com",
    "https://www.twitter.com",
    "https://www.linkedin.com",
]

with open("TestResults.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["URL", "Result"])
    for url in website_urls:
        try:
            response = requests.head(url, timeout=10)
            # Only a successful response counts as accessible; other
            # status codes are recorded for review
            if response.ok:
                print(url + " - Accessible")
                writer.writerow([url, "Accessible"])
            else:
                print(url + " - Status " + str(response.status_code))
                writer.writerow([url, "Status " + str(response.status_code)])
        except requests.RequestException:
            # Connection refused or dropped - assume the filter blocked it
            print(url + " - Site blocked!")
            writer.writerow([url, "Site blocked!"])


Now, the above may need to be changed depending on how your filtering solution works. I did consider looking at the URL for our blocked page, however as the above worked I didn't have to. My approach focused on the return codes, however if you do need to work with an error page URL, I suspect this article may be of some help.
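If your filtering solution serves up a block page rather than dropping the connection, a variant along the following lines might help. This is only a sketch using the standard library, and the block-page address shown is purely an assumption; you would need to replace it with whatever URL your own filter actually redirects blocked requests to:

```python
import urllib.request
import urllib.error

# Hypothetical block-page address - an assumption; replace with the URL
# your own filtering solution redirects blocked requests to.
BLOCK_PAGE = "https://filter.example.school/blocked"

def classify(final_url, block_page=BLOCK_PAGE):
    """Classify the URL a request ended up at after any redirects."""
    return "Site blocked!" if final_url.startswith(block_page) else "Accessible"

def check(url):
    """Request a site and report whether the filter redirected it."""
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            # geturl() gives the final URL after following redirects
            return classify(response.geturl())
    except (urllib.error.URLError, OSError):
        # Connection dropped entirely - also treat as blocked
        return "Site blocked!"
```

The same per-URL loop and CSV output as the script above would then sit around `check`.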

Conclusion

Before I used the script for the first time I made sure the DSL was aware; I didn't want to cause panic when a test student account seemed to be hitting lots of inappropriate content over a short period of time, and in sequential order. The script then provided me with an easy way to check that what I thought was blocked was being blocked as expected. As it turned out, there were a few anomalies, some relating to settings changes and others to changes to websites and mis-categorisation. As such, the script proved to be a little more useful than I had initially expected, as I had assumed that things worked as I believed they did.

The script could also be used to test monitoring, by hitting monitored websites and checking to see if the relevant alerts or reported log records are created.  
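For that monitoring check, a small variation could record a timestamp against each visit, so the visits can later be cross-checked against the alerts or log records the monitoring solution produces. Again this is only a sketch using the standard library; the file name and delay are arbitrary choices:

```python
import csv
import datetime
import time
import urllib.request

def log_row(url, outcome, when=None):
    """Build a CSV row recording when a monitored site was visited."""
    when = when or datetime.datetime.now()
    return [when.strftime("%Y-%m-%d %H:%M:%S"), url, outcome]

def visit_monitored_sites(urls, logfile="MonitoringTest.csv", delay=5):
    """Visit each monitored URL in turn, logging timestamps so the visits
    can be matched against the monitoring solution's alerts or logs."""
    with open(logfile, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Timestamp", "URL", "Outcome"])
        for url in urls:
            try:
                with urllib.request.urlopen(url, timeout=10):
                    pass
                writer.writerow(log_row(url, "Visited"))
            except OSError:
                writer.writerow(log_row(url, "Blocked/unreachable"))
            # Space the visits out so individual alerts are distinguishable
            time.sleep(delay)
```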

Hopefully the above is helpful in providing some additional evidence from an IT perspective as to whether filtering and monitoring works as it should.

AI in Education

The other day saw me attend a meeting at the Elementary Technology offices in Leeds, meeting with a number of EdTech legends (and me!) to plan an artificial intelligence (AI) conference event due to occur in October. The planning event was a brilliant opportunity to discuss all things AI and education, with some excellent and varied discussions occurring across two days.

In thinking about my personal use of AI it became clear to me that my own use is still short of what is possible, where there is such potential for me to make greater use of generative AI solutions in a way that will improve my productivity, my creativity and also hopefully my wellbeing through gains in efficiency.  

As I sat on the train on the way home typing this, I considered how I might make better use of AI. Now, I could use it to help me write this post, however this post is very much a personal reflection, where AI can't really help, although I may be able to use AI to adjust and improve the post after initially drafting it. I could also use it to create some interesting images of me in different locations or situations, which, although fun to do, is unlikely to enhance my work day significantly. So, what can AI help me with, and how might I create situations where it is easier or more convenient for me to make use of AI?

In drafting emails, policies, reports or other documents, I suspect generative AI can certainly help. There is also potential in the creation of presentations, with Darren White demonstrating the impressive functionality in Canva in relation to creating both content and design within a presentation. I suspect I may use this in preparing for some of the talks I am due to give in the year ahead.

The key to achieving these benefits, though, is making it easier for me to use AI solutions at the point I need them. My solution to this is to include ChatGPT and Bard, along with some other AI tools, within my “normal day” collection in MS Edge, so that they are instantly opened when I begin my work day, ready to use as and when needed. I also need to spend a bit of time investigating AI powered plug-ins which can put the functionality right in the browser, ready to access.

The potential for AI is significant and the two days of discussion were definitely useful.   I now look forward to the actual conference event on the 3rd of October and to sharing thoughts and ideas with a variety of colleagues in UK schools/colleges and beyond.   

2023 Exam Results: A prediction

And so exam results day once again approaches, and I would like to share a psychic prediction: that the newspapers will be filled with headlines about how A-Level results have fallen when compared with last year.

Ok, so it isn't so much psychic as based on what we know about the UK exams system. We know that each year the grade boundaries are adjusted, and that the trend pre-pandemic was for grades generally to increase year on year. The ever-increasing grades weren't necessarily the result of improving educational standards or brighter students, although both of these may or may not be the case; they were the result of a decision taken when setting grade boundaries. With the student exam scores available, the setting of the grade boundaries decided how many students would get an A*, an A, etc. and therefore the headline results. It's a bit like the old goal seek lessons I used to teach in relation to spreadsheets. Using Excel, I could ask it what input values I would need to provide in order to attain a given result. So, looking at exam results, what grade boundaries would I need to set in order to maintain the ever-increasing grades while also avoiding it looking like grade inflation or other manipulation of the results? Now, I note that in generally increasing grades across all subjects, some subjects showed more improvement than others, with some showing dips, but summed across all subjects the results tended to show improvement year on year.
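As a toy illustration of that goal-seek idea, with entirely invented numbers (real awarding is of course far more involved): given a set of raw marks, we can ask what boundary mark would hand a chosen grade to a chosen share of students.

```python
def boundary_for_target(scores, target_fraction):
    """Return the lowest mark such that roughly target_fraction of
    students score at or above it - goal seek on the boundary."""
    ranked = sorted(scores, reverse=True)
    cutoff_index = max(0, round(len(ranked) * target_fraction) - 1)
    return ranked[cutoff_index]

# A fictional cohort of ten raw marks
scores = [38, 45, 52, 57, 61, 64, 68, 71, 77, 83]

# Award an A to the top 30% of the cohort
print(boundary_for_target(scores, 0.3))  # prints 71
```

Move the target up or down and the boundary, and hence the headline results, moves with it; the marks themselves never change.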

And then we hit the pandemic, teacher assessed grades, and the outcry about how an algorithm was adjusting teacher awarded grades into the final grades students achieved. Students and parents were rightly outraged, and this system of adjustment was dropped. But how is this much different from the adjustment of the grade boundaries as mentioned above? The answer is quite simply that teachers, and often students and parents, were aware of the teacher assessed grades and therefore could quantifiably see the adjustment when compared against the awarded grade. When looking at the pre-pandemic exams, teachers, students and parents didn't have visibility of what a student's grade might have been before adjustments were made to the grade boundaries. They simply saw the adjusted score and adjusted final grade. Now, I note that a large part of the outrage related to how the grade adjustment appeared to impact some schools, areas or other demographics of students more than others, however I would suggest this is also the case when the grade boundaries are set or adjusted, albeit the impact is less obvious, transparent or well known.

So, we now head into the exam results following the period of teacher assessed grades, with students back doing in-person exams. Looking at this at an exam board level, and reading the press as it was after the 2022 exam results, we know that a larger than normal increase was reported over the teacher assessed grade years, with this being put down to teacher assessed grades versus the normal terminal exams. As such, I would predict that the grade boundaries will be set in such a way as to make a correction. I predict the boundaries will therefore be set to push exam results downwards, although it is unclear by how much. It may be that the results are reduced slightly to avoid too much negative press, or it may be that a more significant correction is enforced, on the basis that it can be easily explained by the previous teacher assessed grades, plus the lack of proper exam experience among the students who sat their A-Level exams this time; remember, these students missed out on GCSE exams due to the pandemic.

Conclusion

My prediction is that the exam results stats will be lower than last year, not necessarily because students did worse, but because of a decision that the results should be lower given last year's apparently more generous results, plus the fact that these particular students have less exam experience than those in previous, pre-pandemic years. I suspect my prediction is all but guaranteed, but an interesting question from all of this has to be: is this system fair? I believe the answer is no, although I am not sure I can currently identify a necessarily fairer system. But I think in seeking a better system, the first step is to acknowledge that the current system isn't necessarily fair.

And one final thought, to those students getting their results: all I can simply say is very well done! This was the culmination of years' worth of study and effort, during a period of great upheaval the world over, unlike anything in my or your history to date. No matter the grades, you did well to get through it. The grades, no matter what they are, do not define you; your effort, your resilience and what you decide to do next, your journey, is what really matters. Well done and all the very best for the future!!

AI, bias and education

Lots has been written about the risks and challenges in relation to artificial intelligence solutions, including the risk of bias. There hasn't been so much written that specifically explores these risks in relation to the use of artificial intelligence solutions within education. As such, I would like to share some thoughts on this, starting specifically with the risk of bias and how this might impact education, teachers and students.

Bias in AI systems

AI systems are generally provided with training data which is then used by the system in generating its output. The quality of this training data therefore has a significant impact on the usefulness of the resulting AI solution. If we provide the system with biased training data, such as an unrepresentative amount of training data relating to a specific event, group or other category, this will result in biased output. An easy example of this relates to the poor ability of AI-based facial recognition systems to identify people of colour. This likely relates to the fact that these solutions were created by largely western white individuals, who therefore used training data which had an unrepresentative number of western white faces. The challenge, however, is that humans tend to be biased, albeit often subconsciously, so it is almost guaranteed that some bias will be intrinsic in the training data provided, and that this bias may be difficult for us to identify.

So what might the impact be in relation to education?

Recommendation Systems

One of the areas where AI has been used for some time is in recommendation systems, such as Google Search or the “you might like” suggestions on shopping sites like Amazon. We will likely see similar systems in education which will recommend subjects or topics for students to study, or may even recommend future study paths from secondary into FE and then onwards into HE. But what if these solutions include bias? I would suspect a gender bias would be the most likely to occur in the first instance, as the AI solution tries to mirror the real-world training data it will have been provided, where the real world itself continues to be biased, advantaging males over females. This would also cause a significant problem in relation to how AI systems might respond to individuals who identify as non-binary, given there would be little training data relating to non-binary individuals. What suggestions would it provide when the vast majority of the data it has relates to males or females only?

Learning Systems

Expanding on recommendation systems, we will also have learning systems which gather data on students as they interact with learning material, providing real-time feedback and support, plus guiding students through learning materials specifically selected to meet the needs of the individual student. It will not be obvious how these systems arrive at their output, however this output might include selecting content based on its difficulty or challenge level, or providing support and advice based on identified needs. What if there is bias in the training data which leads the AI to tend towards providing overly difficult or overly easy content to a specific subset of users? Note, this subset of users could be as simple as a gender, users in a specific location, or an ethnicity, however it is more likely to be a complex categorisation that we may not fully understand. The key issue here is that some students would be receiving more or less challenging learning content, or more or less support or advice, as a result of biased decision making within the artificial intelligence solution. How might this impact students, their learning and their achievement?

Academic stagnation

Again, building on the above, we need to recognise that AI solutions are probability based. They use the training data provided and then use probability-based decision making to identify their outputs and actions. This use of probability means that outputs and decisions tend towards the average and the statistically most likely. In terms of education, this might mean that AI solutions will tend to reinforce the average, so students in a school whose previous students have historically performed below the national average may be supported by AI solutions to achieve similar results, the historical average for the school, even where the ability of the individual student, or even of a given year group, is above the national average. Looked at broadly, across all education the world over, AI used in teaching and learning may tend to focus on a global average, which may disadvantage those who are capable of more than this. It may lead towards more equitable access to education, but it may also lead to stagnation as all educational efforts tend towards an average.

Divergence

We touched briefly on this earlier, but it also relates to stagnation and a tendency towards the average. AI solutions are provided with training data and make decisions based on this, so there is a tendency towards an average; but what if students diverge from this average? The lack of data specifically relating to these individuals will mean the AI will tend towards the probable, providing advice or directing students according to how the “average” student might perform, which may be inappropriate for these divergent students. Consider an AI-based learning platform selecting content and providing advice based on the “average” student, but where the student using the system is neuro-divergent. Is the content and advice likely to be appropriate for these students? What might the impact be on the student, on their learning, and on their mental health, where they are presented with inappropriate learning pathways, support and advice?

Reinforcing Bias

Where AI solutions are generating learning content themselves based on individual students' needs, we also need to be conscious of how this might result in the reinforcement of stereotypes and bias. What if the AI solution has to create an image of a criminal, a nurse, a childminder or a lawyer? Is there the potential for the images the AI presents to reinforce gender, ethnic or other biases which already exist, and which are therefore highly likely to exist in the training data?

Conclusion

Based on the above, it is clearly right to consider these risks. We need to be conscious of them so that we can try to mitigate against them, by carefully reviewing the training data being used and by ongoing review of AI performance. We also need to consider that in some circumstances it may be necessary to have separate AI solutions, with separate training data, for use in certain situations. Although these risks need to be considered, we also need to remember that in the absence of AI solutions in education, it has been humans who have made these decisions. And humans aren't devoid of bias; we just happen to be largely unconscious of it. It is easier to identify bias, or other incorrect or irrational behaviours, in others, including in AI systems, than it is to identify it in ourselves. We therefore need to be careful to avoid holding AI to standards that we ourselves have never been able to meet.

I wonder whether in seeking to address bias in AI solutions the first thing we may need to do is step back and acknowledge the extent of our own human bias both individually and collectively.

Defining AI

This week I want to continue the discussion of Artificial Intelligence, posing the difficult question of what AI, by definition, actually is.  

The artificial element of artificial intelligence is reasonably clear, in that the intelligence is artificially rather than biologically created. Programmers were involved in developing software code, thereby creating the artificial intelligence solution. AI doesn't arise out of biological evolutionary processes, although it might be possible to suggest that the ongoing development of AI solutions is itself evolutionary.

But what about “intelligence”?  

What is intelligence? There are differing definitions. A Google search yields a definition from Oxford Languages which refers to “the ability to acquire and apply knowledge and skills”. It would appear clear that an AI solution can acquire knowledge in the form of the data it ingests and the statistical processing which allows it to infer new knowledge. We have also seen robotic AI solutions which have learned physical skills, like the ability to walk. So, from this definition, it appears that these solutions may show intelligence. That said, does the AI comprehend the meaning of the text it outputs in response to a prompt? Does it feel a sense of success, and are feelings and emotions a part of intelligence? And does it “acquire” this knowledge, or is it simply fed it by its designers and users? Does it choose what to acquire and what outcomes it wants, or does it only do as it's programmed?

Evolutionary intelligence?

Another definition of intelligence, which has a more evolutionary bias, states that “Intelligence can be defined as the ability to solve complex problems or make decisions with outcomes benefiting the actor”. This links to Darwinism and survival of the fittest, in the benefit towards the actor. It may be that current AI solutions can solve complex problems, such as the identification of patterns and anomalies in huge data sets, however it is also possible to evidence where AI solutions fail at simple tasks we humans find easy, such as object recognition and spatial awareness. As to the actions of the AI benefiting the actor, if we assume the actor is the AI itself, I am not sure we can evidence this. How does the AI benefit from completing the task it is set? I suppose we could argue that the AI is completing a task for a user, and that the user is the actor receiving benefit; or we could suggest that by benefiting the user, the AI as actor is more likely to continue to see use and development, which could be considered an act of self-preservation. But is the AI conscious of benefit? Does it even need to be conscious of benefit? Is it conscious of a need for self-preservation? But then again, are we humans conscious of our own need for self-preservation, or of the personal gains which may motivate us towards seemingly selfless acts?

Mimicry

The issue here for me is that I am not sure we are clear on what we mean by artificial intelligence, in that the term intelligence is unclear and may mean different things to different people. I suspect the term AI is adopted because AI solutions are able to mimic average human behaviours, such as being able to respond to an email based on its content, analyse data and suggest findings, or create a piece of artwork based on the work of others. We just substitute “mimic some human behaviours” for “intelligence”. In each case the AI solution may be quicker than we humans, or may produce better outputs, based on the averaging of all the training data the AI has been exposed to. In each case, and due to the training data, the outputs may be subject to inaccuracy and bias; and maybe this supports the use of the term intelligence, in the inaccuracy and bias we display as humans being so clearly mimicked by the AI we create.

A task focus

Looking at the definition of “artificial intelligence” in its entirety, Oxford Reference refers to “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages”. This definition, of tasks normally requiring human intelligence, seems to fit, given that without AI it is for humans to respond to emails, create artwork, etc. So maybe AI is simply a system which can mimic humans in terms of its ability to complete a given task and produce a favourable output.

Conclusion

I think it is important we acknowledge the vagueness of AI as a term. But then again, AI is simply one subset of the different types of intelligence, which span the biologically developed as well as the human created. And if we struggle to create a consistently adopted definition of intelligence, it is of little surprise that our definition of AI is no less vague. But maybe this is all semantics, and the focus should simply be on developing solutions which can carry out tasks previously limited to humans, and by extension human intelligence.

Considering human intelligence one last time, we need to remember that a child may show intelligence in speaking its first words, or learning to stand, meanwhile an adult explaining chaos theory or performing an orchestral piece will also be showing intelligence.    That’s a fairly large range of intelligences.    And it is likely with AI, the range of intelligences will be equally broad with our current AI solutions, including generative AI, being near the infant side of the continuum.

Before finishing, I also need to raise the challenge in relation to mimicry of human efforts to complete tasks, where AI may mimic our behaviours all too well. It shows bias, a lot like humans do. It also states with confidence facts that are either untrue or have limited supporting evidence, much like humans do. It is subject to exterior influence through its inputs and training data, again much like humans, and it creates “original” works based on the works of others, but without a clear ability to reference all that it has learned and based its outputs on, again exactly like we humans. This all represents a challenge, where I see people trying to hold AI solutions to a standard that we humans would find difficult or even impossible to achieve.

For now, I think we need to accept the vague definition of AI, which for me is a system that can complete tasks which would normally require some form of human intelligence, where inherently this system also tends to mimic some of the drawbacks of the human intelligence it seeks to copy. It's not perfect, but it will do for now.

References:

https://www.google.com/search?q=definition+intelligence

Artificial intelligence – Oxford Reference

Q&A – What Is Intelligence? (hopkinsmedicine.org)