A world of cameras

We now live in a world where, if there is a car accident or a fight or something similar, everyone reaches for their phone to film it. No one, or very few, rushes in to help and support; instead the majority whip out their mobile phones and video the event before publishing it online for the world to see, in the hope of going viral.

A positive spin

This can be helpful in getting news out quickly, and can be useful as evidence of what actually happened, hopefully removing subjective memories from the equation, although as I will mention later, things are not quite that simple. I remember watching a movie which centred upon the use of video footage, and a bloke with a handy-cam, to unpick the events leading to a terrorist attack. We now live in a world where pretty much everyone has a camera with them, in their mobile phone, and therefore the chances of doing something criminal and not being recorded are slim, albeit that has just led to a growth in face coverings and hoodies to obscure the identity of those seeking to do ill. But maybe common access to phone cameras might discourage some from committing crime, in which case that can be seen as another positive.

But privacy, I hear you say

What privacy do we have when we might get caught on the camera of someone we don’t know, and when they might then publish the footage online for all to see, all without either our knowledge or our permission? In a world of social media where we publish our own content this happens all the time, and we may find ourselves laughing at the person who falls over; but how do they feel, with their own mistakes captured for eternity online for the world to watch and laugh at? Also, what about videos where only an excerpt is shared online, such that the content shared does not convey the context of the event and is instead purposefully picked to suit a particular narrative?

At the edges

There is also the issue at the extreme edges of this balance, where individuals post their arguments with security staff or police online regarding their right to film in public, or in relation to their right to privacy and not being filmed when involved in a march or demonstration. To the person stating their right to film in public, I wonder what their aim is in filming where security or police feel the need to challenge them; and to someone stating their right to privacy, if they are not doing anything wrong and the footage is only for the purpose of policing and identifying those corrupting free speech, etc., again, what is their concern? Now I know, again, things are not that simple.

Balance and pragmatism

I often cite balance and will do so here: having mobile phones and the ease of filming and photographing events presents a benefit, but it also presents a risk. The technology is a tool, and some will seek to use it constructively whereas others will seek to use it for their own negative ends. I am not sure what the answer is to this, although my personal feeling is that we need to be a bit more pragmatic in terms of what is acceptable and unacceptable, and maybe rather than the law leading the way, it is our national culture which should lead the way in terms of what we consider acceptable and unacceptable.

I think the key issue is that video capture isn’t going away; in fact it is getting better, with higher resolution, and is also easier to edit with AI tools, so the challenges are only likely to grow. And the editing or creation of fake, or synthetic, imagery or footage is a clear and growing concern. It is for this reason that I think this is something we need to talk to students about as part of discussing digital citizenship. What do they think is acceptable or unacceptable, and why, and how do we build a world where the vast majority of us stay on the acceptable side of the fence?

An AI PC

I was recently provided with a nice new laptop to use, billed as an AI PC. The reason for the AI PC moniker is that its chipset includes the usual central processing unit (CPU) and graphics processing unit (GPU), but also a new neural processing unit (NPU). The NPU is basically designed to take on AI-based tasks, but what difference does this make to a conventional laptop?

Making a charge go further

The key difference is that the NPU is designed to take on common AI tasks, but to do so at lower power levels than the CPU or GPU, which would previously have needed to take this work on. The theory, therefore, is that in a laptop, where battery power is important, using the NPU can increase battery life, meaning the laptop will work for longer on a single charge. So if, for example, we are using Microsoft Teams and making calls where the background is being blurred or a fake background added, which is an AI task, a laptop with an NPU should be able to outlast a laptop without one before needing to be recharged. Looking more long term, I would hope this might also mean that overall battery lifespan, and therefore the lifespan of the device as a whole, is extended, which in schools is an important factor to consider. Now I note it’s a little early to tell whether this is actually what happens, and I doubt my time with the device will be definitive in this area, but I am looking forward to seeing if there are even signs that this might be the case.
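
For those curious how software actually ends up on the NPU, below is a minimal sketch of how an application might prefer an NPU where one exists, assuming ONNX Runtime; the model file name is made up for illustration, and the code simply falls back to the GPU or CPU where no NPU provider is available.

```python
# A minimal sketch: preferring an NPU for inference via ONNX Runtime
# execution providers. The model file is hypothetical; the provider names
# are those ONNX Runtime uses for Qualcomm NPUs and DirectML GPUs.
import onnxruntime as ort

MODEL_PATH = "background_blur.onnx"  # hypothetical segmentation model

# Providers in order of preference: NPU first, then GPU, then CPU.
preferred = ["QNNExecutionProvider", "DmlExecutionProvider", "CPUExecutionProvider"]
providers = [p for p in preferred if p in ort.get_available_providers()]

session = ort.InferenceSession(MODEL_PATH, providers=providers)
print("Running on:", session.get_providers()[0])
```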

And why would a single key matter?

Now the other key thing which struck me with the AI PC, and I note this will seem such a minor thing but is in effect quite notable, is the keyboard. The laptop I have been given comes with a Microsoft Copilot key. It might not seem that important, coming with a slightly different keyboard, but from my initial few hours playing with the new device it has turned out to be quite significant. Basically, the Copilot key allows me to fire up Copilot; I am using the free version in Edge rather than the paid-for version. I quickly found myself tapping the Copilot key and then speaking my prompts, whereas previously I always typed them. I also found myself using Copilot more frequently, as it was now simply the tap of a key away. Previously I kept ChatGPT and Gemini as default tabs which automatically open in my browser: I was aware that although I understand the power of AI and of Large Language Models (LLMs), I have also built up effective working practices and habits which don’t involve AI, so I was conscious that I needed to make the use of an LLM convenient and easy in order to build the habit of introducing LLMs into my normal workflows. By having the tabs open automatically, I made sure an LLM was never that far away. That said, the single key on the keyboard seemed to make a real difference to my tendency to use generative AI. It just seemed easier and more convenient, when the thought occurred that an LLM would be of use, to tap the key, with Copilot instantly fired up, ready for me to type or, better still and more conveniently, speak my prompt.

Conclusion

It’s rather early in my playing with this new AI PC, although I can see some potential related to battery life, even if I haven’t yet seen the evidence to back this up. However, the more minor change, of having a Copilot key, has already had an impact on my workflows.

Sometimes it is the little things that make all the difference, and in this case the little thing happens to be a single key, the Copilot key.

AI and Marking

Given the concerns in relation to teacher workload, and you just need to take a quick look at the teacher wellbeing index reports to see this, it is clear that we need to find solutions to the workload challenge. Artificial intelligence (AI) is one potential piece of this puzzle, although it is by no means a silver bullet. The issue I have come across on a number of occasions is concern regarding some of the challenges in relation to AI, such as inaccuracies. I avoid talking of hallucinations as it anthropomorphises AI; the reality is that a probabilistic algorithm simply outputted something which was wrong, so why can’t we just say AI gets it wrong occasionally? And we are right to have concerns about where an AI solution might provide inaccurate information, especially where it might relate to the marks given to student work or the feedback provided to parents in relation to a student’s progress. But maybe we need to stop for a moment, step back, and look at what we do currently. Are our current human-based approaches devoid of errors?

I did a quick look on Google Scholar and found a piece of AQA research from 2005 looking at marking reliability; the below is the first line of the conclusion section of the report:

“The literature reviewed has made clear the inherent unreliability associated with assessment in general, and associated with marking in particular”

We are not talking about AI-based marking here; we are talking about human marking of work. We are by no means the highly accurate marking and assessing machines we convince ourselves we are. And there are lots of other studies which point to how easily we might be influenced. I remember one study which focussed on decision-making by judges where, when the timing of different decisions was analysed, proximity to a courtroom lunch break had a statistical impact on judges’ decisions. Like marking, we would expect a judge’s decision to be independent of the time of the decision, and to be consistent; however, the evidence suggests this isn’t quite the case. Other studies have looked at how the sequence in which papers are marked can have an impact, such that the marking of a paper following a really good or poor paper is influenced by the paper which preceded it. Again this points to inconsistency in marking. There is also evidence that if the same paper is presented to the same marker on different occasions over a period of time, different marks result, where if we were so accurate in our marking, surely the marks for the same paper should be the same.
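
This inconsistency is also measurable. As a minimal sketch, assuming two markers have graded the same ten scripts, Cohen’s kappa gives a standard measure of how far their agreement exceeds what chance alone would produce; the grades below are invented purely for illustration.

```python
# A minimal sketch: measuring agreement between two markers on the same
# scripts using Cohen's kappa (1.0 = perfect agreement, 0 = chance level).
# The grades below are invented purely for illustration.
from sklearn.metrics import cohen_kappa_score

marker_a = ["A", "B", "B", "C", "A", "B", "C", "C", "B", "A"]
marker_b = ["A", "B", "C", "C", "B", "B", "C", "B", "B", "A"]

kappa = cohen_kappa_score(marker_a, marker_b)
print(f"Cohen's kappa: {kappa:.2f}")  # ~0.54 here: well short of perfect agreement
```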

It seems clear to me that we are not as accurate in our marking and assessment decisions as we possibly think we are. I suspect calling out AI’s inaccuracies is also easier than calling out our own human inaccuracy, as AI doesn’t argue back or try to justify its errors, whether justifying them to us or internally convincing itself that the errors are valid. And this is where a significant part of the challenge lies: we justify and convince ourselves of our accuracy and consistency, where any objective study would show we aren’t as good as we think we are. When presented with such quantifiable evidence, we then proceed to generate narratives and explanations to justify or explain away any errors or inconsistencies, so our overall perception of our own human ability to assess and mark student work is that we are very good and accurate at it. AI doesn’t engage in such self-delusion.

Conclusion

In seeking to address workload, and in considering the use of AI in this process, we need to be cautious of wanting to get things 100% right. Yes, that is the ideal, but our current process is far from 100% right, so surely we need only match our current accuracy levels, but with a reduced workload for teachers. Now it may be that the AQA research presents the answer, in that “a pragmatic and effective way of improving marking reliability might be to have each script marked by a human marker and by software”. Maybe rather than looking for AI to do the marking for us, it is about working with AI to do the marking, using it as an assistant but ensuring human insight and checking are part of the process.
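
As a rough sketch of what such a pairing might look like in practice, the snippet below double-marks every script and only escalates those where the human and AI marks diverge beyond a tolerance; ai_mark() is a hypothetical placeholder for whatever marking model a school might adopt, not a real product.

```python
# A minimal sketch of human-plus-AI double marking: every script gets a
# human mark and an AI mark, and only disagreements beyond a tolerance
# are escalated for a second human look.
TOLERANCE = 2  # maximum acceptable gap in marks before a re-check

def ai_mark(script_text: str) -> int:
    # Hypothetical placeholder: a real system would call a marking model.
    return min(10, len(script_text) // 100)

def scripts_needing_review(scripts):
    """scripts: list of (script_id, text, human_mark); returns ids to re-check."""
    return [sid for sid, text, human_mark in scripts
            if abs(ai_mark(text) - human_mark) > TOLERANCE]

batch = [("S1", "an essay..." * 50, 5), ("S2", "short answer", 9)]
print(scripts_needing_review(batch))  # ['S2']: human and AI marks diverge
```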

And I also note that the above applies not just to the marking of student work but also to the use of generative AI in the creation of parental reports, another area of significant workload for teachers. Here also, an approach of accepting the frailties of our current approach, then seeking to use AI to achieve at least the same level of consistency while reducing workload, seems appropriate.

Maybe we need to stop talking about Artificial Intelligence and talk more about using AI to create Intelligent Assistants (IA)?

References:

Meadows, M. and Billington, L. (2005) A Review of the Literature on Marking Reliability. National Assessment Agency / AQA.

Is using AI cheating?

Ever since ChatGPT burst onto the scene in November 2022, there have been various people in education citing concerns related to how LLMs such as ChatGPT, Gemini, Claude, etc. might be misused by students. But for AI to be misused, it must also be possible to use it well, yet I feel the sense is that students should simply be prevented from using it. And who decides what is an appropriate or inappropriate use? Those invested in change and evolution, who may understand AI, its benefits and risks, or those invested in retaining the status quo, with limited understanding or experience of using AI, let alone of using it in a classroom?

Concerns, concerns and more concerns

Concerns have been raised regarding student plagiarism and cheating, where students might use generative AI to complete assignments, tests, or essays, undermining the authenticity of their work and misrepresenting their “true” abilities. This in itself is interesting: can we actually ascertain a student’s “true” abilities? My spelling and grammar need work, but through spelling and grammar checkers my writing appears better than it is; given such checkers are so common, does this matter when writing an assignment, blog post or other piece of content? And does a piece of written coursework or an exam expose the “true” abilities of students, or is it simply a convenient proxy? Concerns have also been raised in relation to dependency and over-reliance on AI tools, which may hinder the development of critical thinking and problem-solving skills if students use them to bypass challenging tasks. But in a world of search engines and suggestion algorithms shaping our TV, shopping and music habits, is this dependency or simply convenience? Access disparities and digital divides have also been raised, given not all students have equal access to generative AI tools, leading to disparities in academic performance and opportunities. And I suspect this is the most troublesome of the concerns, where the argument regarding the issues some perceive with generative AI may simply fuel an increasing divide between those who can and do use generative AI and those who can’t or won’t.

Solution or not?

In relation to assessment, some have therefore suggested that the best solution is to bring back simple pen-and-paper assessment. I am not sure how this would work, as students could still use generative AI to create their coursework before simply copying it out by hand. It also feels a bit like “we’ve always done it this way”.

AI detection tools have been suggested; however, I simply don’t believe these are ever going to be reliable. The key aim of generative AI is to create content which looks like it was created by a human, so this will result in a race between the AI vendors and those creating the AI detectors, with only one likely to win that race. And it ain’t the vendors providing the AI detectors (or the schools spending money on said detectors). Oh, and let’s not forget the poor students who will be accused of cheating just because their writing style is highly typical and therefore falsely flagged by these so-called AI detectors.

But maybe we need to take a step back and ask ourselves: what is the purpose of education and of assessment?

What is education about and why assess?

If part of the purpose of education is to provide students with the knowledge, skills and experiences which will allow them to flourish and thrive in the world beyond compulsory education, then shouldn’t we be looking to provide them with knowledge, skills and experience in relation to using generative AI? I can only see the use of generative AI increasing across different job types and careers, just as I have seen my own use increase since November 2022. As such, to me it is clear that we should be engaging and working with students in relation to the proper and effective use of generative AI.

And what is the purpose of assessment? Is it to test memorisation? And if so, is this as important in a world of search engines and generative AI? Or, in the case of coursework, is it to test the student’s ability to apply knowledge or demonstrate skills? And if this is the case, shouldn’t students be encouraged to use the tools they have available to them, which surely needs to include generative AI? We now, for example, support the use of calculators in Maths exams, and we don’t ban the use of spelling and grammar checkers when creating coursework. And if a student with a learning difficulty uses technology to level the playing field by allowing them to type or dictate, why should it be different for a second-language speaker of English using AI translation tools, or indeed any student using generative AI to help them create better work, to get started, to refine or to seek feedback? Why would we want students to create lesser work than they are capable of, when using the tools now so widely available to them could allow them to achieve more? Should we not be empowering students to achieve their very best using the tools readily available to them?

Maybe we need to question our current model of assessment, namely tests and coursework, accepting that in a world of generative AI these are no longer suitable or appropriate. Focussing on assessing the outcomes, the product such as coursework, is no longer workable, as students will all be able to create similar output using generative AI tools, so instead I would suggest we need to look towards exploring and assessing the processes students undertake.

I also note lots of discussion on teachers using generative AI to help with workload challenges, using it to create lesson plans and lesson materials, to help with marking, etc. How is this OK for a teacher, but for a student to use the same tools, in largely the same way, it isn’t acceptable?

Time for change, finally?

This does feel like a time when education, and in particular assessment, needs to change significantly. Generative AI is here to stay, so how can education, how can we, make the most of it, preparing our students and providing them with the skills and experiences needed to thrive and flourish?

Reflecting on 2023/24

Another academic year draws to a close, and so I thought I would share some initial reflections:

Artificial Intelligence (AI)

AI continued to be a big topic of discussion throughout 2023/24 and saw me speaking to school leaders, teachers and also school support staff about its potential as well as its risks and challenges. I think, like a lot of technology, AI has those who are heavily engaged with and supportive of its use, then a larger body who are unsure or reluctant, followed by those who are against its use. As such, at the moment the impact of AI, when viewed generally, may appear less than its potential, or less than what those positive about AI are proclaiming. For me it is about getting more staff experimenting and finding out how AI can benefit them in schools, often in little and simple ways rather than the flashy examples we often see. Personally, I am slowly introducing greater use of AI into my various workflows, and in doing so seeing benefits in time saved but also, possibly more importantly, in the quality of outputs. By using generative AI to assist me, AI and I (??) are coming up with more ideas, using a wider vocabulary range, creating better graphics and reducing errors, among other things. Two heads, even if one is a headless AI, are better than one!

Digital Divides

I have already mentioned AI, and generative AI represents yet another aspect of the issue of digital divides. Technology, and generative AI in particular, has such potential to enable greater creativity, address imbalances such as those related to SEND or to language proficiency, support collaboration and communication, and much more. But you need to have access to the technology, the software, the hardware, the internet bandwidth and more, with this access often the product of a digital strategy or plan, and the relevant budget and finance. You also need access to support and help, and to a culture which embraces the potential of technology and generative AI, at school but also at home and in your local community, among friends and colleagues. The number of possible divides between those that have and those that have not is only increasing, and the magnitude of each divide is only widening with each passing day, as those that have experiment, adapt and innovate, while those that have not are held back, continuing to work in long-established ways as the world around them changes.

Digital Futures Group (DFG), collaboration and sharing

Generative AI advancement is just one indicator of the increasing pace of technological change, with a resulting impact on society more broadly. But how can we keep up with these dizzying advancements and changes? Can one person keep up with all the apps, the tools, the different approaches? For me the key is in approaching the problem collectively and collaboratively rather than individually. For example, this year has seen the creation of, and my involvement in, the Digital Futures Group, which is all about sharing and networking. I am so blessed to be part of a group of professionals who operate across different contexts, in different roles and with different skills and experience, across the UK, and I am better for my involvement. I have also had the pleasure of being involved with the Association of Network Managers in Education (ANME) and the Independent Schools Council (ISC) Digital Advisory Group. Each of these organisations has allowed me to network with some amazing people, gaining from their experiences, their values and views, their knowledge and maybe even their humanity; and in a world of increasing use of AI, maybe our humanity, and sharing our humanity, is all the more important.

Wellbeing

I think another reflection on the academic year relates to wellbeing, and it may be that this jumps to mind due to a recent presentation at the ANME South West event in relation to wellbeing and IT support staff. This academic year has been a very difficult one for me personally, with a major personal upheaval hitting me in the first term, something I am not sure I have fully recovered from as we complete the final term. Actually, thinking about it, I don’t think it is about recovery but more about adapting to the changed circumstances I now find myself in. This has forced me to stop and reflect, and in doing so identify a lot of personal practices and habits I have developed which have led to an unbalanced life. It took a significant life event to make me stop and reflect, such that I am now trying to rebalance and establish new habits. This has also got me thinking about the “be more efficient” narrative and what it means to be a good employee. I get the concept of being more efficient and therefore doing things quicker or using less energy, etc., but if all this leads to is being asked to do yet more things, surely this isn’t a sustainable model. Laura Knight talked about resilience and how this isn’t something we should aim for: it’s OK to be resilient to short-term issues, but having to constantly exist in a state of resilience isn’t really living, and in all honesty isn’t something we can do for any significant period of time. For me, “efficiency” may suffer similar problems. And if being efficient is driven by an organisational need for efficiency, does this mean that to be a good employee I need to be efficient and get more done than others? If so, does it not drive unsustainable hours, stress and workload issues? So maybe schools and other organisations need to consider what it is to be a good employee, with leaders modelling this and with the expectations clearly espoused. Maybe we also need to stop and identify what really matters, rather than constantly adding more tasks, more requirements and more considerations to our everyday roles.

Conclusion

This has been a year of difficulties for me, but also of a greater sense of community and collaboration. AI, digital divides, networking and wellbeing are definitely the four themes which currently stick out for me from what has been a busy academic year, but then again, when are academic years in schools or colleges not busy? I am hoping that 2024/25 will be another positive year, and soon enough it will be upon us. I am going to post in the coming weeks a month-by-month review of some of my highlights of the year, including some photos, but for now let me just wish everyone a good holiday period, acknowledging that my team and I, plus many others, particularly IT teams, will actually be working much of the holiday period on IT upgrades and many other things ahead of the new academic year. All the best to all.

AI and collaborative planning

The other day I was lucky enough to share my thoughts with a number of schools in relation to the use of generative artificial intelligence within teaching and learning. My session included a brief introduction to Artificial Intelligence (AI) and generative AI, before talking about the benefits, risks and challenges. I also talked about a few generative AI tools.

In delivering the session I spotted that some of the later sessions on the event agenda included time for colleagues from across different schools to get together in their subject areas to work collaboratively, to share ideas, resources, etc. And it was at this point that I saw a clear potential use for generative AI.

We are in that final part of the academic year where many teachers are already considering the 2024/25 academic year and putting their planning into place. It has always been beneficial to do this collaboratively with other teachers, either in your school, across schools in a Multi-Academy Trust or school group, or even beyond. The bigger and more diverse the group of teachers sharing, the better. It all fits with my favourite quote, that “the smartest person in the room, is the room”. So, the more people we are working with collaboratively and sharing ideas with, the smarter we are collectively. But why does it need to be just about people?

At the event I spoke at, I suggested that staff might want to invite ChatGPT or Gemini into their collaborative planning sessions, getting some input and ideas from generative AI, or using generative AI to help further develop ideas they themselves have come up with. A generative AI tool would bring additional ideas from the very broad training data it has ingested, and therefore may propose things a group of teachers might not consider. OK, so it isn’t another person in the room, but it is another intelligence, or an intelligent assistant to the people in the room. Maybe the intelligent assistant, such as ChatGPT, makes each individual “smarter”, and then the collaboration of the room again makes people collectively smarter still.
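
For anyone wanting to make this a routine part of a planning meeting rather than an ad hoc chat, below is a minimal sketch of pulling ideas in programmatically, assuming the OpenAI Python client and an API key; the subject and year group in the prompt are invented for illustration, and any chat tool would do equally well.

```python
# A minimal sketch: inviting an LLM into a collaborative planning session.
# Assumes the OpenAI Python client and an OPENAI_API_KEY environment
# variable; the subject and year group are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "You are joining a departmental planning meeting. "
    "Suggest five lesson ideas for Year 9 Geography on rivers and flooding, "
    "each with a starter activity and one common misconception to address."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```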

Collaborative planning among teachers is now a largely established habit in schools, so maybe we can augment the practice by getting into the habit of bringing generative AI into these meetings, these discussions and these collaborative events. If a room of people is smarter than the individuals, then a room of people supported by intelligent assistants, by generative AI tools, is clearly smarter still.

So, if you are doing any collaborative planning in the weeks ahead, or at the start of the new academic year, are you going to involve Generative AI tools in the process?

AI Virtual Friends

Ever since ChatGPT made an appearance, various companies and individuals have sought to use generative AI to create the next big app, the next TikTok or Angry Birds Space, or other viral app. One area I looked into was the growth in virtual friend apps. Initially, my thinking was that virtual friend apps might be a useful tool, particularly for shy students or those who need some guidance or support in relation to social interaction; a virtual friend might provide this. As I looked into the various apps available, however, I quickly became a little worried about some of the apps now available, including to children. My concerns seem to align with a news story I read attributing a rise in school sexism to phones, plus I have my own concerns in relation to the various algorithms built into social media platforms and how they seek to keep us glued to those platforms by fielding the content they think we want to see. And remember, a platform works out what we want to see from past history, where sexism and other bias was rife, and based on what people access; as the saying goes, “sex sells”.

Below I will just share some of my initial thoughts. I have included some screenshots which some may consider distasteful; however, I include them to demonstrate some of the issues with the various apps I stumbled upon.

Gender Bias

One thing that was particularly obvious was the gender bias in the branding and advertising of the apps. From what I saw, there was a bias towards apps designed to appeal to males using female imagery, although this may be due to apps identifying my gender from tracking info. Additionally, the imagery used, of both males and females, was very stereotypical from an appearance and race point of view. The images were far from representative of the population, pointing towards an unrealistic body image which could have a potentially significant impact on impressionable children. They also tended towards imagery which appeared to show individuals in their late teens and early twenties, which in turn might encourage children to experiment with these apps, even though they may not be the target audience, or at least the apps may not admit this is their targeted audience.

Encouraging potentially unacceptable behaviours

Another concern I had was with apps suggesting that the AI virtual friend would pander to the user’s every desire and whim. From the point of view of young children seeking to explore boundaries, the provision of an AI friend that might encourage or support this beyond the point of acceptable reason, based on their age, is a concern. The world will always come with rules and boundaries, and it is important that students are aware of this, so virtual friends that model the breaking, or non-existence, of boundaries could encourage risky behaviours in the real world.

NSFW

The term NSFW, or Not Safe For Work, appeared within the adverts for a few of the virtual friend apps, and with one app there was even mention of “Barely Legal”. Where these apps have few if any safeguards in relation to use by children, this is of significant concern. It is also notable that the NSFW label isn’t meant as a warning but as an enticement, and therefore might encourage young children to try these apps, where the content or even behaviour of the AI chatbot is not age-appropriate.

Blurring the boundaries between AI and reality

One of my concerns in looking at the virtual friend apps was the potential for children to become confused, and for the boundary between the fake virtual friend and real friends to blur, such that a child may act inappropriately in the real world based on activities which a virtual friend was willing to accept in the virtual world. It was therefore a little worrying to find one app actually using this in its advertising, questioning “what is real anyway?”

Conclusion

Now I think it is very important to note here that I suspect there are some possible positives which could result from virtual friends, such as solutions which direct individuals to support services or provide support and advice themselves, or apps that provide friendship or moral support, within reason. That said, there are equally apps which are clearly focused on baser human tendencies and which seek to make money by playing to them. These apps, although not necessarily aimed at children, could easily fall into the hands of children, with a resultant potential for harm. I didn’t see any evidence of age verification in the apps I looked at.

Now, I didn’t spend significant time playing with any of the apps; rather, I just looked to see what apps were available and how they were advertised via social media, so I cannot say much about their potential to hold the attention of users, including children. Based on my brief investigation the discussion was rather bland, but then a lot of the text messages I routinely share might be considered bland, and it may be that this changes as you provide more data and interact over a longer period, plus as the AI models themselves continue to develop and improve. I also note that there were so many of these apps, often apps with different names but from the same provider, and generally these apps were not the product of the big tech companies. I would suspect some of them may even be the product of cyber criminals seeking to harvest data.

If I was to propose my main concern, it is the two-way nature of the communication. Up until this point, inappropriate content, such as porn, has very much been a one-way communication: the consumption of content. With AI there is now the potential for this to become two-way, making it more akin to the normal interactions we have in our day-to-day lives. For children, I suspect this could make these apps all the more dangerous, as it may shape their norms and views as to what is acceptable.

AI and general knowledge

I was recently musing on the benefits of general knowledge. A recent conference I attended included Prof Miles Berry, who talked about generative AI as being very well-read. I had previously seen a figure of around 2,000 to 2,500 years quoted as the time it would take a human to read all of the content included in the training data provided to GPT-3.5, which in my view makes it very well-read indeed. So I got to wondering if it is this broad base of knowledge which makes generative AI, or at least large language models, so potentially useful for us.
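
As a rough sanity check on that figure, here is the arithmetic under some ballpark assumptions; the actual GPT-3.5 training corpus size is not public, so the inputs are illustrative only.

```python
# A rough sanity check on the "thousands of years of reading" figure.
# All inputs are ballpark assumptions: the real GPT-3.5 corpus size is
# not public, so treat this as illustrative arithmetic only.
tokens = 400e9            # assumed training corpus size, in tokens
words_per_token = 0.75    # rough English average
words_per_minute = 250    # typical adult reading speed

words = tokens * words_per_token                      # 3.0e11 words
years_nonstop = words / words_per_minute / 60 / 24 / 365
print(f"~{years_nonstop:,.0f} years reading 24/7")    # ~2,283 years with these inputs
```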

A doctor and AI

Consider, for instance, a medical practitioner. While their expertise lies in diagnosing and treating illnesses, plus their bedside manner and ability to interact with patients and other medical practitioners, their effectiveness as a healthcare professional hinges on a robust understanding of anatomy, physiology, pharmacology, and medical ethics, domains that draw upon general knowledge. Similarly, an engineer relies on principles of mathematics, physics, and materials science to design innovative solutions to complex problems. As professionals, we are required to study and learn from this broad body of knowledge through degree programmes and other qualification or certification requirements. But we are inherently human, which means that just because we have learned something at some point, and successfully navigated a qualification or certification route, doesn’t mean we will remember or be able to access this information at the point of need. If a medical practitioner therefore uses AI to assist them initially, they will be drawing on a bigger knowledge base than a human is capable of consuming, plus a knowledge base that doesn’t forget or fail to recall content once learned. The medical practitioner will still apply their experience and knowledge to the resultant output, bringing their human touch to help address the challenges of generative AI (bias, hallucinations, etc.); however, the use of generative AI to assist would likely make diagnosis quicker and possibly more accurate.

My changing workflow

The above seems to align with my views in relation to workflows I have recently changed to include generative AI. Previously I might have known what I wanted to write and therefore got straight to writing rather than seeking to use generative AI. Now I realise that, although I know my planned outcome, something generative AI cannot truly know no matter how much I adjust and finesse my prompts, generative AI brings to the table an amount and breadth of reading I will never be able to achieve. As such, asking generative AI is a great place to start. It will give you an answer to your prompt, but will draw upon a far bigger reservoir of knowledge than you can. You can then refine your prompt based on what you want to achieve, before doing the final edits. It is this early use of generative AI which I think holds the main potential for us all. If we use generative AI early in our workflows we get to our endpoint quicker, plus it opens us up to thoughts and ideas we might never have considered, due to generative AI’s broader general knowledge. I still put my own personal stamp on the content which is produced, making it hopefully unique to my personal style and personality, but AI provides me with assistance.

Challenges and Considerations

Despite its tremendous potential, the integration of generative AI into everyday life and specialised domains poses several challenges and considerations. Chief among these are concerns regarding the reliability and accuracy of AI-generated content, as well as issues related to bias, ethics, and privacy. I do, however, note that reliability, bias, ethics and privacy are not purely AI problems; they are human and societal issues, so if a human retains responsibility for checking and final decision-making, then the issue continues to be a human rather than an AI one.

Conclusion

Generative AI stands as a transformative force in harnessing and disseminating general knowledge, empowering individuals with instant access to information, facilitating learning and comprehension, and augmenting domain-specific expertise. It provides a vast repository of knowledge acquired from its training data, which can be used to assist humans and augment their efforts. I note this piece itself was created with the help of generative AI, and some of the text and ideas contained herein are ones I may not have arrived at myself, plus I doubt I would have completed this post quite so quickly. So, if AI is providing a huge knowledge base and assisting us in getting to our endpoint more quickly, plus opening up alternative lines of thinking, isn’t this a good thing?

For education, though, I suspect the big challenge will be in terms of how much of the resultant work is the student’s and how much is the generative AI platform’s. I wonder, though, if the requirement is to produce a given piece of work, does this matter? And if AI helps us get there quicker, do we simply need to expect more and better in a world of generative AI?

I suspect another challenge, which may be for a future post, is the fact that generative AI is a statistical inference model and doesn’t “know” anything, so is it as well-read as I have made out? Can you be well-read without understanding? But what does it mean to “know” or “understand” something, and could it be that our knowledge is just statistical inference based on experience? I think, on that rather deep question, I will leave this post here for now.

ISC Digital Conference 2024

I was once again privileged to speak at the ISC Digital Conference the other week, this time as the vice chair of the ISC Digital Advisory Group as opposed to a member. It was, as it was last year, a very useful and interesting conference, combined with an iconic location in Bletchley Park. I scribbled many notes from the various sessions and therefore wanted to distil those into a few key thoughts below.

Prof Miles Berry was his usual barrel of energy in his presentation, putting forward lots of interesting points for consideration. Following on from the Oxford Academies Business Managers Group (OABMG) conference I attended the other week, Miles was certainly brave in his presentation, opting to do a live demonstration to illustrate the potential power of generative AI in helping with the challenges related to teacher workload. I have attended so many conferences which discuss AI, but it was so nice to actually see it in practice as Miles took a topic from the audience and worked through the creation of content for students, resources, lesson plans, etc., all in the space of minutes, while also highlighting that a teacher at their best could likely do better, but certainly not quicker. This clearly highlights the efficiency and workload benefits of generative AI, but also the importance of seeing generative AI as an assistant to be paired with our own human strengths.

Neelam Parmar then presented on developing an AI curriculum, and there was one question which stuck very much with me: what is AI? Why this stuck with me is both the inconsistency in the use of the term and related terms (machine learning, deep learning, etc.) and the broader question it hints at: what is intelligence? Can we accurately and consistently define what we mean by intelligence? And if we cannot, can we truly be confident in creating an intelligence, an artificially created intelligence or AI? It’s a bit deep, but maybe this is a question we need to consider, as it also hints towards considering the differences between human and artificial intelligence, and therefore the benefits and drawbacks of each. I do often wonder how different an AI is to human intelligence, in terms of how the human brain really works in processing the huge amount of “data” in the experiences and information we consume throughout our lives. Is the key difference between us and AI that of emotion, and the physical nature of our intelligence in relation to our physical existence?

The AQA presentation was next up in terms of ideas which stuck with me, helping me feel a bit more positive about where we are in terms of exam board engagement with the use of AI in assessment and in schools. I will admit to being disappointed that the Polish and Italian trial has been pushed back further, to 2027, which I think is too far away; however, I get that it takes everyone to be on board to move this forward, so there are hoops exam boards must go through. That said, there were definitely positive noises in relation to analytical data on outcomes, with school data being pulled in and resulting info pushed back. This goes towards reducing the administrative burden, but also towards more effective use of the vast amounts of data schools gather. It was also good to hear of AQA seeking to share a diagnostic tool for Maths; tools like this might just help us find the best way forward in relation to adaptive, diagnostic and even summative testing.

I once again enjoyed hearing Tom Dore talk about esports and the potential benefits for schools adopting it. It aligned so well with the earlier presentation which highlighted some of the softer skills the World Economic Forum has identified as important for the future. Esports is so much more than simply gaming; it involves communication, leadership, resilience, problem-solving and so much more, plus it often engages students who may otherwise be less engaged. It was also good to hear about Amy-Louise Cartwright’s approach in her school and how they, albeit in the early stages of development, have already made progress and have plans for the future. I loved the esports suite they have created; although we have been involved in esports here for a while, we have been using our normal IT labs, albeit with upgraded PCs capable of supporting the relevant esports games.

Conclusion

The ISC Digital Conference, like so many other conferences, is about getting schools and school staff together and sharing. This year’s conference did exactly that, and let me get my piece in as well, which was nice. It was also nice to be at Bletchley Park and its wonderful auditorium. Now, I will note my train ride to and from the venue was far from straightforward, but the trek was worth it, and I look forward to seeing where we stand in a year’s time, at the 2025 conference. Will we have progressed significantly, be asking the same questions, or will the challenges have changed or even been addressed? Only time will tell.

Is Gen AI Dangerous?

I recently saw a webinar being advertised with “Is GenAI dangerous” as the title. An attention-grabbing headline; however, I don’t think the question is particularly fair. Is a hammer dangerous? In the hands of a criminal, I would say it is; in the hands of an amateur DIYer it might also be dangerous, both to the person wielding it and to others, through the things the amateur might build or install. Are humans dangerous? Is air dangerous? With questions quite so broad, the answer will almost always be “yes”, qualified with “in certain circumstances or in the hands of certain people”. This got me wondering about the dangers of generative AI and some hopefully better questions we might ask in relation to generative AI use in schools.

Bias

The danger of bias in generative AI solutions is clearly documented, and I have evidenced it myself in simple demonstrations; however, we have also more recently seen the challenges where companies seek to manage bias and this results in equally unwanted outputs. Maybe we need to accept bias in AI in much the same way that we accept some level of unconscious bias in human beings. If this is the case, then I think the questions we need to ask ourselves are:

  1. How do we build awareness of bias both in AI and in human decision-making and creation?
  2. How do we seek to address bias? In generative AI solutions, I think the key here is simply prompt engineering: avoiding broad or vague prompts in favour of more specific and detailed prompts, as sketched below.
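
As a small illustration of that second point, compare a vague prompt with a more specific one; the wording is invented for illustration, but the specific version states the audience, breadth and representation wanted, rather than leaving the model’s defaults to fill the gaps.

```python
# A small illustration of vague versus specific prompting. The wording is
# invented for illustration; the specific prompt states audience, breadth
# and representation explicitly rather than relying on model defaults.
vague_prompt = "Write about famous scientists."

specific_prompt = (
    "Write 200 words for Year 8 students introducing three scientists, "
    "drawn from different genders, eras and parts of the world, and "
    "briefly note one limitation or controversy for each."
)
```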

Inaccuracy

I don’t like the term “hallucinations”, which is the term commonly used when AI solutions return incorrect information, preferring to call it an error or inaccuracy. And we know that humans are prone to mistakes, so this is yet another similarity between humans and AI solutions. Again, if we accept that there will also be some errors in AI-based outputs, we find ourselves asking what I feel are better questions, such as:

  1. How do we build awareness of possible errors in AI-generated content?
  2. How do we build the necessary critical thinking and problem-solving skills to ensure students and teachers can question and check content being provided by AI solutions?

Plagiarism

The issue of students using AI-generated content and submitting it as their own is often discussed in education circles; however, I note there are lots of benefits in students using AI solutions, particularly students who experience language or learning barriers. I also note a recent survey which suggested lots of students are using generative AI solutions anyway, independent of anything their school may or may not have said. So again, if we accept that some use of AI will occur, and that for some this might represent dishonest practice but for many it will be using AI to level the playing field, what questions could we ask:

  1. How do we build awareness in students and staff as to what is acceptable and what is not acceptable in using AI solutions?
  2. How do we explore or record how students have used AI in their work so we can assess their approach to problems and their thinking processes?

Over-reliance

There is also the concern that, due to the existence of generative AI solutions, we may start to use them too frequently and become over-reliant on them, weakening our ability to create or do tasks without the aid of generative AI. For me, this is like the old calculator argument, in that we need to be able to do basic maths even though calculators are available everywhere. I can see the need for some basic fundamental learning, but with generative AI being so widely available, shouldn’t we seek to maximise the benefits it provides? So again, what are the questions we may need to ask:

  1. How do we build awareness of the risk of over-reliance?
  2. How do we ensure we maximise the benefit of AI solutions while retaining the benefits of our own human thinking, human emotion, etc?   It’s about seeking to find a balance.

Conclusion

In considering better questions to ask, I think the first question is always one about building awareness, so maybe the “Is GenAI dangerous” webinar may be useful if it seeks to build relevant awareness of the risks. We can’t spot a problem if we are not aware of the potential for such a problem to exist. The challenge, though, is in the questions we ask post-awareness, the questions which try to drive us forward: how we might deal with bias where we identify it, how we might ensure people are critical and questioning such that they spot errors, how we evidence student thinking and processes in using AI, and how we maximise both human and AI benefits.

In considering generative AI, I think there is some irony here, in that my view is that we need to ask better questions than “Is GenAI dangerous”. In seeking to use generative AI and to realise its potential in schools and colleges, prompt engineering, which is basically asking the right questions, is key; so maybe in seeking to assess the benefits and risks of GenAI, we need to start by asking better questions.