AI and AI and AI

Whether AI is a danger to education is a question I have recently explored, hopefully presenting a balanced viewpoint. The question has an issue, however, in that it asks about AI as if AI were a single, simple tool such as a hammer or a screwdriver. The term AI covers a broad range of solutions, and as soon as you look at that breadth the question becomes difficult to answer and in need of more explanation and context. In effect, the question is akin to asking whether vehicles are bad for the environment without defining vehicles: is a bicycle, for example, bad for the environment?

[Narrow] AI

Although some may associate recent discussions of AI with ChatGPT and Bard, AI solutions have been around for a while, with most of us using some of them regularly. As I write this, my word processor highlights spelling and grammar errors and suggests corrections. The other day, when using Amazon, I browsed through the list of “recommended for you” items which the platform had identified for me based on my browsing and previous purchases. This morning I used Google to search for some content and Google Maps to estimate the likely travel time for an event I am attending in the week ahead. Also, when I sat down at my computer this morning, I used biometrics to sign in and used functionality in MS Teams to blur my background during a couple of calls. These are all examples of AI. Are we worried about these uses? No, not really, as we have been using them for a while now and they are part of normal life. I do, however, note that, as with most things, there are some risks and drawbacks, but I will leave that for a possible future post.

The examples I give above are all very narrow-focus AI solutions. The AI has been designed for a very specific purpose within a very narrow domain, such as correcting spelling and grammar, estimating probable travel time, or identifying the subject of a Teams call and then blurring everything which isn't the subject. The benefits are therefore narrow to the specific purpose of the AI, as are the drawbacks and risks. But it is still AI.

[Generative] AI

Large language model development equally isn't new. We might consider the ELIZA chatbot as the earliest example, dating back to 1966, or if not, IBM's Watson dating to 2011. Either way, large language models have been around in one form or another for some time; however, ChatGPT, in my view, was a major step forward both in its capabilities and in being freely available for use. The key difference between narrow AI and generative AI is that generative AI can be used for more general purposes. You could use ChatGPT to produce a summary of a piece of text, to translate a piece of text, to create some webpage HTML, to generate a marketing campaign and many other purposes across different domains, with the only common factor being that it produces text output from text-based prompts. DALL-E and Midjourney do the same, taking text prompts but producing images, with similar solutions available for audio, video, programming code and much more.
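To make this prompt-in, text-out pattern concrete, the following is a minimal sketch in Python of asking a generative AI service to summarise some text. It assumes the openai client library and an API key are available; the model name and prompt are purely illustrative rather than a recommendation of any particular tool.

    from openai import OpenAI  # assumes the official openai Python package is installed

    client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

    # A single text prompt goes in...
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name; any chat-capable model would do
        messages=[{"role": "user", "content": "Summarise this text in one sentence: ..."}],
    )

    # ...and generated text comes back out.
    print(response.choices[0].message.content)

Whatever the provider, the shape is the same: a text prompt goes in and generated content comes back.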

Generative AI as it exists now, however, doesn't understand the outputs it produces. It doesn't understand the context of what it produces and, when it doesn't know the answer, may simply make something up or present incorrect information. It has its drawbacks and it is still relatively narrow, limited to taking text-based prompts and responding based on the data it has been trained on. It may be considered more “intelligent” than the narrow-focus AI solutions mentioned above, but it is well short of human-level intelligence, although it will outperform humans in some areas. It is more akin to dog-like intelligence in its limited ability to perform simple repeated actions on request: taking a prompt, wading through the materials it has been trained on, and providing an output, be this text, an image, a video, code, etc.

A [General] I

So far, we have looked at AI as it exists now, in narrow-focus AI and generative AI; however, in the future we will likely have AI solutions which are closer to human intelligence and can be used more generally across domains and purposes. This conjures up images of Commander Data from Star Trek, R2-D2 from Star Wars, HAL from 2001 and the Terminator. In each case the AI is portrayed as able to “think” to some extent, making its own decisions and controlling its own actions. The imagery alone highlights the perceived challenges in relation to Artificial General Intelligence (AGI) and the tendency to view it as either good or potentially evil. How far into the future we will need to look for AGI is unclear, with some thinking the accelerating pace of AI development means it is sooner than we would like, while others believe it is further off. My sense is that AGI is still some time away: we don't truly understand how our own human intelligence works, and therefore, if we assume AI solutions are largely modelled on us as humans, it is unlikely we can create an intelligence to match our own general intelligence. Others posit that as we create more complex AI solutions, these solutions will help in improving AI, which would then allow it to surpass human capabilities and even create super-intelligent AI solutions. Cue the Terminator and Skynet. Again, I suspect that when we get to the generation of AGI, things will not be as simple as they seem, with not all AGIs being equal. I suspect the “general” may see some AGIs designed to operate generally within a given domain, such as health and medicine AGIs, or education AGIs, etc.

Conclusion

Artificial Intelligence covers a wide range of solutions, with narrow AI, generative AI and AGI being only three broad categories among others. It is therefore difficult to discuss AI in its totality, and certainly not with much certainty. Maybe we need to be a little more careful in our discussions to define the types of AI we are referring to, and this goes for my own writing as well, where I have equally been discussing AI in its most general form.

Despite this, my viewpoint remains the same: AI solutions are here to stay and, as discussed earlier, have actually been around for quite a while. We need to accept this and seek to make the best of the situation, considering carefully how and when to use AI, including generative AI, as well as considering the risks and drawbacks. As for AGI, and the eventual takeover of the world by our AI overlords, I suspect human intelligence will doom the world before this happens. I also suspect AI development for the foreseeable future will see AI solutions continue to be narrower and short of the near-human intelligence of AGI. As such, we definitely need to consider the implications, risks and dangers of using such AI solutions, but we also need to consider the positive potential.

Author: Gary Henderson

Gary Henderson is currently the Director of IT in an independent school in the UK. Prior to this he worked as the Head of Learning Technologies, working with public and private schools across the Middle East. This included leading the planning and development of IT within a number of new schools opening in the UAE. As a trained teacher with over 15 years working in education, his experience includes UK state secondary schools, further education and higher education, as well as experience of various international schools teaching various curricula. This has led him to present at a number of educational conferences in the UK and Middle East.
