
I was recently musing on the benefits of general knowledge. At a recent conference I attended, Prof Miles Berry talked about generative AI as being very well-read. I had previously seen a figure of around 2,000 to 2,500 years quoted as the time it would take a human to read all of the content included in the training data for GPT-3.5, which in my view makes it very well read indeed. So, I got to wondering whether it is this broad base of knowledge which makes generative AI, or at least large language models, so potentially useful for us.
A doctor and AI
Consider, for instance, a medical practitioner. While their expertise lies in diagnosing and treating illnesses, together with their bedside manner and ability to interact with patients and fellow practitioners, their effectiveness as a healthcare professional hinges on a robust understanding of anatomy, physiology, pharmacology, and medical ethics, all domains that draw upon general knowledge. Similarly, an engineer relies on principles of mathematics, physics, and materials science to design innovative solutions to complex problems. As professionals, we are required to study and learn from this broad body of knowledge through degree programmes and other qualification or certification requirements. But we are inherently human, which means that just because we learned something at some point, and successfully navigated a qualification or certification route, doesn't mean we will remember or be able to access that information at the point of need. If the medical practitioner uses AI to assist them initially, they will be drawing on a bigger knowledge base than any human is capable of consuming, and one that doesn't forget or fail to recall content once learned. The practitioner will still apply their experience and knowledge to the resulting output, bringing the human touch needed to address the challenges of generative AI (bias, hallucinations, etc.); however, using generative AI to assist would likely make diagnosis quicker and possibly more accurate.
My changing workflow
The above seems to align with my views on the workflows I have recently changed to include generative AI. Previously I might have known what I wanted to write and simply got on with writing it, rather than turning to generative AI. Now I realise that, although I know my planned outcome, something generative AI cannot truly know no matter how much I adjust and finesse my prompts, generative AI brings to the table a breadth of reading I will never be able to achieve. As such, asking generative AI is a great place to start. It will give you an answer to your prompt while drawing upon a far bigger reservoir of knowledge than you can. You can then refine your prompt based on what you want to achieve, before making the final edits. It is this early use of generative AI which I think holds the main potential for us all. If we use generative AI early in our workflows we get to our endpoint quicker, and it also opens us up to thoughts and ideas we might never have considered, thanks to generative AI's broader general knowledge. I still put my own personal stamp on the content which is produced, making it hopefully unique to my personal style and personality, but AI provides me with assistance.
Challenges and Considerations
Despite its tremendous potential, the integration of generative AI into everyday life and specialised domains poses several challenges and considerations. Chief among these are concerns regarding the reliability and accuracy of AI-generated content, as well as issues of bias, ethics, and privacy. I note, however, that reliability, bias, ethics and privacy are not purely AI problems; they are human and societal issues, so if a human retains responsibility for checking and final decision-making, then the issue remains a human one rather than an AI one.
Conclusion
Generative AI stands as a transformative force in harnessing and disseminating general knowledge, empowering individuals with instant access to information, facilitating learning and comprehension, and augmenting domain-specific expertise. It provides a vast repository of knowledge acquired from its training data, which can be used to assist humans and augment their efforts. I note this piece itself was written with the help of generative AI, and some of the text and ideas contained herein are ones I may not have arrived at myself; I also doubt I would have completed this post quite so quickly. So, if AI provides a huge knowledge base, helps us get to our endpoint more quickly, and opens up alternative lines of thinking, isn't this a good thing?
For education, though, I suspect the big challenge will be working out how much of the resultant work is the student's and how much is the generative AI platform's. I wonder, though: if the requirement is to produce a given piece of work, does this matter, and if AI helps us get there quicker, do we simply need to expect more and better in a world of generative AI?
I suspect another challenge, which may be for a future post, is the fact that generative AI is a statistical inference model and doesn't "know" anything, so is it as well read as I have made out? Can you be well read without understanding? But what does it mean to "know" or "understand" something, and could it be that our own knowledge is just statistical inference based on experience? I think, on that rather deep question, I will leave this post here for now.