Esports event, Salford

I recently had the pleasure of presenting on esports at The Lowry Academy, alongside Kalam Neale from the British Esports Federation. I have long believed in the potential of esports as a positive vehicle for student engagement, but also for developing many of the soft skills that matter in life beyond school, including leadership, resilience and teamwork, to name but a few. It was therefore great to be able to share my experiences, and even better to hear what the staff and students at The Lowry Academy, alongside three other United Learning Salford schools, are doing in relation to esports.

In this post I would like to share some of my thoughts and five pieces of advice in relation to esports, based on my experiences at Millfield and as shared at the event.

It is not all neon lights

When you think of esports, and when you look at professional events, it is all neon lights, high-powered PCs and expensive gaming keyboards, mice and headsets. From the point of view of schools this is difficult to square, especially where funding is limited. Creating such environments may have its advantages, but it isn't a requirement. When we launched esports at Millfield we had a couple of IT labs which needed to be updated, and we were moving to standard desktops rather than the overpriced all-in-ones we had previously. We knew the labs needed to be appropriate for Computing teaching, and we didn't want to distinguish these rooms from our other IT labs which weren't up for replacement. So, in preparing to deliver some esports provision, we simply increased the spec of the PCs in terms of graphics card, processor and memory, but kept the same PC chassis we normally used. We ended up with two labs of PCs capable of running Overwatch 2, League of Legends and other esports titles, yet the labs themselves didn't look any different from our other IT labs. The higher-spec machines also had benefits beyond esports, in terms of the software they could run to support Computing, Art and other subjects. That said, when we later started looking at esports, and Rocket League in particular, at our prep school, we simply used the i5, 8GB PCs we already had, and this worked fine.

Small is good

Our upgrade work involved two labs because they were due for refresh anyway; all we were doing was increasing the cost a little for the higher-spec machines. There is, however, no need to go for a full lab. If you are looking at Rocket League, for example, it might be fine to have only three machines to run a team playing against other schools, or perhaps six machines so two internal teams can play off against each other. You can scale the equipment based on your available budget and the anticipated interest in your planned esports provision.

Beware updates

One thing that has snagged me a few times, usually after a holiday period, has been game updates. The students and I have rocked up ready for a bit of Overwatch 2, for example, following the Easter break, only to find each machine needs a 6 or 7GB update. Cue a wait before you can get a match started, and cue my network team asking what on earth is suddenly eating all of our internet bandwidth. It is therefore well worth planning to check and update games towards the end of holiday periods to reduce the risk. The game vendors might still release an update, but by keeping on top of things it will hopefully be a smaller rather than a cumulative update, and therefore a shorter delay.

Consumables

We haven't provided any fancy keyboards or mice, which may make us a little less competitive, but it means that where there is wear and tear we can replace items quickly. That said, I haven't seen significant issues with keyboards and mice; controllers, however, do seem to suffer, so budgeting for occasional replacement is well advised. With headsets, the key is to avoid going too cheap: spend a bit more on good headsets which, with careful treatment by students, are likely to last longer. I learned this lesson as an IT teacher years ago; short-term savings on cheaper headsets often end up more expensive in the long run.

Work across year groups

Initially, when I looked at esports, I focused very much on putting students in teams with their peers in the same age group and year group. In hindsight, I believe this was a mistake. I had some issues with low-level behaviour and with the engagement of some students. As soon as I put students together across year groups it worked much better, and I also think it pushed students to develop their communication and collaboration skills, given they were working with students both younger and older than themselves towards the common aim of winning their match. I would therefore recommend that any esports provision allows students to work across year groups, within reason.

Conclusion

FE colleges are doing some amazing things in relation to esports, often spurred on by offering esports BTECs as a programme of study. Schools lag behind, but the potential benefits are the same and the cost of getting involved is minor. You don't need a room painted black, with neon strip lights, expensive gaming chairs and £2,000+ PCs. All you need is a couple of PCs with the appropriate specification and you can get started. It was great to hear from The Lowry Academy and some of the other United Learning schools about their recent esports pilot and their Rocket League competition across four schools; the student enthusiasm was obvious for all to see. I can only hope that following this event more schools get involved, and I look forward to continuing to support the growth of esports in schools and the potential it has to offer.

Is doing more and efficiency our aim?

I have long been concerned by the "do more" and "be more efficient" narrative which seems to surround our everyday lives. We are constantly seeking to improve in all we do, which I think is a fair endeavour, but at what cost? This was recently brought into sharper focus as I started reading "Thank You for Being Late: An Optimist's Guide to Thriving in the Age of Accelerations" by Thomas L. Friedman, having found myself with an hour to spare while waiting to meet someone. I felt that bit more content and relaxed as I used the extra hour to start reading the book and to engage in a bit of people-watching, observing the world rush about its business. But are these opportunities to stop and reflect reducing in frequency and length?

Take teaching, for example; I qualified as a teacher back in the late 1990s. Looking at teaching now, there are so many more things to consider and to do, whether that relates to educational research, safeguarding, wellbeing, health and safety, neurodiversity, and much more. All of these things are important, but each is another thing to consider: additional cognitive load, or an additional process or task which needs to be completed. Is there extra resource, in terms of time or cognitive capacity, to undertake these things? The answer is no. We simply fold them into our everyday workload, which invariably means that although our efforts are getting better, we are also doing more than we ever did before.

Generative AI can help a little here, in that it can take on some of the heavy lifting and free up some time for us. This particular post was edited with the help of AI, although it wasn't initially drafted with AI; it is very much a brain dump of thoughts, and as yet AI solutions can't interface with the human brain, although that may become possible at some point. In editing it with AI, though, I was able to proofread and make changes quicker than I could have done myself, reducing the time taken to produce the post. The challenge, however, is that this all still exists against a backdrop of "do more", so the time I may have gained through the help of AI may simply be swallowed up by the next task I need to undertake to continue down the road of continual improvement. In effect, the net benefit of AI may be quickly nullified by our continued drive for efficiency and maximised output.

Circling back to teaching, this means that generative AI may benefit teachers for a short period, but eventually the benefits may simply dissolve in the face of ever-increasing requirements. Yet those benefits matter: the extra time might allow for greater teacher reflection on teaching practice, student learning and student outcomes; it might support greater networking and sharing of ideas; and it might support improved wellbeing for teachers, which I would suggest may result in better teaching, better student outcomes and better student wellbeing, as students see their teachers modelling good wellbeing practices. The time AI solutions provide might allow us to spend more time focusing on what it means to be human and on "human flourishing".

Maybe we need to question the "continual improvement" and "efficiency" narratives, in that they need to exist in balance and cannot be assumed to be the "right" path. In relation to continual improvement, I often refer to the MVP, the minimum viable product, and "good enough". In relation to efficiency, if I wanted to be more efficient maybe I should stop taking breaks or work through my lunch. We also need to consider decreasing marginal gains, and maybe that is where we are now: a lot of the improvements we are bringing about are minor, iterative improvements, but at a cost in cognitive load, time and other resources which may outweigh the resultant benefit. The extra effort required for each incremental change remains the same, yet the resulting gain shrinks with each change. There is also the challenge of complexity, where more complex processes or systems often bring a greater risk of failure or a greater reliance on particular people or tools. And I haven't even mentioned the speed of change, which the book I am reading refers to in its title, the "age of accelerations". All of this is happening quicker than ever before, which suggests the amount of time we have available to adapt to change is decreasing.

I don't have any answers here, so the purpose of this post is not to share a solution but to pose a question. I think I know the answer to the question, though not necessarily the answer to the problem it hints at; the best thing we can do is start to talk about it and consider it. So what is the question?

Can we keep adding to the things we need to think about, the processes and the complexity of our lives, or is there a limit?   

AI and general knowledge

I was recently musing on the benefits of general knowledge. A recent conference I attended included Prof Miles Berry, who talked about generative AI as being very well read. I had previously seen a figure of around 2,000 to 2,500 years quoted as the time it would take a human to read all of the content in the training data provided to GPT-3.5, which in my view makes it very well read indeed. So I got to wondering whether it is this broad base of knowledge which makes generative AI, or at least large language models, so potentially useful for us.
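
As a rough sanity check on that figure, here is a quick back-of-envelope sketch in Python. It assumes a corpus of roughly 300 billion tokens (about the scale reported for GPT-3; the GPT-3.5 figure is not public) and a reading speed of 250 words per minute. These numbers are assumptions for illustration only, not a reconstruction of wherever the quoted figure came from, but they land in the same ballpark.

```python
# Back-of-envelope: how long would it take a human to read an LLM's training data?
# All figures below are assumptions for illustration; the GPT-3.5 corpus size is not public.

tokens = 300e9           # assumed training tokens, roughly the scale reported for GPT-3
words = tokens * 0.75    # rough rule of thumb: ~0.75 English words per token
words_per_minute = 250   # typical adult reading speed

minutes = words / words_per_minute
years_nonstop = minutes / 60 / 24 / 365

print(f"{years_nonstop:,.0f} years of non-stop reading")
# ~1,700 years with these assumptions, the same order of magnitude
# as the 2,000-2,500 year figure quoted above.
```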

A doctor and AI

Consider, for instance, a medical practitioner. While their expertise lies in diagnosing and treating illnesses, along with their bedside manner and ability to interact with patients and other practitioners, their effectiveness as a healthcare professional hinges on a robust understanding of anatomy, physiology, pharmacology and medical ethics: domains that draw upon general knowledge. Similarly, an engineer relies on principles of mathematics, physics and materials science to design innovative solutions to complex problems. As professionals, we are required to study and learn from this broad body of knowledge through degree programmes and other qualification or certification requirements. But we are inherently human, which means that just because we learned something at some point, and successfully navigated a qualification or certification route, doesn't mean we will remember it or be able to access it at the point of need. If the medical practitioner uses AI to assist them initially, they will be drawing on a bigger knowledge base than a human is capable of consuming, and one that doesn't forget or fail to recall content learned at some point. The practitioner will still apply their own experience and knowledge to the resultant output, bringing the human touch needed to address the challenges of generative AI (bias, hallucinations, etc.), but the use of generative AI to assist would likely make diagnosis quicker and possibly more accurate.

My changing workflow

The above seems to align with my experience of the workflows I have recently changed to include generative AI. Previously, I might have known what I wanted to write and simply got on with writing it rather than turning to generative AI. Now I realise that, although I know my planned outcome, something which generative AI cannot truly know no matter how much I adjust and finesse my prompts, generative AI brings to the table a breadth of reading I will never be able to achieve. As such, asking generative AI is a great place to start. It will give you an answer to your prompt, drawing on a far bigger reservoir of knowledge than you can. You can then refine your prompt based on what you want to achieve, before doing the final edits yourself. It is this early use of generative AI which I think holds the main potential for us all. If we use generative AI early in our workflows we get to our endpoint quicker, and it also opens us up to thoughts and ideas we might never have considered, thanks to generative AI's broader general knowledge. I still put my own personal stamp on the content which is produced, hopefully making it unique to my personal style and personality, but AI provides me with assistance.
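
To make that workflow concrete, here is a minimal sketch assuming the OpenAI Python client and an illustrative model name; the prompts, function name and steps are hypothetical and simply mirror the "ask first, refine, then edit by hand" approach described above, rather than my exact process.

```python
# Minimal sketch of a "generative AI first" drafting workflow:
# 1) ask for a first draft, 2) refine with a more specific prompt, 3) edit by hand.
# Assumes the OpenAI Python client and an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's text response."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: a broad first pass, leaning on the model's breadth of "reading".
draft = ask("Outline a short blog post on the benefits of esports in schools.")

# Step 2: refine towards the outcome only the author really knows.
refined = ask(
    "Rewrite the following outline for a school leadership audience, "
    "in a conversational first-person tone, around 400 words:\n\n" + draft
)

# Step 3: the human edit happens outside the code; print and rework by hand.
print(refined)
```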

Challenges and Considerations

Despite its tremendous potential, the integration of generative AI into everyday life and specialised domains poses several challenges and considerations. Chief among these are concerns regarding the reliability and accuracy of AI-generated content, as well as issues of bias, ethics and privacy. I do, however, note that reliability, bias, ethics and privacy are not purely AI problems; they are human and societal issues, so if a human retains responsibility for checking and final decision-making, then the issue remains a human one rather than an AI one.

Conclusion

Generative AI stands as a transformative force in harnessing and disseminating general knowledge, giving individuals instant access to information, facilitating learning and comprehension, and augmenting domain-specific expertise. It provides a vast repository of knowledge acquired from its training data, which can be used to assist humans and augment their efforts. I note this piece itself was produced with the help of generative AI; some of the text and ideas contained herein are ones I may not have arrived at myself, and I doubt I would have completed the post quite so quickly. So, if AI provides a huge knowledge base, helps us get to our endpoint more quickly and opens up alternative lines of thinking, isn't this a good thing?

For education, though, I suspect the big challenge will be how much of the resultant work is the student's and how much is the generative AI platform's. I wonder, though: if the requirement is to produce a given piece of work, does this matter, and if AI helps us get there quicker, do we simply need to expect more and better in a world of generative AI?

I suspect another challenge, which may be for a future post, is the fact that generative AI is a statistical inference model and doesn't "know" anything, so is it as well read as I have made out? Can you be well read without understanding? What does it mean to "know" or "understand" something, and could it be that our own knowledge is just statistical inference based on experience? On that rather deep question, I will leave this post here for now.

Google Discovery Day

I was lucky enough, thanks to a kind invitation from Gemma Gwilliam, a colleague from the Digital Futures Group (DFG), to join staff from several Portsmouth schools on a visit to the Google offices in London. My school largely uses Microsoft, although I have used Google as the primary platform in previous schools I have worked in. For me, the focus for all schools should be on using the best tool for the job, and that may involve using Google and Microsoft tools at different times and for different jobs. In this post I would like to share a couple of my key takeaways from the event.

Accessibility

This was definitely one of the key areas of the event, with discussion of the various gaps that exist within education, whether academic performance gaps or digital gaps. The gap related to disadvantaged students in particular was discussed, but gaps in accessibility related to special educational needs and disabilities were also raised, including through a visit to the Google Accessibility Discovery Centre (ADC). A theme running through the discussions and sessions was that technology, including Google technology, has real potential to help narrow these gaps, but this presents a bit of a paradox, as we would first need to address the gap in access to reliable infrastructure, devices, support, etc.

Artificial Intelligence (AI)

Unsurprisingly, AI was on the list of discussion points, and I was really happy to hear some of the same messages I have been sharing reiterated. I particularly liked the example used to explain how a generative AI solution works: as humans, when given a question, we use the information we have absorbed to predict the answer, and a generative AI solution isn't that much different. I also liked the comment that "hallucination" is a term we should avoid. My concern has always been that it anthropomorphises genAI solutions, whereas on this occasion the point was that the solution is simply providing an answer we didn't expect or which is wrong; would we want our students claiming they had simply hallucinated, or is a wrong answer a wrong answer? The key message was that AI will increasingly make its way into our daily workflows, and the suggestion was that for many of us it will simply appear in the products we already use and therefore be almost transparent to us. This rings true, as we have been using AI for a while in our spellcheckers, in recommendation features on Amazon and Netflix and in our search engines, yet have never really identified it as AI rather than simply how the platform works.
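
To illustrate the "predicting from absorbed information" idea, here is a deliberately tiny sketch using a made-up three-sentence corpus. Real large language models use neural networks trained on vast datasets, but the underlying notion of predicting the next word from previously seen patterns is the same.

```python
# Toy illustration of "predicting the next word from absorbed text".
# Real LLMs use neural networks over vast corpora; this bigram counter just
# shows the underlying idea of prediction from previously seen patterns.
from collections import Counter, defaultdict

corpus = (
    "esports builds teamwork and resilience . "
    "esports builds communication skills . "
    "esports supports student engagement ."
).split()

# Count which word follows which.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed word after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("esports"))  # -> "builds" (seen twice, vs once for "supports")
print(predict_next("builds"))   # -> "teamwork" (first of the equally common options)
```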

Networks and sharing

One of the key takeaways from this event, as with so many other events I have attended, is the power of a group of people sharing. We might not all operate in the same school context, or hold the same views, but by sharing ideas, successes and failures together, we are all collectively better for it. David Weinberger's quote continues to be my go-to: "The smartest person in the room is the room". The more we share, come together and discuss, accepting disagreement as much as agreement, being brave and encouraging diverse people and views, the better we all are.

Context is king

One of the other points which really stuck with me was in a presentation about educational research. The key warning which chimed with me concerned people who claim that "research says…". I have heard this so often, yet the reality is that most research is limited in scope and only suggestive in terms of context, impact, application, etc. That's not to discount research, as educational research is very important, but we mustn't lose sight of the importance of context, and of how something that succeeded or failed in one context may do the absolute opposite in a different one. Education is simply too complex, with too many moving parts (the students, the teachers, the parents, the school and many more variables), which means that research can be very helpful but will never provide simple cause and effect. It is a great guide and provider of direction, but never an out-and-out proof of what will work across all schools, students, etc.

Conclusion

I very much enjoyed the event and feel I took quite a bit from it. My day-to-day largely involves Microsoft, but I try to avoid referring to my school as a Microsoft school. We seek to use the tools which have the best impact, so it was great to see and hear what Google have to offer, and there is definitely a lot they can offer. An opportunity to network with staff from other schools and contexts is always valuable, too. This, I suppose, is why I believe so strongly in the Digital Futures Group which Gemma and I are part of, and without which I am not sure this opportunity would have arisen for me. The more networks like this that exist the better, and hopefully the DFG will help show some of the potential impact and point the way for others looking to set up similar networks.