AI and Coursework

Coursework continues to be a significant part of qualifications, whether GCSEs, A-Levels, or vocational qualifications like BTecs. In BTecs coursework is the main assessment method, and this hasn't changed much since I had a hand in writing some BTec units and acting as a standards verifier. The world around these qualifications, though, has changed, particularly with the availability of generative AI. So how do schools manage the use of AI by students, the requirements of examining bodies, and the ethical need to ensure fairness in marking and assessment?

Firstly, let's just accept that students are using AI. This is a statement which I myself have made and which I have heard others make. The challenge is that we are often referring to ChatGPT, Gemini, Claude and the like, and to the period post November 2022. The reality is that students were using AI before that. They were using spellcheckers, they were using grammar checkers, and they were using Google for searches. Each of these involves AI. AI isn't new, so let's dispense with the concern regarding students using AI to cheat.

A student's "own" work

So, when looking at coursework or NEAs (non-examination assessments), JCQ states that the work "an individual candidate submits for assessment is their own". At face value this makes sense, but what constitutes the student's "own" work? This blog piece, for example, has seen AI highlight spelling errors which I have since corrected, plus I have had alternative sentence structures and grammatical changes recommended, with AI behind these recommendations. With these changes, is it still my own work? In this case I am writing directly from my thoughts rather than to a structure, but if I had asked AI for help with the structure of the blog piece before writing, would it still be mine? Having completed it, I posted it on my site, but I could have fed it into AI for feedback and suggested improvements; would the resultant blog post still be mine? And how is this use of generative AI different from using the spellchecker, grammar checker and editor built into Word? In all cases the result is a piece of work which isn't what I originally typed, but is likely better.

Referencing: Why bother?

JCQ mentions that candidates must not "use the internet without acknowledgment or attribution". Again, on face value this seems fair, but what about spellcheckers and grammar checkers? In all my years I have never seen anyone reference Microsoft's or Google's spelling and grammar checkers, yet I am pretty sure they have almost always been used. So why might Grammarly or ChatGPT, or even the Editor in MS Word, be different?

And if we accept that students are using spellcheckers, grammar checkers and almost certainly generative AI tools, surely they just end up noting that they are using them, which doesn't seem to help from an assessor's point of view. With a traditional reference to a book an assessor could at least go and look it up, but when a student uses generative AI, exactly how do I cross-reference this? And if I can't, what is the value in the reference, especially if almost every student states they made some use of AI, including generative AI?

Coursework: A proxy for learning

The challenge here is that we are using coursework as a proxy for testing a student's learning, their knowledge and understanding. It used to be that a piece of coursework was a good way to do this; then we got Google. We then needed to check for unusual language and use Google itself to try to prove where students had plagiarised. More recently we have generative AI, and things are more difficult still. We can no longer simply use Google to check a document for plagiarism, and don't get me started on AI detection solutions, as they simply don't work.

Maybe, therefore, we need to go back to basics and, if in doubt, speak to the student. If we are unsure of the proxy, of coursework, then we need to find another way to cross-check or to assess. This could be a viva, asking students to explain what they meant in sections of their coursework, or asking them to provide examples, or we could ask them to present rather than write their coursework. In each case we get to assess the student's confidence, body language, fluency, etc., in relation to the topic being assessed, rather than just what they have written down. So maybe, rather than seeking to block or detect AI use, we need to accept that we need to find new ways to assess.

A way forward?

A key starting point, in my view, is education. Students need to know what AI is and understand what is acceptable in terms of AI use. They need to understand the difference between using AI tools as an aid, such as spellcheckers, grammar checkers and even generative AI, versus using them to do the work. It might be fair to get help with my work in eliminating spelling errors. It might also be fair to get help in better structuring my thoughts or my written words. But it isn't fair if the AI writes the piece of work for me and I just present it as my own, with no real effort on my part and no real sense of my views in what is produced. I suppose it's a bit like discussing the work with a friend: if we discuss the work and that leads to a better result produced by me, then this is good, but if my friend does the work for me, then it isn't. Sadly, things are a little more nuanced than that, so how much help is acceptable?

The challenge with the above is that some students will use AI correctly and some will, for various reasons, use it incorrectly or even dishonestly. How will we tell? I suspect some of this is down to professional judgement and knowing our students, and some is down to audit tools such as version history. That said, I think the easiest way for us to tell is to get to the root of learning and ask students to explain what they have submitted, or at least part of it. If it's a good piece of work and they can explain it, then clearly they have learned the content and the work is representative. If it's a good piece of work and they can't explain it, then it isn't, and they shouldn't get credit.

Technology and Exam boards: Time to modernise?

I recently received a request from a teacher in relation to getting some software installed on their school device to support them in marking for an exam board. Now, I know this isn't part of their school role; however, having been a standards moderator in the past, I understand the benefits to schools and colleges of having markers or moderators within teaching departments. I am therefore eager to enable staff by supporting such requests. This request, however, involved a piece of software which, according to the exam board, requires admin rights to the laptop, both for installation and for the operation of the application. When the concern regarding cyber security was raised, the exam board's final reply was that the staff member should install the software on a personal rather than a school laptop. This got me thinking about how technology has changed but how exam boards have been slow to change, something all the more evident currently; just look at the advances in Large Language Models (LLMs) such as ChatGPT over the last six months.

Traditionally, examination boards have relied on paper-based tests and manual grading systems. However, these methods have several drawbacks, including the potential for errors and delays in results processing. One way examination boards could modernise is by moving towards computer-based testing, which allows for faster and more accurate grading, as well as the ability to customise exams to the specific needs of each student. I very much believe that adaptive testing is the way forward, with this also enabling students to take exams in their own time, when they are ready, as opposed to at a set time alongside all other students. Adaptive testing also supports students taking their tests anywhere, including at home, rather than being crammed into a large exam hall where the conditions are not exactly designed for optimum student performance. Additionally, the results would be available much more quickly, reducing the stress associated with a long wait between the exams and the results being released. There is also the potential reduction in the amount of paper used in exams, and in the transporting of those papers, which may help make the exam process more environmentally friendly.
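The core idea of adaptive testing can be sketched very simply: after each answer, the difficulty of the next question moves towards the candidate's apparent ability level. The following is a minimal illustration only, with a hypothetical difficulty scale and step size, not any exam board's actual system:

```python
# Minimal sketch of computerised adaptive testing: each correct answer
# raises the difficulty of the next question, each wrong answer lowers it.
# The 1-10 scale and step size of 1 are illustrative assumptions.

def next_difficulty(current, correct, step=1, lo=1, hi=10):
    """Move difficulty up after a correct answer, down after a wrong one,
    clamped to the available range of the question bank."""
    proposed = current + step if correct else current - step
    return max(lo, min(hi, proposed))

def run_adaptive_test(answers, start=5):
    """Trace the difficulty level across a sequence of answers
    (True = correct, False = incorrect)."""
    difficulty = start
    trail = [difficulty]
    for correct in answers:
        difficulty = next_difficulty(difficulty, correct)
        trail.append(difficulty)
    return trail

print(run_adaptive_test([True, True, False, True]))  # [5, 6, 7, 6, 7]
```

Real systems such as those based on item response theory estimate ability statistically rather than stepping up and down, but the principle is the same: the test converges on the level of each individual student rather than presenting every candidate with an identical paper.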

Another way examination boards can modernise is by utilising artificial intelligence (AI) in the grading process. This appears all the more relevant at the moment given developments in LLMs like ChatGPT. AI-powered grading systems can quickly and accurately grade exams, allowing for quicker results processing and reducing the potential for errors. AI can also analyse student performance data to provide insights into areas where students may need additional support and guidance. Now, I note here that the use of AI may introduce new errors into the marking process; however, I would suggest that the volume or magnitude of these errors, when compared with human marking, is likely to be lower. It isn't "the solution" to errors, but it is definitely a step in the right direction.

Related to the above, exam boards need to acknowledge the existence of AI and LLMs and the fact that they will become an increasing part of life, and therefore a tool which students will increasingly use in their studies, be it for revision, to help develop critical thought, or for creating coursework or other learning content. So far only the IB (International Baccalaureate) has really acknowledged ChatGPT and how it sees it impacting its courses, providing at least some steer for schools on what appropriate or inappropriate usage might look like, and at least some direction for schools and teachers in managing these new technologies.

Moreover, examination boards can use technology to improve exam security. Online proctoring tools can help ensure that students take exams in a secure and controlled environment, preventing cheating and other forms of academic dishonesty. Related to this, I have seen exam boards continuing to send out resources on CDs or USB drives, or requesting student video or audio work in similar formats. It is about time they provided appropriate online portals to allow the quick, efficient and secure transfer of such exam and coursework data.

Finally, examination boards can use technology to make their exams more accessible to students with disabilities or special needs. For example, screen readers, text-to-speech software, and other assistive technologies can help students with visual or hearing impairments to take exams on an equal footing with their peers. This is already happening for a subset of students; however, I suspect boards will eventually need to acknowledge that all students are individuals with differing learning preferences, including in the devices and online tools they use. In classrooms, teachers support students using a range of tools and techniques, so it is only right to seek to support the same in the final exams which are, at least for now, viewed as so critical in a student's formal education. As such, examination boards will need to adapt.

Conclusion

Technology has the potential to revolutionise the examination process, making it more efficient, accurate, and accessible. Examination boards must embrace these technological advancements to ensure that their exams are of the highest quality and that students receive accurate and timely results. By doing so, they can help prepare the next generation of students for success in a rapidly changing digital world.

And at a time when the pace of technology, particularly in relation to artificial intelligence, has never been faster, exam boards will need to significantly increase their agility and their ability to adapt to and embrace change.