
Coursework continues to be a significant part of qualifications, whether GCSEs, A-Levels, or vocational qualifications such as BTecs. In BTecs, coursework is the main assessment methodology, and this hasn’t changed much since I had a hand in writing some BTec units and acting as a standards verifier. The world around these qualifications, though, has changed, particularly with the availability of generative AI. So how do schools manage the use of AI by students, the requirements of examining bodies, and the ethical need to ensure fairness in marking and assessment?
Firstly, let’s just accept that students are using AI. This is a statement which I myself have made and which I have heard others make. The challenge is that we are often referring to ChatGPT, Gemini, Claude and the like, and to things post-November 2022. The reality is that students were using AI before then: they were using spellcheckers, they were using grammar checkers, and they were using Google for searches. Each of these involves AI. AI isn’t new, so let’s dispense with the concern regarding students using AI to cheat.
A student’s “own” work
So, when looking at coursework or NEAs (non-examination assessments), JCQ states that the work “an individual candidate submits for assessment is their own”. At face value this makes sense, but what constitutes the student’s “own” work? This blog piece, for example, has seen AI highlight spelling errors which I have since corrected, plus I have had alternative sentence structures and grammatical changes recommended, with AI behind these recommendations. With these changes, is it still my own work? In this case I am writing directly from my thoughts rather than to a structure, but if I had asked AI for help with the structure of the piece before writing, would it still be mine? Having completed it, I posted it on my site, but I could have fed it into AI for feedback and suggested improvements; would the resultant blog post still be mine? And how is this use of generative AI different from using the spellchecker, grammar checker and editor built into Word? In all cases the result is a piece of work which isn’t what I originally typed, but is likely better.
Referencing: Why bother?
JCQ mentions that candidates must not “use the internet without acknowledgment or attribution”. Again, on face value this seems fair, but what about spellcheckers and grammar checkers? In all my years I have never seen anyone reference Microsoft’s or Google’s spelling and grammar checkers, yet I am pretty sure they have almost always been used. So why might Grammarly or ChatGPT, or even the Editor in MS Word, be different?
And if we accept that students are using spellcheckers, grammar checkers and almost certainly generative AI tools, surely they just end up noting that they are using them, which doesn’t seem to help from an assessor’s point of view. With a traditional reference to a book, an assessor could at least go and look it up, but when a student uses generative AI, exactly how do I cross-reference this? And if I can’t, what is the value in the reference, especially if almost every student simply states they made some use of AI, including generative AI?
Coursework: A proxy for learning
The challenge here is that we are using coursework as a proxy for testing a student’s learning, their knowledge and understanding. It used to be that a piece of coursework was a good way to do this; then we got Google. We then needed to check for unusual language and the like, and use Google itself to try to prove where students had plagiarised. More recently we have generative AI, and things are more difficult still. We can no longer use Google to check the document for plagiarism, and don’t get me started on AI detection solutions, as they simply don’t work.
Maybe, therefore, we need to go back to basics and, if in doubt, speak to the student. If we are unsure of the proxy, of coursework, then we need to find another way to cross-check or to assess. This could be a viva, asking students to explain what they meant within sections of their coursework, asking them to provide examples, or asking them to present rather than write their coursework. In each case we get to assess the student’s confidence, body language, fluency and so on in relation to the topic being assessed, rather than just what they have written down. So maybe, rather than seeking to block or detect AI use, we need to accept that we must find new ways to assess.
A way forward?
A key starting point with students, in my view, is education. Students need to know what AI is and understand what is acceptable in terms of AI use. They need to understand the difference between using AI tools as an aid, such as spellcheckers, grammar checkers and even generative AI, versus using them to do the work. It might be fair to get help with my work in eliminating spelling errors. It might also be fair to get help in better structuring my thoughts or my written words. But it isn’t fair if the AI writes the piece of work for me and I just present it as my own, with no real effort on my part and no real sense of my views in what is produced. I suppose it’s a bit like discussing the work with a friend: if we discuss the work and that leads to a better result produced by me, then this is good; but if my friend does the work for me, then it isn’t. Sadly, things are a little more nuanced than that, so how much help is acceptable?
The challenge with the above is that some students will use AI correctly and some will, for various reasons, use it incorrectly or even dishonestly. How will we tell? I suspect some of this comes down to professional judgement and knowing our students, and some to audit-tracking tools such as version history. That said, I think the easiest way for us to tell is to get to the root of learning and ask students to explain what they have submitted, or at least part of it. If it’s a good piece of work and they can explain it, then clearly they have learned the content and the work is representative. If it’s a good piece of work and they can’t explain it, then it isn’t, and they shouldn’t get credit.