Exam Results 2024

It’s the start of the exam results period this week with the release of the Cambridge International results ahead of the A-Level exam results on Thursday and GCSE results the following week.   The pressure on teenagers to perform well in exams such as GCSEs and A-Levels is immense. Schools, parents, and society place great emphasis on achieving high grades, often portraying these results as the ultimate determinant of a young person’s future success. However, while exam results are undeniably important, they are not the be-all and end-all of a teenager’s life. There are numerous other factors that contribute to personal development, mental well-being, and long-term success that deserve equal, if not more, attention.

The Limitations of Exam Results

Firstly, it’s crucial to recognize the limitations of exam results. Exams typically measure a specific type of intelligence, one which revolves around memorization, understanding of theoretical concepts, and the ability to perform under pressure within a limited time frame. However, intelligence is multifaceted. Skills such as creativity, emotional intelligence, problem-solving, leadership, and the ability to work well in teams are not measured by traditional exams but are equally vital in both personal and professional contexts. I would also note that sitting school and college exams in a hall with hundreds of other students is far from ideal for many, and is not really representative of the kinds of experiences students will encounter in life beyond formal education. There is also research suggesting that the environmental conditions in exam halls, in terms of heat, CO2 and so on, are simply not conducive to maximum performance.

Mental Health and Well-Being

This intense focus on exam results can have detrimental effects on a teenager’s mental health. The pressure to achieve high grades can lead to anxiety, depression and burnout, yet mental well-being is a critical component of a teenager’s overall development. Teens need time to relax, explore their interests, and build relationships; these aspects of life help develop a balanced and healthy individual. I know I put too much pressure on myself in relation to exams when I was younger, which led to mental health difficulties when my Scottish Higher results came in below what I had expected. Those results were not the end of me, as I felt they were at the time; they actually spurred me on and taught me a valuable lesson about the unpredictable nature of the future, and how so-called life-defining moments, such as exam results day, are simply a stop on a much longer life journey.

The false narrative

The focus on exam results, as will be evident in the press over the coming week or so, perpetuates a false narrative of winners and losers, where students are unfairly categorized based on their results. This binary perspective suggests that those who achieve high grades are destined for success, and are “successful”, while those who do not make these grades are not. Such a narrative overlooks the diverse range of talents, skills and potential that exists beyond the confines of standardized testing. It also overlooks the multitude of factors which might impact a student’s exam performance. Success, or failure, in my view, is determined by the longer picture of your life, your contributions, your memories and efforts; exam results are only a small part of that in the longer term.

Conclusion

Life is a journey rich with diverse experiences, and exams are just one of many milestones along the way. Whether a pass or a fail, exams do not define a person’s entire future; they are simply a moment in time, a stepping stone on a much larger path. For those getting their results this week and next, know that they are a small part of your life. If you do well, celebrate, and then move on to the next stage in your life. If you don’t do well, as defined by your own expectations rather than by the press or others, learn from it. It may be that exams weren’t for you, that your performance was impacted by external issues, that the courses didn’t fit your interests, that you didn’t put in enough effort, or many other factors. That’s fine, so pick your next steps and move forward. I know I did, and with that I will leave one last thought: I learned from my disappointing Scottish S5 Higher results and worked harder to get to university in S6, but it was many years later before I learned about the need for balance between study or work and our wider lives. How do you balance work or study against the wider need for fun, enjoyment, experiences and human flourishing? As your exam results come and go, I would suggest this is something well worth considering.

2023 Exam Results: A prediction

And so exam results day once again approaches, and I would like to share a psychic prediction: that the newspapers will be filled with headlines about how A-Level results have fallen when compared with last year.

Ok, so it isn’t so much psychic as based on what we know about the UK exams system. We know that each year the grade boundaries are adjusted, and that the pre-pandemic trend was for grades generally to increase year on year. These ever-increasing grades weren’t necessarily the result of improving educational standards or brighter students, although both of these may or may not be the case; they were the result of a decision taken when setting the grade boundaries. With the student exam scores available, the setting of the grade boundaries decided how many students would get an A*, an A, etc., and therefore the headline results. It’s a bit like the old goal-seek lessons I used to teach in relation to spreadsheets: using Excel, I could ask what input values I would need to provide in order to attain a given result. So, looking at exam results, what grade boundaries would I need to set in order to maintain the ever-increasing grades while also avoiding the appearance of grade inflation or other manipulation of the results? I note that within the generally increasing grades across all subjects, some subjects showed more improvement than others, and some showed dips, but summed across all subjects the results tended to improve year on year.
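The goal-seek idea can be sketched in a few lines of Python. This is purely an illustration, not how exam boards actually award grades, and all the marks and target percentages here are invented; but it shows how choosing the boundary determines the headline percentage, rather than the other way round.

```python
# A minimal sketch of the "goal seek" idea: given the raw marks,
# choose the grade boundary that produces a target headline percentage.
# All numbers are invented for illustration; real awarding is far more
# involved than this.

def boundary_for_target(marks, target_pct):
    """Lowest mark such that roughly target_pct of candidates reach it."""
    ranked = sorted(marks, reverse=True)
    cutoff = round(len(ranked) * target_pct / 100)
    return ranked[max(cutoff - 1, 0)]

marks = [41, 55, 62, 70, 73, 77, 81, 84, 88, 93]
print(boundary_for_target(marks, 30))  # boundary of 84 gives ~30% top grades
print(boundary_for_target(marks, 50))  # lower it to 77 and ~50% get the grade
```

The same set of marks yields whatever headline percentage the boundary-setter chooses, which is the point being made: the boundary decision, not just the cohort’s performance, shapes the reported results.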

And then we hit the pandemic, teacher assessed grades, and the outcry about how an algorithm was adjusting teacher awarded grades into the final grades students achieved. Students and parents were rightly outraged, and this system of adjustment was dropped. But how is this much different from the adjustment of the grade boundaries described above? The answer is quite simply that teachers, and often students and parents, were aware of the teacher assessed grades and could therefore quantifiably see the adjustment when compared against the awarded grade. With the pre-pandemic exams, teachers, students and parents had no visibility of what a student’s grade might have been before adjustments were made to the grade boundaries; they simply saw the adjusted score and the adjusted final grade. Now, I note that a large part of the outrage related to how the grade adjustment appeared to impact some schools, areas or other demographics of students more than others; however, I would suggest this is also the case when the grade boundaries are set or adjusted, albeit the impact is less obvious, transparent or well known.

So, we now head into the exam results following the period of teacher assessed grades, with students back doing in-person exams. Looking at this from an exam board level, and reading the press as it was after the 2022 exam results, we know that a larger than normal increase was reported over the teacher assessed grade years, with this being put down to teacher assessed grades versus the normal terminal exams. As such, I would predict that the grade boundaries will be set in such a way as to make a correction. I predict the boundaries will therefore be set to push exam results downwards, although it is unclear by how much. It may be that the results are reduced only slightly, to avoid too much negative press, or a more significant correction may be enforced on the basis that it can easily be explained by the previous teacher assessed grades, plus the lack of proper exam experience among the students who sat their A-Level exams this time; remember, these students missed out on GCSE exams due to the pandemic.

Conclusion

My prediction is that the exam results stats will be lower than last year, not necessarily because students did worse, but due to a decision that the results should be worse given last year’s apparently more generous results, plus the fact that these particular students have less exam experience than pre-pandemic cohorts. I suspect my prediction is all but guaranteed, but an interesting question from all of this has to be: is this system fair? I believe the answer is no, although I am not sure I can currently identify a necessarily fairer system. But in seeking a better system, the first step is to recognize that the current system isn’t necessarily fair.

And one final thought, for those students getting their results: all I can simply say is very well done! This was the culmination of years’ worth of study and effort, during a period of great upheaval the world over, unlike anything in my or your lifetime to date. No matter the grades, you did well to get through it. The grades, whatever they are, do not define you; your effort, your resilience and what you decide to do next, your journey, are what really matter. Well done and all the very best for the future!

Delaying exams: why?

So, a research study has arrived at the conclusion that, due to Covid-19, students may be three months behind in their studies. The delaying of exams to allow students more time to catch up has also been discussed. This all seems like rather simplistic thinking.

There are for me a number of issues with delaying the exams.

The first is that we already accept that exams differ each year, and there is therefore already tinkering in place to adjust the grade boundaries to keep some consistency across academic years in the statistical outcomes of students in general. This is why the results show small but steady changes year on year rather than being more volatile. It seems to me fairly easy to adjust this process to normalise next year’s exam results should they be, as would be expected, lower than in previous years, and should it be important to maintain parity in results across calendar years. This statistical fiddle would also be more acceptable than the algorithm proposed for the 2020 results, as it doesn’t differ from the statistical adjustments made to GCSE and A-Level results in 2019, 2018, and so on.

Another issue, if we were to delay the exams, is that the delay simply knocks on to the following years. Delaying the GCSE exams would mean teachers losing teaching time they would likely use to start A-Level studies, or to start Year 13 teaching of A-Level subjects following Year 12 exams. As such it doesn’t solve the issue, but rather displaces it. Is the focus not on learning rather than on measuring learning? How, then, can any solution with a knock-on effect on teaching and learning be acceptable?

Also, the point students should be at by the end of each academic year was arbitrarily determined. At some point the curriculum for each subject was developed and the content decided for each year or stage; however, it could just as easily have been decided that more or less content be included. Why, therefore, is the point students should reach perceived to be so immovable? Why not simply reduce the content for the year based on the reduced time available to students? Surely this is an alternative option.

There is also the point that next year’s results will be compared with this year’s results, where it has already been reported that this year’s results were significantly up. This obviously resulted from the use of centre assessed grades, provided by teachers, without any of the normal annual statistical manipulation of grade boundaries. This comparison is unavoidable. So, despite any delay, there is still a high likelihood of negative reporting in the press regarding the 2021 results, with knock-ons in terms of students and parents being disappointed.

This brings us nicely to the big question I have seen a number of people ask: three months behind who, or what? Is it three months behind where teachers think they would have been had Covid-19 not arisen? A prediction based on a prediction doesn’t provide me with much confidence as to its statistical reliability. Is it three months behind in terms of curriculum content covered at the predicted rate at which content is covered? Again this suffers, given it relies on a predicted rate of coverage of materials; and could the content not possibly be covered at a faster rate but in less depth?

Maybe this issue is an opportunity to reassess our assumptions and to question our current approach to education and how it is assessed. Or are we simply going to accept that this is the way things are done around here, and that any changes should be limited and aimed only at maintaining the status quo? I believe we have reached a fork in the road; however, I worry that we may take the route which looks easier.

A-Level results and football: Another enlightening analysis

Now that the A-Level and GCSE results are out, the usual sets of analysis and observations based on the data have started making an appearance. As usual, causal explanations have been developed to explain the data, using what Nassim Taleb described as the backwards process: the resulting judgments have been established to fit the available data without any consideration of the data which is not available.

The perfect example is an article in the Guardian (Wales A-Level results raise concerns pupils falling behind rest of UK, Richard Adams, Aug 2016) discussing the A-Level results in Wales as compared with the results in England. The percentage of entries achieving A* and A dropped in England “only slightly to 25.8%”, while in Wales it “fell more steeply to 22.7%”. The causal explanation apparently arrived at by one “expert” was that boys had been “possibly distracted by the national football team’s success at Euro 2016”. This fails to consider the total number of entries in England as compared with Wales; I suspect Wales has fewer entries, resulting in increased variability in Welsh results versus English results. The data also fails to include any information about the students’ GCSE results. Had the Welsh students achieved lower GCSE results than their English counterparts, their overall lower level of achievement could amount to “better” results given their lower starting point as measured by GCSEs.
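The cohort-size point can be illustrated with a quick simulation. All the figures here are assumptions for the sake of the example: if every entry, Welsh or English, has an identical 26% chance of a top grade, the headline percentage still swings far more from year to year for a small cohort than for a large one.

```python
# A rough simulation of the cohort-size point: with an identical chance
# of a top grade (26% here, an assumed figure), the headline percentage
# varies far more year to year for a small cohort than a large one.
import random
import statistics

def headline_swing(entries, p=0.26, years=200, seed=1):
    """Standard deviation of the simulated yearly top-grade percentage."""
    rng = random.Random(seed)
    yearly = []
    for _ in range(years):
        top = sum(rng.random() < p for _ in range(entries))
        yearly.append(100 * top / entries)
    return statistics.stdev(yearly)

print(round(headline_swing(500), 2))    # small cohort: larger swings
print(round(headline_swing(20000), 2))  # large cohort: smaller swings
```

The cohort sizes are invented, but the effect is general: a drop of a percentage point or two in a small cohort can be ordinary sampling noise rather than evidence of footballing distraction.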

Another possible conclusion, which is easy for me to draw as a Scotsman and most likely more difficult for an Englishman, is that the data shows something which wasn’t related to the Welsh football performance at all.    The English A-Level results could be better due to English students throwing themselves into their work following England’s poor showing during Euro 2016.  It’s the same data but a different conclusion which has been generated and made to fit the data available without any consideration for the data which isn’t available.

Having considered this issue further, I think I am now more inclined than ever to agree with Taleb’s comments regarding the importance of the unread books in a library rather than the read ones. Taleb discusses how a home library filled with read books gives a person the illusion of knowledge; the person has read it all. A library filled largely with unread books, however, makes clear all that we do not yet know and have not read. Reading each of these commentaries and analyses of the A-Level data isn’t making me more informed or more educated; in fact it may be blinding me to the “true” facts or to other possibilities. I think, therefore, that this will be my last post moaning about “expert” analysis of results, as from now on I need to stop reading the analysis in the first place!


Some thoughts on GCSE and A-Level results

Having read various articles following the recent A-Level and GCSE results, I can’t help but think that schools, and more importantly education in general, need to decide what we are seeking to achieve, and stop acting reactively to limited data which has been used to draw generalized conclusions.

Take for example the shortage of STEM graduates and students. This was, and still is, billed as a big issue, and it has resulted in a focus on STEM subjects in schools. More recently there has been a specific focus on computer programming and coding within schools. One recent article acknowledged that the number of students taking A-Level Computing had “increased by 56% since 2011” (The STEM skills gap on the road to closing, Nichola Ismail, Aug 2016). This appears to suggest some positive movement; however, in another article, poor A-Level ICT results were cited as a cause for concern for the UK tech industry (A Level Results raise concern for UK tech industry, Eleanor Burns, Aug 2016). Now, I acknowledge this data is limited, as ideally I would need to know whether ICT uptake has been increasing and whether A-Level Computing results declined; however, it starts to paint a picture.

Adding to this picture is an article from the Guardian discussing entries:

Arts subjects such as drama and music tumbled in terms of entries, and English was down 5%. But it was the steep decline in entries for French, down by 6.5% on the year, as well as German and Spanish, that set off alarm bells over the poor state of language teaching and take-up in Britain’s schools.

(Pupils shun English and physics A-Levels as numbers with highest grades falls, Richard Adams, Aug 2016)

So we want entries for STEM subjects to increase, and they seem to be increasing for computing, but we don’t want modern languages entries to fall. Will this mean that next year there will be a focus on encouraging students to take modern foreign languages? And if so, and this results in the STEM numbers going down, will we then re-focus once more on STEM subjects until another subject shows signs of suffering?

It gets even more complex when a third article raises the issue of Music A-Level entries, which “dropped by 8.8% in a single year from 2015 and 2016” (We stand back and allow the decline of Music and the Arts at our peril, Alun Jones, Aug 2016). Drama entries are also shown to have decreased this year (Don’t tell people with A-Levels and BTecs they have lots of options, Jonathan Simons, Aug 2016). So where should our focus lie? Should it be on STEM subjects, foreign languages, drama or music?

I suspect that further research would turn up further articles raising concerns about still more subjects, whether in the entries or the results. Can we divide our focus across all areas, or is there a particular area, such as STEM subjects, which is more worthy of focus? Do the areas for focus change from year to year?

As I write this my mind drifts to the book I am currently reading, Nassim Taleb’s The Black Swan, and to Taleb’s snooker analogy about variability. We may be able to predict a single snooker shot with a reasonable level of accuracy; however, as we try to predict further ahead we need more and more data. By the time we are predicting five shots ahead, the quality of the surface of the table, the balls, the cue, the environmental conditions in the room, etc. all start to matter more and more, and our ability to predict becomes less and less accurate. Taking this analogy and looking at schools, what chance do we have of predicting the future and what the UK or the world will need from our young adults? How can we predict the future requirements of hundreds of thousands of students across thousands of schools, studying a variety of subjects with a number of different examining bodies, in geographical locations across the UK and beyond?
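The snooker analogy can be sketched numerically. Assuming, purely as an invented model, that each collision amplifies a tiny initial uncertainty by a fixed factor (3 here, a made-up number), predictability collapses within a handful of shots:

```python
# An invented model of Taleb's snooker analogy: each collision
# multiplies a small initial uncertainty by a fixed factor (both
# figures assumed), so prediction error explodes with shots ahead.

def uncertainty_after(shots, initial=0.01, amplification=3.0):
    """Position uncertainty after a given number of collisions."""
    return initial * amplification ** shots

for shots in (1, 3, 6, 9):
    print(shots, round(uncertainty_after(shots), 2))
```

The same compounding, I would suggest, applies to forecasting what the world will need from this year’s students a decade from now.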

These generalisations of the data are subject to too much variability to be useful. We should each focus on our own schools: by reducing the scope we reduce the variability and increase the accuracy. We also allow the context to be considered, as individual school leaders may know the significant events which may have impacted the results of their cohort, individual classes or even individual students. These wide-scale general statements about the issues, as I have mentioned in a number of previous postings, are of little use to anyone. Well, anyone other than editors wishing to fill a space in a newspaper or news website.