A-Level results and football: Another enlightening analysis

Now that the A-Level and GCSE results are out, the usual sets of analysis and observations based on the data have started making an appearance.   As usual, causal explanations have been developed to explain the data, using what Nassim Taleb described as the backwards process: judgments constructed to fit the available data, without any consideration for the data which is not available.

The perfect example is an article in the Guardian (Wales A-Level results raise concerns pupils falling behind rest of UK, Richard Adams, Aug 2016) discussing the A-Level results in Wales as compared with the results in England.   The percentage of entries achieving A* and A dropped in England “only slightly to 25.8%” while in Wales it “fell more steeply to 22.7%”.   The causal explanation apparently arrived at by one “expert” was that boys had been “possibly distracted by the national football team’s success at Euro 2016”.   This fails to consider the total number of entries in England compared with Wales; I suspect Wales has far fewer entries, resulting in greater variability in Welsh results versus English results.   The data also fails to include any information about the students’ GCSE results.   Had the Welsh students achieved lower GCSE results than their English counterparts, their lower level of A-Level achievement could actually amount to “better” results given their lower starting point as measured by GCSEs.
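The sample-size point can be sketched with a quick simulation. The cohort sizes and the fixed 25% chance of a top grade below are invented purely for illustration; the point is simply that, if every entry has the same chance of a top grade, the year-to-year swing in the percentage achieving it is much larger for the smaller cohort.

```python
import random
import statistics

def yearly_top_grade_rates(entries, p=0.25, years=300, seed=1):
    """Simulate the percentage of entries achieving a top grade each
    'year', assuming each entry independently has probability p."""
    rng = random.Random(seed)
    rates = []
    for _ in range(years):
        top = sum(rng.random() < p for _ in range(entries))
        rates.append(100 * top / entries)
    return rates

# Invented cohort sizes: a large cohort standing in for England,
# a small one standing in for Wales.
large = yearly_top_grade_rates(entries=10_000)
small = yearly_top_grade_rates(entries=500)

print(f"large cohort: year-to-year sd = {statistics.stdev(large):.2f} points")
print(f"small cohort: year-to-year sd = {statistics.stdev(small):.2f} points")
```

With these invented numbers the smaller cohort's yearly percentage typically swings several times more than the larger cohort's, so a steeper single-year fall in Wales would be far less remarkable than the commentary suggests.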

Another possible conclusion, which is easy for me to draw as a Scotsman and most likely more difficult for an Englishman, is that the data shows something which wasn’t related to the Welsh football performance at all.   The English A-Level results could be better due to English students throwing themselves into their work following England’s poor showing during Euro 2016.   It’s the same data, but a different conclusion has been generated and made to fit the data available, without any consideration for the data which isn’t available.

Having considered this issue further, I think I am now more inclined than ever to agree with Taleb’s comments regarding the importance of the unread books in a library rather than the read ones.   Taleb discusses how a home library filled with read books gives a person the illusion of knowledge; the person has read it all.   A library filled largely with unread books, however, makes clear all that we do not yet know and have not read.   Reading each of these commentaries and analyses of the A-Level data isn’t making me more informed or more educated; in fact it may be blinding me to the “true” facts or to other possibilities.   I think, therefore, that this will be my last post moaning about “expert” analysis of results, as from now on I need to stop reading the analysis in the first place!

 

Some thoughts on GCSE and A-Level results

Having read various articles following the recent A-Level and GCSE results, I can’t help but think that schools, and more importantly education in general, need to decide what we are seeking to achieve, and stop reacting to limited data which has been used to draw generalised conclusions.

Take for example the shortage of STEM graduates and students.   This was, and still is, billed as a big issue, and it has resulted in a focus on STEM subjects in schools.   More recently there has been a specific focus on computer programming and coding within schools.   In a recent article it was acknowledged that the number of students taking A-Level Computing had “increased by 56% since 2011” (The STEM skills gap on the road to closing, Nichola Ismail, Aug 2016).   This appears to suggest some positive movement; however, in another article poor A-Level ICT results were cited as a cause for concern for the UK tech industry (A Level Results raise concern for UK tech industry, Eleanor Burns, Aug 2016).   Now I acknowledge this data is limited, as ideally I would need to know whether ICT uptake has been increasing and whether A-Level Computing results have declined; however, it starts to paint a picture.

Adding to this picture is an article from the Guardian discussing entries:

Arts subjects such as drama and music tumbled in terms of entries, and English was down 5%. But it was the steep decline in entries for French, down by 6.5% on the year, as well as German and Spanish, that set off alarm bells over the poor state of language teaching and take-up in Britain’s schools.

(Pupils shun English and physics A-Levels as numbers with highest grades falls, Richard Adams, Aug 2016)

So we want entries for STEM subjects to increase, and they seem to be increasing for computing; however, we don’t want modern languages entries to fall.   Will this mean that next year there will be a focus on encouraging students to take modern foreign languages?   And if so, and this results in the STEM numbers going down, will we then re-focus once more on STEM subjects until another subject shows signs of suffering?

It gets even more complex when a third article raises the issue of Music A-Level entries, which “dropped by 8.8% in a single year from 2015 and 2016” (We stand back and allow the decline of Music and the Arts at our peril, Alun Jones, Aug 2016).   Drama entries are also shown to have decreased this year (Don’t tell people with A-Levels and BTecs they have lots of options, Jonathan Simons, Aug 2016).   So where should our focus lie?   Should it be on STEM subjects, foreign languages, drama or music?

I suspect that further research would turn up further articles raising concerns about still further subjects, either in the entries or the results.   Can we divide our focus across all areas, or is there a particular area, such as STEM subjects, which is more worthy of focus?   Do the areas for focus change from year to year?

As I write this my mind drifts to the book I am currently reading, Nassim Taleb’s The Black Swan, and to Taleb’s snooker analogy about variability.   We may be able to predict a single snooker shot with a reasonable level of accuracy; however, as we try to predict further ahead we need more and more data.   As we predict five shots ahead, the quality of the surface of the table, the balls, the cue, the environmental conditions in the room, etc. all start to matter more and more, and our ability to predict becomes less and less accurate.   Taking this analogy and looking at schools, what chance do we have of predicting the future and what the UK or the world will need from our young adults?   How can we predict the future requirements of hundreds of thousands of students across thousands of schools, studying a variety of subjects from a number of different examining bodies, in geographical locations across the UK and beyond?

These generalisations of data are subject to too much variability to be useful.   We should all focus on our own schools: by reducing the scope we reduce the variability and increase the accuracy.   We also allow context to be considered, as individual school leaders may know the significant events which may impact the results of their cohort, individual classes or even individual students.   These wide-scale general statements about the issues, as I have mentioned in a number of previous postings, are of little use to anyone.   Well, anyone other than editors wishing to fill a space in a newspaper or news website.

Standardized Testing

I have written a number of times about my feelings regarding standardized testing (you can read some of my previous postings here: Some thoughts on Data, Building Test Machines).   Having worked internationally in schools in the Middle East, I am particularly aware of standardized testing and the weight put on its results.   Within the UAE there is a focus on ensuring that education is of an international standard, with the measure of that standard being the results from the PISA and EMSA testing regimes.   As a result, individual schools and their teachers are expected to pore over the EMSA results and analyse what they mean.   I feel that this focus on a standardized testing regime such as PISA is misplaced: how can we, on one hand, seek differentiated learning tailored to students as individuals while measuring all students with a single standardized measure?

As such it was with great interest that I read the article in the TES titled “‘Ignore Pisa entirely,’ says world expert”.   The article refers to comments from Professor Yong Zhao, whom I was lucky enough to see at an SSAT conference event back in 2009.   Back then I found Professor Zhao to be both engaging and inspiring as a presenter, with some of his thoughts echoing my own and also shaping some of the thoughts and ideas I came to develop.   Again I find myself in agreement with Professor Zhao.   I particularly liked his comment regarding the need for “creativity, not uniformity”.

I feel the focus on PISA is the result of valuing what is measurable as opposed to measuring what is valued.   Measuring student performance in a standardized test is easy, and various statistical methods then allow for what appears to be complex analysis of the data, seemingly enabling us to prove or disprove various theories or beliefs.   Newspapers and other publishers then sensationalize the data and create causal explanations.   Education in Finland was recently heralded as excellent as a result of its PISA results.   Teaching in the UAE was deemed to be below the world average, yet better than in most other Middle East countries.   Did PISA really provide a measure of the quality of education?   I think not!

Can education be boiled down to a simple test?   Is a student’s ability to do well in the PISA test what we value?   Does it take into consideration the student’s pathway through learning, given that the pathway differs from one country to another?   Does it take into consideration local needs?   Does it take into consideration the cultural, religious or other contexts within which the learning is taking place?   Does it take into account students as individuals?   Now I acknowledge that it may be difficult or even impossible to measure the above; however, does that mean we accept a lesser measure such as PISA just because it is easier?

There may be some place for the PISA results in education; however, I feel we would be much better off focusing on the micro level, on our own individual schools, and on seeking to continually improve, as opposed to taking part in what Professor Zhao described as little more than a “beer drinking contest”.

 

Microsoft Innovative Educator Expert

Last night I finally found out that I had been included on Microsoft’s list of Microsoft Innovative Educator Experts.   My self-nomination and supporting materials had gone in a couple of months earlier, and it was with some nerves that I awaited the originally advertised release date for the list of 1st August.   I then nervously waited for the revised date of 15th August.   That date arrived and the working day came and went.   I saw a tweet suggesting the date had again been changed, this time to the 16th, so it looked like the nervous wait would continue.   Then at around 10pm on the 15th I saw another tweet, this time including a link to the new list.   I promptly downloaded the document and scrolled through to the UK section, where I was pleased to find my name.   The wait was over.

So what does it mean to me to be an MIEE?

Well, it means sharing, sharing and sharing some more.   It means having access to a network, or even better a community, of educators who are making use of Microsoft products to enrich, enhance and re-imagine the learning experience for the students in our schools.   It means being a contributor to this community, and not just a consumer.   As such I have an expectation of myself that I will share ideas and contribute on a regular basis, giving back as much as I take.

Now, being an MIEE will not make me focus purely on Microsoft products.   I am also a Google Certified Educator, and I work in a 1:1 iPad school.   The focus is on students and on learning.   The technology, whether it be from Microsoft, Apple, Google, another vendor or even a mix of vendors, is not important: the technology is just a tool to achieve an aim, the aim of providing excellent learning opportunities for students.   That being said, I hope that as an MIEE I will be able to access ideas and tools relating to Microsoft products and then share these with others.   This should allow me to build on some of my recent experimentation with Microsoft products including the likes of Sway, Snip, Office Mix and Lens.

So today marks my first day as a Microsoft Innovative Educator Expert.   I look forward to the year ahead and to hopefully living up to the title.

 

Predicting the future

Recently I have had cause to review the school’s 5-year plan for IT with a view to updating it; however, in doing so I have come to question the process.

Part of the reason for questioning the process lies with my recent reading of Thinking, Fast and Slow by Daniel Kahneman and The Black Swan by Nassim Taleb, which I am currently reading.   In both books the author examines our ability to predict the future, with both authors arriving at the same conclusion: that human beings’ track record of predicting the future is generally poor.   Both authors cite a variety of projects where predictions of costs and timescales in particular were required.   In each of the examples the project either ends up taking significantly longer or costing significantly more than anticipated.

Thinking about my own context, I oversaw an IT overhaul in a school back in 2007.   This was very much focused on updating the existing server and client PC infrastructure and developing a long-term, 5-year plan for maintaining and renewing the equipment.   Had I been able to predict the iPad and its potential applications in education, which was only three years away and therefore well within the scope of my 5-year plan, my plan, and also some of my actions, may have been significantly different.

Given the above, I have taken a different approach to my new 5-year plan.   I now accept that the further we look from today, the more variable and unpredictable the future is.   On reflection, this should not have been a surprise, as unpredictability compounds, like financial interest over time.   If you look at your mortgage and the total repayment amount, you can see how a few percentage points compounded over a period of time result in a significant increase on the original figure.   In the same way, a small percentage of unpredictability played out across a number of years results in a significant level of unpredictability.
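The mortgage comparison can be sketched in a few lines (the 4% rate and 25-year term below are just illustrative numbers, not anyone's actual mortgage):

```python
def compound(amount, rate, years):
    """Grow `amount` by `rate` (e.g. 0.04 for 4%) once per year."""
    for _ in range(years):
        amount *= 1 + rate
    return amount

# 4% compounded over 25 years multiplies the original figure by
# roughly 2.7 -- a small yearly effect becomes a large overall one.
print(round(compound(100_000, 0.04, 25)))
```

Swap the amount of money for a measure of uncertainty and the same arithmetic suggests why the later years of a plan deserve far less detail than the first.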

My new 5-year plan has a lot of detail for the coming 12 months.   The plan for the second 12 months contains fewer items, as this period is more difficult to predict, with the planning for the third year and beyond being progressively less detailed, in line with the increasing level of unpredictability.

Now, my thinking thus far has been focused on longer-term planning in the magnitude of around five years, with quite clear implications for periods extending beyond five years; however, is a similar issue applicable to shorter periods?   Can we accurately predict things within a single academic year?   If the answer is no, then what implications might this have for planning within schools?   I also wonder about lesson planning; however, that may be for another posting.

How often do you engage in long term planning and in doing so have you considered quite how unpredictable the world is?

 

Schools and Big Data

As Director of IT I am often directly involved with our school Management Information System (MIS, sometimes referred to as a Student Information System, or SIS).   Throughout my career I have encountered and worked with a number of different MIS vendors.   My general opinion is that they are all “much of a muchness”: although they have different features, strengths and weaknesses, when you average them out the benefits and drawbacks are of equal magnitude.

These systems contain, and allow us to collect, a variety of data, including both formative and summative student performance data.   We then design reports which allow us to interrogate the data and display it in different ways.   This addresses the functionality side of an MIS but is rather weak in terms of usability: users need to know which report displays which information, so they can select and use the correct report at the correct time.

Within my school we are currently working on making our system more usable by developing a dashboard to present important information directly to teachers without them having to seek it out.   This would represent an improvement; however, I feel it still falls some way short.

One way to improve on the above would be to put more power in the hands of the users, allowing them to easily create their own reports using the data which is available.   The issue with this is that it relies on staff having the data analysis skills to design effective reports, and on them having the motivation to undertake the task.   Personally I believe this approach would be very beneficial for a small number of staff within a school, with the majority being unable to access it, even where the school’s culture is very much built around the use of data.   It would also potentially add another job to teaching staff’s role, in the need for them to design reports to analyse their data, which would be an issue given the current situation in relation to workloads.

I think the solution lies with Big Data.   Within the IT world there is a lot of discussion regarding Big Data, where large data sets are analysed to reveal trends or patterns, with this information then presented to users.   I see this as being of benefit in education.   As opposed to having to check different reports showing different sub-sets of our data, such as the performance of male students versus female students, the system would identify the trends for us.   The system would identify where correlations exist, without users needing to be aware of a potential correlation in advance, removing the potential for a correlation to be missed because we weren’t aware of it.   The system would also be able to look at data at both a micro and a macro level, from an individual teacher’s group assessment results this year, out to patterns which may exist across a number of years.

Almost all schools have an MIS these days; however, they are still very much shaped by their origins: highly structured data analysed via predefined reports.   It is about time we looked at the potential for data warehousing, data mining and Big Data to have an impact on how data is used in schools.