Westminster Education Forum

Tuesday marked a very early start, setting off at 4:30am for the 40-minute drive to the train station before hopping on a 2½ hour train journey down to London. I am sure for many this would just be part of the normal run of an average week; however, after almost 8 years in the Middle East, during which time I never set foot on public transport, I consider it to be something new. Mind you, some may consider Etihad and Emirates, which are both UAE air carriers, to be “public” transport, at least within a Middle East context.

The overall purpose of the trip was to attend an #edTech event being held in central London, focusing on initiatives and ideas for the use of technology in education, in schools and in learning. The event was titled “Digital Technologies and Innovative Teaching practice in the classroom: Latest thinking and policy options”.

It was nice once again to hear Stephen Heppell present and to have a brief chat with him. It was also good to hear Bob Harrison present, although sadly I never had the opportunity to say hello in person.

Of particular note from the event was Stephen Heppell’s discussion of policy disconnect, in which he suggested that innovative teachers charge ahead trying new ideas and new technologies, taking with them parents who see the impact of those ideas and technologies. The centralized policies determined at a governmental or similar high level, however, are unable to keep pace, leading to a disconnect between what teachers and parents want and the policies which govern what should be happening in schools. I can identify with this, as I see so many examples of teachers trying new ideas and sharing tips, with new and exciting suggestions appearing on Twitter and other online media on a daily basis. From this point of view it is important to keep innovating, and Stephen even made the point of stating that teachers should just “do it” and be the driving force.

Peter Twining, however, put across a slightly different viewpoint during one of the panel discussions, suggesting that success couldn’t be achieved purely through this bottom-up process and that it was equally important to have some top-down leadership of educational technology usage. He suggested that should the government or OFSTED indicate an expectation, schools would adjust accordingly in order to comply and avoid unsatisfactory inspection results. As such, for educational technology use to be truly successful we need both the grassroots leadership of educational technology, leading from the bottom up, and the top-down leadership setting out the framework and expectations.

I can see the merit in both approaches, top down and bottom up, and have always been quick to suggest that it is important to have both in place to make the best things happen. That being said, in more recent months I have found myself prioritizing the grassroots bottom-up approach over the top-down one. Good things can happen in lessons despite poor leadership; I am not as convinced good things can happen where grassroots teaching is poor.

Overall it was a productive day and well worth the early rise.    I hope to have the opportunity to attend further similar events in the future.


Variability

I just finished watching the season finale of Teen Wolf (and yes, I know that possibly isn’t something I should be admitting), the end of what has been a bit of a chaotic season. In all honesty I am not quite sure that I fully understand all that has happened during this season, but I have sat and watched it. It kept me glued because of its unpredictability.

When looking at teaching and learning we emphasize the features which a so-called “good” lesson should contain. It should be appropriately differentiated, it should develop 21st century skills, it should foster individual and collaborative learning, it should encourage resilience and develop character, it should include a global dimension, and so on.

As we attempt to do all these things we might encourage a formulaic approach. Working in some schools in the Middle East, I noticed a tendency for differentiation to have become almost synonymous with differentiated worksheets. In an attempt to meet the requirements a single approach had been identified, in this case a worksheet with easy and then extended questions.

It is possible that, as we endeavor to improve by identifying the things which should be in lessons, we remove some of the variability from lessons.

Thinking of my own school experiences, I remember a number of unique events involving those teachers who I consider to have been my best teachers. These events are remembered largely due to their uniqueness. I remember the English teacher who removed all the tables from the class and had us sitting in a circle, something that was very uncommon at the time. I remember the health and safety session in DT involving a rubber glove filled with tomato sauce and a bandsaw.

If we remove the variability, will lessons be as engaging? In seeking to ensure all lessons contain the elements which we deem to be important, will we end up delivering lessons which are largely the same and therefore not as engaging? Will the quest for systemic improvement lead to formulaic learning experiences which are unengaging precisely because they are the norm?

Ultimately, if lessons are equated to the roll of a die, we want to prevent students receiving a low score from their roll: a poor learning experience. Given this, we want to try to ensure that each roll results in a higher score, a better learning experience. But will rolling loaded dice ultimately produce negative results despite the higher scores?


Class sizes

This morning, before walking out the front door, I saw someone on a BBC morning programme suggesting that their political party’s contribution to the education sector was a reduction in class sizes.

I find it interesting that class sizes continue to be considered a measure of how good a school or an education system is. In the case of the comments on the BBC, the person making them was equating smaller class sizes with an improvement in the quality of education.

Hanushek (1998) suggested that the linkage between smaller class sizes and improved student results was “generally erroneous”. Kahneman (2011) went further, suggesting that the facts associated with such a claim were “wrong”.

Kahneman’s (2011) explanation was that the findings come down to statistics and what he refers to as the “law of small numbers”. A small class is made up of a smaller number of students, which results in higher levels of variability in the class average. He uses an example of drawing coloured marbles from a jar to demonstrate this. Consider randomly picking marbles from a jar containing red and blue marbles. There is a higher probability of drawing out 3 of a colour (the equivalent of 3 high achievers dominating a small class) than of drawing out 6 of a colour (the equivalent proportion of high achievers in a bigger class).

Within a larger class there is a greater tendency towards regression to the mean, and therefore a more stable and less variable average across schools.
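To make that statistical point a little more concrete, here is a minimal sketch in Python of the effect Kahneman describes. The ability scores (mean 100, spread 15) and the class sizes are my own illustrative assumptions, not figures from the studies cited below.

```python
import random
import statistics

# A rough simulation of the "law of small numbers" point: the same population
# of students, grouped into small or large classes, produces far more variable
# class averages when the classes are small.  All numbers here are invented
# for illustration.

random.seed(1)

def class_averages(class_size, num_classes=10000):
    """Return the average 'ability' score of each simulated class."""
    averages = []
    for _ in range(num_classes):
        scores = [random.gauss(100, 15) for _ in range(class_size)]
        averages.append(statistics.mean(scores))
    return averages

for size in (5, 30):
    spread = statistics.stdev(class_averages(size))
    print(f"class size {size:>2}: spread of class averages = {spread:.2f}")
```

Running it shows the small-class averages spreading far more widely around the same underlying mean, so small classes appear among both the best and the worst results purely by chance.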

The association of improved results with the additional teacher time, support and so on that smaller class sizes allow is therefore unfounded. The improved results in schools with smaller class sizes are simply a feature of the statistical analysis of small sample sizes. Kahneman suggests that if the researchers were to change their question and look at whether poor results could be linked to small classes, they would find this to be equally true.

My feeling on this is that, between lower and upper limits, class size doesn’t have a significant impact on student results. Where the ratio is 1 teacher to 2 or 3 students I would expect to see a positive impact, and equally at 50+ students I would expect to see a negative impact. Within the larger range between 5 and 50 I would expect the impact to be minimal, if evident at all.

Care needs to be taken with the use of statistics, and with how readily we believe them. As Kahneman explains, it is easy to create a causal explanation for why a given set of statistics, such as those on class size, makes sense. The ease with which a causal explanation comes to mind, however, doesn’t necessarily make the explanation and the resulting judgement true.

Sources

Hanushek, E. A. (1998). The evidence on class size. W. Allen Wallis Institute of Political Economy.

Kahneman, D. (2011). Thinking, Fast and Slow. Penguin Books.


Gaming

The subject of schools “gaming” school league tables and performance measures such as Progress 8 has made the news recently, so I have decided to contribute my opinion to the mix. Before doing so I need to be clear that I don’t have any particularly strong views with regard to this issue, and I therefore believe that my points represent a balanced viewpoint. I will, however, acknowledge that my assessment of my viewpoint as balanced is based on the context set by my own viewpoint, perception and the paradigms within which I operate as an individual. As such, from the point of view of those reading, including yourself, this may not be balanced after all. I make no apologies for this, as all I can offer is my opinion, which is never wrong in the sense that it is my opinion and is therefore formed based on my viewpoint and context.

Back on the subject of “gaming”, the discussion seems to have two opposing viewpoints. One of these viewpoints is that a school should try to offer its students the best opportunities for success in the future. As such, it is important to enable them to achieve as many successful qualifications as possible. These schools therefore look to enroll students in qualifications which, for minimal effort, return a successful qualification, such as the ECDL.

The other viewpoint is that schools enrolling students in bulk in the ECDL are doing so in order to influence league tables and performance measures such as Progress 8. Educators taking this position are of the opinion that these qualifications are of lesser value than other qualifications which may take longer or be more difficult to achieve, yet have a comparable impact on league tables and other performance measures.

For me there may be truth in both viewpoints. If studying for specific qualifications is in the interest of students’ futures then surely it is the correct thing to do. Consider two schools which are identical in outcomes except for the fact that students in one achieve an additional ECDL qualification. Surely this puts the students who leave with an additional qualification in a more positive position. I myself worked in a school where we delivered the OCR National in IT to all students. The reason we did this was the vocational nature of the qualification, which suited our student cohort, plus the breadth of study and options available, which allowed us to accommodate individual student needs and interests.

Equally, there is truth in the other viewpoint, in that a school putting all students in for the ECDL qualification or the OCR National may have done so purely in the interest of achieving a better league table position than other schools. This may put students under stress where the qualification is additional, or may represent an unfair advantage where an “easy” subject has been substituted for a more difficult or valued subject with an equivalent or near-equivalent league table or performance measurement points worth.

Both viewpoints involve the identical action of batch-enrolling students in a given qualification, yet they result in totally opposing opinions. The key fact is not so much what schools do but why they do it. In one viewpoint it is about the students and the benefit to them, while in the other it is about the school and getting the best league table or performance measure result.

If OFSTED are to clamp down on “gaming” they are therefore going to have to try to identify why a school took the chosen action. How are they going to do this? How are they going to measure the “intentions” of school leaders? Are we going to start seeing OFSTED inspectors administering polygraph tests to school leaders?

I also feel that this discussion has a lesser-discussed aspect to it: the value of differing qualifications. Debate has raged for some time on the value of so-called “core” subjects and the perceived lesser value of the arts and creative subjects. The new “gaming” discussion adds differing values in terms of the perceived difficulty level of a course along with the time taken to deliver it, with shorter courses perceived to have lesser value. Who will decide the relative worth of each course and the total worth of any individual student’s curriculum of study?

We should all be working in the interests of our students to try to provide them with every competitive advantage with regard to Further Education or Higher Education options, options for employment, or even more generally their future lives. A key part of this is the qualifications they achieve, so we need to get them everything reasonably possible. In teaching we use every trick in the book to try to make sure students are learning and are ready and able to succeed in whatever assessment is required to achieve a given qualification. If this is “gaming” then maybe we are all involved.

My teacher fail

I have read loads of Teacher Fails posted on Staffrm over the last few days, many of which I can identify with. The burst pen which you then unwittingly use to colour your face, or to colour the whole pocket side of your shirt along with the inside of your best suit. The mismatching shoes. I even split my trousers once when interviewing for a middle management position. I got the job as it happens, although this may have been the result of the interview panel taking pity on me, but I digress.

The recent discussions make me reflect on a particular teacher fail from my own teaching career. The lesson in question was being specially delivered for a lesson observation. Note that this was during the period when lesson observations were generally considered the best method for assessing teaching ability and therefore held some importance.

I had planned to push the boat out a little with a Computing class and get them examining how we might handle arrays of data by actually jumping around in a giant array grid I had taped to the floor before they arrived.
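For anyone wondering what the floor grid was modelling, the sketch below shows the kind of basic array handling the activity was meant to make physical. The grid size and the values in it are my own invention for illustration, not the actual material from the lesson.

```python
# A small two-dimensional array, standing in for the grid taped to the floor.
# The contents are invented purely for illustration.
grid = [
    [3, 7, 1],
    [8, 2, 9],
    [4, 6, 5],
]

# "Jumping" to a square is just indexing by row and column.
row, col = 1, 2
print(f"Value at row {row}, column {col}: {grid[row][col]}")  # prints 9

# Stepping through every square in turn is a pair of nested loops.
for r, row_values in enumerate(grid):
    for c, value in enumerate(row_values):
        print(f"({r}, {c}) -> {value}")
```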

The idea was sound. The learning should have been engaging.

I failed to consider a couple of things. The first was that I hadn’t had this particular class for long, and they therefore hadn’t fully become used to my active teaching style, being more used to a passive, almost lecture-style approach. I also failed to consider that a senior school leader sitting at the back of the classroom with a clipboard was a significant variable impacting the potential success of the lesson.

When it came time for the students to get “engaged”, they didn’t. Their nervousness at departing from the norm, in terms of both being active and behaving so energetically in front of a senior staff member, overcame any enthusiasm and excitement that might otherwise have existed. Despite my best efforts to encourage the students and drum up some excitement, the lesson ended up being flat. It failed to live up to my expectations.

The lesson learned from this is that it is all well and good having the best intentions regarding an active and participatory lesson; however, we need to give some consideration to the current norms. If students are used to sitting passively, it is unlikely they will be able to progress directly to a lesson filled with student-directed activities and group work. This particular lesson served me very well when I moved to work in the UAE, where initially at least I found students very reluctant to express personal beliefs, views and feelings. There, having learned my lesson, I went about encouraging and developing this more gradually over a period of time.

On reflection it wasn’t a lesson fail, more a case of Not Yet the lesson I had hoped it would be.

Photo: “Fail” by Amboo Who on Flickr

Seeking continual improvement

I am very committed to the process of continual improvement. We live in an ever-changing world, with new opportunities, new people and new technologies constantly presenting themselves to us. As such, what may be considered “good enough” today is unlikely to be equally good in the new context in a year’s time, or possibly a month’s time, or maybe even tomorrow. Due to this it is important to continually strive to improve.

The step I am currently undertaking as part of my bid to continually improve is to seek some anonymous feedback from colleagues on leadership, where I myself am one of the leaders on whom those invited will be providing feedback.

Sticking your head above the parapet, so to speak, is never easy and never without some worry or concern about the feedback you may receive.

From a research perspective, the responses received will be based on the respondents’ interpretation of the questions being asked and on the perceptions of the individuals providing the feedback. Their perceptions may be coloured by recent events which, due to ease of recall, will appear more important than more frequently occurring events that might have prompted the opposite response. An individual’s state of mind and emotional state on the day they provide their feedback may also have an impact. Where a person is having a good day and therefore feeling positive, they are more likely to respond in a positive fashion; where they are having a bad day and the world is against them, the opposite is true. If they have recently received bad news, the response is also likely to be less positive.

From a statistical point of view I know there are various ways I can interpret the data, with each approach potentially resulting in different findings. A simple look at the highest and lowest average scores may seem to suggest the strengths and areas for development; however, a look at standard deviations may indicate a high average resulting from some widely fluctuating scores. This initially apparent strength may therefore turn out to be either inconclusive or even an area for development.
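As a small illustration of that point (the scores below are invented, not taken from my actual survey), two questions can share the same average while telling very different stories once the spread is examined:

```python
import statistics

# Invented feedback scores on a 1-5 scale: both questions average 4.0,
# but one is consistent while the other fluctuates widely.
question_a = [4, 4, 4, 4, 4]
question_b = [5, 5, 5, 2, 3]

for name, scores in (("Question A", question_a), ("Question B", question_b)):
    print(
        f"{name}: mean = {statistics.mean(scores):.1f}, "
        f"standard deviation = {statistics.stdev(scores):.2f}"
    )

# Question A: mean = 4.0, standard deviation = 0.00
# Question B: mean = 4.0, standard deviation = 1.41
```

A strength suggested by the average alone may therefore deserve a second look before I act on it.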

Given all these variables it may be easier to decide to avoid asking the questions.   My choice however is to ask the questions as I would prefer to have data which may upset me rather than having no data at all.   At least if I have upsetting data I have a position to work from and to improve from as opposed to existing in blissful ignorance and therefore having no clue that things need improving.   I also have a baseline to work from in terms of checking if any actions taken have made any difference.

I await the results of the feedback with an element of trepidation and an element of anticipation.