Friday, 4 September 2015

Hattie: The Politics of Distraction and of Collaborative Expertise


These are two Pearson ‘Open Ideas’ pieces that could well have been published as one paper, because the second makes much less sense unless read after the first.

Full marks to Pearson for publishing these, as they aren’t entirely supportive of Pearson’s commercial interests. It just goes to show that there is always dialectic in organisations, even within seeming monoliths.

In the first, Hattie attempts to demolish some assumptions developed by governments and commercial interests to underpin a distinct approach to educational policy.

His first point is that “too much discussion is focused on between-school differences when the greatest issue is the differences within schools”. He quotes as evidence that “the variance between schools, based on the 2009 PISA results for reading across all OECD countries, is 36 per cent, and variance within schools is 64 per cent”. The evidence doesn’t entirely convince me. I need to know exactly what the in-school variance is measuring. If it is the difference between the highest and lowest achieving students in the school, then that isn’t necessarily (or even probably) teacher variance. If it is the difference in outcomes for different teachers within the schools, then possibly that is a variance in teacher effectiveness. Even then, a raw score comparison between a teacher with two bottom sets and one with middle and top sets doesn’t prove a difference in the quality of their teaching; it may merely reflect the variance in student ability.

The more you consider this, the harder it is to imagine analyses that convincingly demonstrate variances in the effectiveness of teachers. The fact that PISA is the source reinforces that view. As far as I know (and someone correct me if I am wrong), PISA doesn’t record who taught the individual students taking its tests. Measuring teacher effectiveness is a very difficult problem that a great deal of research money has been thrown at, but sifting through, for example, the Shanker Foundation criticisms of the Gates Foundation work on teacher effectiveness, it becomes obvious that there isn’t yet a reliable way to do this.
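To make the variance point concrete, here is a minimal simulation sketch of my own (not Hattie’s or PISA’s analysis, and every number in it is invented). It shows how the within-school share of variance can be large even when teacher-to-teacher differences are modest, simply because students within a school differ so much from one another.

```python
# Illustrative simulation only: invented effect sizes, not PISA data.
# Scores are built from a school effect, a teacher (class) effect and a
# much larger student-level effect; variance is then split between and
# within schools.
import numpy as np

rng = np.random.default_rng(0)
n_schools, classes_per_school, class_size = 200, 8, 25

school_fx  = rng.normal(0, 30, n_schools)                        # between-school differences
teacher_fx = rng.normal(0, 15, (n_schools, classes_per_school))  # modest teacher differences
student_fx = rng.normal(0, 80, (n_schools, classes_per_school, class_size))  # large student differences

scores = 500 + school_fx[:, None, None] + teacher_fx[:, :, None] + student_fx

between = scores.mean(axis=(1, 2)).var()                         # variance of school means
within  = np.mean([scores[s].var() for s in range(n_schools)])   # average variance inside a school
total = between + within

print(f"between-school share: {between / total:.0%}")
print(f"within-school share:  {within / total:.0%}")
```

A large within-school share drops out of this even though the simulated teacher effect is small, which is exactly why a 64 per cent figure cannot simply be read as teacher variance.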

I am much more persuaded by his demolition of the other policy shibboleths. The standards assertion is this: schools have low expectations, so when government sets higher levels of required achievement it will lift outcomes. But as Hattie points out, “in any education system with standards that are set ‘just above the average’, it is highly unlikely that all students will gain the standard, as it is not possible for all students to be ‘above the average’”. He is hard to disagree with when he states that the goal “should not be to get 100 per cent of students above the standard (unless the standards are set very low), although this is what the current politics demands of our schools”. The argument politicians deploy against this is usually emotional rather than intellectual: “giving up on the disadvantaged”, “tyranny of low expectations”. There could also be an argument that this approach gradually pushes up the average over time, but there are very real costs in student self-confidence and teacher morale.

Hattie also attacks policy based on narrowing gaps. He says “this is a problem, but the solution is related more to getting all students to improve – especially those just below and just above the average – and not being overly obsessed with those at the bottom”. He is concerned that a focus on the bottom end of the distribution can reinforce stereotypes by associating certain ethnic or social groups with the “tail”. All ethnic and social groups contain a spread of attainment, and policy that supports the upper end can help provide role models and a more positive story about that group.

I found myself nodding in agreement at his dismissal of “flatlining”. “The notion of flatlining places too much emphasis on rankings and not enough on performance, which can be misleading, particularly when the number of countries participating in assessments such as PISA and TIMSS increases.” It is difficult not to see our own government’s use of this argument as a deliberate misleading of the public, especially when the OECD has cautioned against drawing conclusions about changing performance over time where rank places go down.

Hattie is clear that he sees no evidence for a wide range of solutions that are very popular. He states that school choice is a distraction: “Why do we provide choice at the school level when this matters far less than the choice of teacher within a school?” He is absolutely right when he states that all the evidence is that private schooling is no better than state schooling. He is also right that class size reduction isn’t strongly indicated by the evidence as an effective strategy for school improvement. Similar beatings are visited upon curriculum change, knowledge-based learning (“deep learning, or twenty-first-century skills … is a false dichotomy”), testing, new building programmes, new types of schools and transformational leaders (an approach business is also finding it hard to be weaned off). His criticism of new school forms as a means to educational improvement is worth reading, especially as they are so heavily promoted in our country. He finds no long-term impact; for example, the charter school effect size “across three meta-analyses based on 246 studies is a minuscule .07”. Hattie agrees with evidence from the world of business that performance pay undermines collegiality and intrinsic motivation. As he says, “the effects can be the opposite to those desired: teachers in performance-pay systems tend to work fewer hours per week and are involved in fewer unpaid cooperative activities”.
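For a sense of what .07 means in practice, here is a back-of-the-envelope illustration of my own (not a calculation from Hattie’s papers), using the standard normal model that sits behind effect sizes such as Cohen’s d:

```python
# Rough illustration of why a standardised effect size of 0.07 is minuscule.
from scipy.stats import norm

d = 0.07  # effect size quoted for charter schools across three meta-analyses

# Percentile the average 'charter' student would reach in the comparison group.
percentile = norm.cdf(d)
# Probability that a randomly chosen charter student outscores a randomly
# chosen comparison student (the "common language" effect size).
prob_superiority = norm.cdf(d / 2 ** 0.5)

print(f"average student reaches roughly the {percentile * 100:.0f}th percentile")  # ~53rd
print(f"chance of outscoring a comparison student: {prob_superiority:.0%}")         # ~52%
```

On those assumptions the average student moves from the 50th to roughly the 53rd percentile: hardly the basis for a system-wide reform programme.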

After this brutal assault on these highly favoured approaches to school development, it isn’t surprising that in the second paper, “What Works Best in Education: The Politics of Collaborative Expertise”, he argues for a different solution. He proposes teacher professionalism, a ‘practice of teaching’, so “that there is a difference between experienced teachers and expert teachers; and that some practices have a higher probability of being successful than others”. He says “the greatest influence on student progression in learning is having highly expert, inspired and passionate teachers and school leaders working together to maximise the effect of their teaching on all students in their care”. Of course this only better defines the problem; how to get to this Promised Land is the question. He answers that by setting out a list of tasks to get us there.

Task one is “the need to reframe the narrative away from standards and achievement and to move it towards progression”. He wants an aspiration that every child should make a year’s progress every twelve months. His second task is to define what that means: to “secure agreement about what a year’s progress looks like”. He cites New Zealand, where “it is possible to go into any secondary school … and there is confidence in the comparability of how teachers evaluate challenge and progress”. This task requires teachers to undertake “a robust discussion about progression based on the teachers’ judgements of growth”.

He proposes that teachers should expect this level of progress. He states that his research shows that “the greatest influence on learning is the expectations of the students and the teachers”. Hattie argues that new assessment tools are required. He thinks that as well as measures of knowledge we also need measures of learning capability, “such as the extent to which students can engage in collaborative problem-solving, deliberate practice, interleaved and distributed practice, elaboration strategies, planning and monitoring, effort management and self-talk, rehearsal and organisation, evaluation and elaboration and the various motivational strategies – the ‘how to’ aspects of learning”.

These tools will make it possible for school leaders to involve staff and students in developing a clear understanding of the impact they are making. “Leaders need to create a trusting environment where staff can debate the effect they have and use the information to devise future innovations.” Hattie doesn’t say so, but this kind of culture isn’t nurtured in a situation where teachers feel vulnerable. Grading teachers as ‘outstanding’, ‘good’ or ‘requiring improvement’, alongside performance-related pay, is antithetical to Hattie’s vision. Driving even greater progress for students means teachers must be experts at ‘Diagnosis’, ‘Interventions’ and ‘Evaluation’.

“To be expert at diagnosis requires understanding what each student brings to the lesson, their motivations and their willingness to engage. To be expert at interventions requires having multiple interventions so that if one does not work with the student, the teacher changes to another. It also involves knowing the interventions that have a high probability of success, knowing when to switch from one to another and not using ‘blame’ language to explain why a student is not learning. To be expert at evaluation requires knowing the skills of evaluating, having multiple methods and working collaboratively and debating with colleagues to agree on the magnitude of the effect needed for an intervention to be successful.”

Hattie’s policy proposals are compelling. In the UK we seem to spend a great deal of time celebrating schools attended by affluent students. Parents with an interest in education compete to get their children into schools seen as successful, and thereby facilitate even better results and still more competition for entry. Teachers, leaders and government don’t know where excellent work is being done, because all of this choosing and examining obscures which teachers are best at getting students to make good progress.

The titles of these papers raised an expectation that intrigued me and ultimately frustrated me. The first paper touches a little on the politics of school improvement, but the second doesn’t at all. Politics is about power, how it is exercised and the interests of different groups struggling for power. Hattie isn’t talking about politics in that sense. He is talking about different policy options. The analysis of whose interests are served by the distraction he describes needs to be done, because that would reveal much more about the politics of education than Hattie’s two papers do.

Stephen Heppell and Big Data

Stephen Heppell – “This is not scary, this is exciting!”
This is a short paper full of big ideas about Big Data. It is a frustrating read in many places because (surprisingly for an academic) no evidence is offered for most of the assertions.
The paper can be found here.
There are some propositions I am happy to go along with, for example: “it is disappointing that our measures of effectiveness, and our management data are both so poor in 2015. As a result, a lot of what we do in schools is simply convenient rather than optimal.” But the observations that follow from this seem poorly informed. “An athlete in any sport would have a precise understanding of their nutritional requirement and the impact of various meal options on their performance”. I’m no sports nutritionist, but “an athlete in any sport”? There isn’t a precise understanding of which foods make for strong performances in every sport. Cycling is the one sport I know well, and even there elite performers rely on their own historical data to work out which foods might serve them better, which often means they simply stick to foods that have worked well in the past. If they have never eaten kale and borlotti bean stew ahead of a day of racing, they won’t know how it might affect their performance. A gifted amateur athlete outside a development programme won’t even have that much data; they will have only generic guidance to rely on.
This is also pretty obviously facile: “as part of a research project, we asked students for indicators that their learning on a particular day was exceptional; one said ‘that would be how fast I eat my dinner’ because he knew that on a really good learning day he would eat fast to get back to work!” What is the significance of this? One student’s own view of what counts as significant data offers no help to an analyst facing a data set drawn from a wide range of different individuals. There may be many reasons why a student eats quickly, and where lessons are timetabled to start at fixed times, eating quickly wouldn’t get him back to work any sooner. Also: “If a school declares that Wednesday will be a Discovery Day, a day of immersion and of project based, mixed age work, and on that day the children come to school faster and stay longer, we would have learned something important about engagement.” Would we really? Correlation isn’t the same as causation: what if that happened to be a very wet and cold day?
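To show how easily a confounder like the weather could produce that pattern, here is a toy simulation of my own (nothing to do with Heppell’s project) in which only the weather affects how early children arrive, yet a naive comparison still credits Discovery Days:

```python
# Toy confounding example: invented numbers, mine not Heppell's.
import numpy as np

rng = np.random.default_rng(1)
n_days = 400
dry = rng.random(n_days) < 0.6                      # 60% of days are dry
# Discovery Days happen to be scheduled far more often on dry days.
discovery = rng.random(n_days) < np.where(dry, 0.35, 0.05)

# Minutes before the bell that children arrive: driven only by weather plus noise.
arrival = 5 + 4 * dry + rng.normal(0, 2, n_days)

naive_gap = arrival[discovery].mean() - arrival[~discovery].mean()
dry_only_gap = arrival[discovery & dry].mean() - arrival[~discovery & dry].mean()

print(f"naive Discovery Day gap:     {naive_gap:+.1f} minutes")     # clearly positive
print(f"gap comparing dry days only: {dry_only_gap:+.1f} minutes")  # near zero
```

The naive comparison ‘discovers’ an engagement effect that vanishes once the weather is held constant.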
In other places it’s unnecessarily opaque. For example, what does this sentence mean? “Knowing that in office environments a minimum lux level for conversations would be around 250 lux, whilst for close work like typing and writing it would be above 450 lux we started exploring.” A lux is a unit of illuminance, the amount of light falling on a surface, but neither the term nor the numbers adds anything to the argument. It would be much simpler to say that research from offices shows that most school exam rooms do not have an optimal level of ambient light. Stephen Heppell has been a proponent of new builds as a tool for educational improvement, so it’s no surprise that he finds deficits in buildings. These are important and shouldn’t be discounted: levels of CO2, ambient light and noise must make a difference. The problem is that he doesn’t quantify that difference, and without quantification a school could spend a great deal of money to achieve only marginal gains. Knowing that a factor has a positive or negative impact on learning is just the start of a decision-making process, and suggesting that schools act quickly whenever they find one possible source of poor outcomes isn’t helpful. A phone company may find that consumers say they would prefer a titanium model, but if the extra 20% material cost only increases sales by 3% and squeezes margins, then it’s probably not a good investment.
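Putting invented numbers on that analogy makes the point plain (the figures below are mine, purely for illustration):

```python
# Hypothetical handset economics: a 20% rise in unit cost swamps a 3% rise in sales.
price, unit_cost, units = 600.0, 400.0, 100_000

baseline_profit = (price - unit_cost) * units                 # 20,000,000

titanium_cost   = unit_cost * 1.20                            # material adds 20% to unit cost
titanium_units  = units * 1.03                                # sales rise by only 3%
titanium_profit = (price - titanium_cost) * titanium_units    # 12,360,000

print(f"baseline profit: {baseline_profit:,.0f}")
print(f"titanium profit: {titanium_profit:,.0f}")
```

The same logic applies to a school weighing an expensive refurbishment against an unquantified gain in, say, ambient light.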
All of this is frustrating because the overarching suggestions make excellent sense. It is a shame that they have been padded out with examples and ideas that seem half-baked. The big problems for Big Data in schools aren’t examined at all, and that is a serious weakness. There is nothing about ownership of data, or about how schools will have to manage the increasingly rich datasets they hold. A more level-headed analysis might allow that individual schools won’t be tackling Big Data in the commercial sense, at least not in the medium term. Businesses like Amazon and Google (where Big Data is a reality) have datasets with trillions of data points. Even if a school were to retain ten pieces of data on every school meal eaten in a year, that would only add up to a couple of million data points for an average secondary; the arithmetic is sketched below. Even more problematic is linking that to attainment, progress and other data. It is worth pointing out that no UK state school is likely to be in any position to achieve this in the next five years. Schools are at the start of their data analysis journey and their data is relatively small. By portraying highly ambitious aspirations as relatively close at hand, Stephen Heppell makes the exploration of the potential of data analysis less rather than more likely.
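The arithmetic behind that “couple of million” figure is simple enough to sketch (the student numbers, school days and take-up are my assumptions, not figures from the paper):

```python
# Rough count of meal data points for an average secondary school over a year.
students, school_days, fields_per_meal = 1_000, 190, 10   # assumed values
meals_recorded = students * school_days                   # assume every student eats every day

data_points = meals_recorded * fields_per_meal
print(f"{data_points:,} data points per year")            # 1,900,000
```

That is several orders of magnitude short of the scale at which Big Data techniques earn their keep at an Amazon or a Google.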


 