Friday 15 July 2016

School Management Information Systems

This is a subject that’s engaged me for many years. Unlike almost all the other posts in this blog, this one isn’t inspired by a report or piece of research I’ve read. There seem to me to be several key issues in this area.

The first of these is the near-monopoly that Capita’s SIMS has over the schools’ MIS market. In 2013 SIMS was still used by 80% of schools in England and Wales (see here). This dominance hasn’t been significantly dented by the appearance of competitor products such as Progresso and Arbor, and it is the primary cause of the other key problems. Capita SIMS sits astride the schools’ MIS market without any competition significant enough to drive down prices, so its costs are very high. Not only are those costs high for the schools using the software, but Capita also charges other suppliers for the right to write to its data systems. These charges suppress the development of innovative add-ons to SIMS.

Innovation is also suppressed by the high costs of market entry and the difficulty of winning any market share against such a dominant competitor. School MIS systems are very complex. They need to include data fields relating to a wide range of student assessment items across phases and sectors (primary, secondary, special, independent and state). The data fields relating to parents and families, staff and students are very numerous. Alongside these are the complexities of timetabling and financial management. There are also behaviour and attendance data, and systems to track students’ performance in these areas. Most of these need to be customisable to varying degrees to suit the local policies and systems of individual schools.
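
To give a feel for the breadth of that data model, here is a deliberately tiny sketch in Python of a few of the entities an MIS has to hold. The field names are invented for illustration and represent a small fraction of a real system:

```python
from dataclasses import dataclass, field
from datetime import date

# A vastly simplified slice of an MIS data model; real systems carry
# hundreds of fields across students, contacts, staff, timetable,
# finance, behaviour and attendance.

@dataclass
class Contact:
    name: str
    relationship: str        # e.g. "mother", "carer"
    phone: str

@dataclass
class AttendanceMark:
    day: date
    session: str             # "AM" or "PM"
    code: str                # register mark, e.g. "/" present, "N" no reason

@dataclass
class Student:
    name: str
    year_group: int
    contacts: list[Contact] = field(default_factory=list)
    attendance: list[AttendanceMark] = field(default_factory=list)
    assessments: dict[str, float] = field(default_factory=dict)  # subject -> score
```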

New competitors also have to cope with the rapidly changing demands of government regarding data reporting from schools. Teachers, Heads and MIS providers all suffer from the ever-moving goalposts that the DfE keeps on wheels. The recent change in Secretary of State will probably soon yield a new set of requirements. Every time the DfE decides it needs a new data item reported, for example a phonics score for every Year 2 student, suppliers of MIS systems have to update their products. This is an overhead that must be challenging to manage.

Right from its very beginnings as an amateur development project, the SIMS interface has lagged behind the best software. The origins of the product lay in providing schools with ways of collecting, holding and understanding the main datasets relating to school performance, not in empowering teachers in the classroom with solutions that significantly reduced their workload and increased their efficiency. Some developments have done the latter. The introduction of e-registration (led originally by Bromcom, I believe) made life much easier for teachers and administrators. Instead of having to trawl through paper registers each morning looking to see which students are absent, identifying missing students and messaging parents is now a pretty much automated process. But progress has been painfully slow. Only in the last few years have tablet apps appeared that allow teachers to quickly record behaviour incidents. The potential to do so has been with us for at least four years on tablets and more like ten years on laptops.

Local installations are still extraordinarily common for MIS systems. This makes it much more difficult for schools to ensure the availability of their MIS. Even small primary schools need local technical support expert enough to maintain the MIS server, update it and back it up. A number of years ago I went to a local primary school that had lost its SIMS server to hardware failure. They had a local support contract but had never checked that it included backup of the SIMS database. So not only had they lost the server, they had also lost all their historical data.

The potential for MIS systems to transform the ways schools function is enormous; the surface has only been scratched. When data is in the cloud there is potential for schools to learn from others in the same system. How helpful would it be if your MIS was able to point you at departments in other schools where students with a very similar profile (in terms of prior performance, social characteristics and attendance) were achieving better outcomes? Wouldn’t it be great if you could see which strategies had been successful before with a student who was misbehaving in your class? I’d like to see seating plans and behaviour systems showing you which students have the best chance of working together without incident. It would be very useful if teachers were able to track the effectiveness of homework. If they could assign a category or tag to each homework, for example extended writing or reflective writing, and then look back over the term, they could see which types had the best completion rates. They might also be able to see which types of homework led to better results in end-of-module tests. Including lesson planning in an MIS would also allow tracking of results: which categories of lesson had the highest or lowest rates of behaviour incidents, or the best end-of-module test outcomes? Obviously this kind of data wouldn’t always provide clear answers, but aggregated across departments it might lead to some very useful and well-informed professional discussions. In the 15 years I spent as a teacher, all these professional discussions were based on hunches and anecdotes, not data.
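
To make the homework-tagging idea concrete, here is a minimal sketch in Python of how an MIS might aggregate completion rates by tag. The record structure and names are invented for illustration; a real system would pull these rows from its database:

```python
from collections import defaultdict

# Hypothetical homework records: (tag, completed) pairs, one per
# homework set to a student over the term.
records = [
    ("extended_writing", True),
    ("extended_writing", False),
    ("reflective_writing", True),
    ("reflective_writing", True),
    ("worksheet", False),
]

totals = defaultdict(lambda: {"set": 0, "completed": 0})
for tag, completed in records:
    totals[tag]["set"] += 1
    totals[tag]["completed"] += int(completed)

# Completion rate per homework category.
for tag, t in sorted(totals.items()):
    print(f"{tag}: {t['completed']}/{t['set']} completed "
          f"({t['completed'] / t['set']:.0%})")
```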

Teachers are the very best source of ideas about how to improve MIS, but you have to ask them the right questions. If you ask them how their present MIS might be improved, you’ll probably get some good ideas about enhancements to the interface, shortcuts that reduce the number of steps to complete a task, and complaints about illogical nomenclature. But ask them what takes up inordinate amounts of their professional time, or what they would really like to be able to do with data, and with a good understanding of both teaching and data you could begin to develop some exciting innovations.

So how can this situation be unpicked? At the level of government policy, intervention in the marketplace might be very helpful. With our present administration’s high opinion of the power of free markets, this seems unlikely.

Competitors need to find ways to offer a very compelling alternative to Capita SIMS. Schools are generally pretty conservative (small ‘c’), so changing their MIS isn’t something they consider very frequently. The present pressure on school budgets might make this happen a little more often as schools get around to looking very critically at all their areas of spending. A big barrier is the institutional cost of changing: unless the new system is very intuitive and simple, there will be very big costs both for staff training and in lost productivity while staff need help remembering how to use the new system. If Capita didn’t take a fee from systems building on theirs, there might be ways to stealthily hollow out SIMS from the inside. You provide one compelling add-on after another until there is very little of the original SIMS system that the school uses beneath your products; then they are ripe to be transitioned completely away from SIMS. Another big problem for the kind of cloud-based opportunities I mentioned earlier is that they become more and more powerful as the user base increases. With several thousand schools there are very real benefits from being able to learn from other schools, but that isn’t true if you are the third school to use the system.

Perhaps a far-sighted capitalist will see the opportunities here and invest in the development of a product that takes more than a decade to produce real returns? If so we could see some very exciting innovations in MIS functionality.


Flipping Alone isn’t the Answer

This is a piece of research that appeared in CBE—Life Sciences Education in July 2015 (available here). The study looked at the performance of students studying for a Biochemistry major at the University of Massachusetts–Amherst. Over a five-year period the researchers captured student performance in online homework activities as well as end-of-semester tests. The study looked at the performance of 489 students over the whole period. Of these, 244 engaged in active learning in the face-to-face sessions, meaning the use of “personal-response hardware in class” or “student–student interactions facilitated by instructor” and “team-based, collaborative student interactions in class”. The chief conclusion of the paper is that a combination of flipped learning and active learning in class made a significant difference to outcomes in end-of-semester tests. As the authors say, this approach “encourages students to become more engaged with course material, persist in their learning through more timely and accurate preparation, and, ultimately, perform better”. The effect is greater “for lower-GPA students and female students”.

The context of the study is another interesting aspect of the research. “The initial impetus to convert the course described here from a standard lecture format to the flipped format was to keep class sizes from growing (due to increasing numbers of student majors) without substantially increasing the in-class time commitment of the instructor.” In other words, as well as improving outcomes the approach reduced the face-to-face commitments of instructors. But this “increase in instructor efficiency is counterbalanced by the need for extensive development of online material on the part of the instructor, although that effort rapidly diminishes after the first offerings of the flipped course”. After a substantial initial investment of instructor time (and presumably some training for these staff) to create the online resources, fewer resources were then required to achieve better results. The study took place in the United States, within a STEM course in higher education, and the numbers involved are relatively small. Allowing for these provisos, this research should prompt other HE providers to investigate the benefits of such an approach.

Monday 13 June 2016

Research on London Challenge



Tony McAleavy and Alex Elwick, School Improvement in London: A Global Perspective, CfBT Education Trust, 2015

The fact that, as this study says, “London schools have improved dramatically since 2000” and the fact that I ended my teaching career in London in July 2001 are surely not causally linked. Correlation and causation again. Although it is hard to resist feeling that I left just before the party really got going.

This study builds upon an earlier work from CfBT looking in detail at the causal factors underpinning the improvement in London schools between 2000 and 2012.

The study assumes that “what has changed is the internal effectiveness of the schools”. There is a quite cursory dismissal of the possibility that the changes resulted from factors external to the school system: just 19 lines, without a single reference or footnote, are devoted to examining this possibility. It seems a weakness in the research to begin by discounting one set of possible causes without examining any evidence. The authors may be right or wrong to make this assertion, but the study does not help the disinterested reader examine the question.

Nicky Morgan (and probably Michael Gove), should they read this research, will be delighted at how it confirms the direction of government educational policy after 2010. Look at the key factors identified as “enabling” the success:

  • “The power of data”, where “the growth in the use of education performance data and improved data literacy among education professionals” has been highly significant.
  • “The importance of professional development”, particularly because “training became increasingly the responsibility of practitioners rather than expert advisers who had left the classroom”.
  • “The contribution of educational leaders”, they argue, was significant not just because good leaders were recruited, but because “the most significant aspect of the London story was the emergence of the best headteachers as system leaders. … These outstanding headteachers were able to provide highly effective coaching support to other schools.” Just in case you were too dim to spot the reference, they add that “the idea of Consultant Leaders has been adopted at national level” by the present government.
  • “The significance of sustained political support”, so that the strategies were given time and support. In fact they helpfully observe “Teach First and the academies programme continue to this day”.

The Statistical Improvements

I am not impressed by the authors citing both improved GCSE results and better Ofsted inspection outcomes as two separate and independent indicators of progress; the former drives the latter, as any analysis of the data shows. The improvement in the attainment of “high-poverty background students” is impressive (although it’s a shame this category isn’t defined). I would like to know whether there have been changes in the proportion of these students within individual school populations, or across inner or outer London as a whole. There is evidence that where students from deprived populations make up a small minority of a school they experience a smaller deficit in attainment. The authors are more impressed than I am by the changes to the gaps between the attainment of disadvantaged and other students when London is compared to the rest of the UK. There have been improvements nationally, and while the gap is smaller in London it is hard to know whether this is due to some extreme outliers in the UK-wide data. As in some other parts of this report, the unwillingness of the authors to look seriously at possible criticisms or alternative views ultimately weakens rather than strengthens their argument.

iPad Research



I've been undertaking research into 1-1 iPad projects as part of my present role. In order to get some context I've been reading past research. There isn't yet a great deal of work done on iPads, so the number of papers was pretty small. If anyone reading this is aware of good research work, please let me know. The following are my summaries of some of the more interesting pieces.


Rana M. Tamim, Eugene Borokhovski, David Pickup and Robert M. Bernard, Large-Scale, Government-Supported Educational Tablet Initiatives

This isn’t so much a review of large-scale educational initiatives in tablet computing in schools as an extended rant, and that the rant is justified is very clear. Whatever happened to the idea that policy should be evidence-based and rigorously analysed? This study finds precious little evidence of either in global approaches to technology in education. Massive funds are being spent without any clarity about why, or about what the outcomes were.

The study starts by admitting that there’s plenty of evidence that technology can enhance outcomes. Tablets are currently the most fashionable educational technology initiative. The study sets out to answer these questions:

· What explicit and implicit factors are motivating governments to launch tablet initiatives?

· What financial and organisational models are governments using to implement their tablet initiatives?

· What are the intended educational outcomes of the tablet initiatives?

· To what extent are the tablet initiatives aligned with educational policies and strategies?

· To what extent has the use of tablets been integrated with the curriculum?

· What provisions have been made to develop or provide access to relevant educational content on the tablets?

· What provisions have been made for teacher, student and parent preparation for the use of the tablets?

The overall conclusion is pretty damning: “the task proved to be more challenging than expected because of the limited amount of publicly available information, the overall findings of the review confirm the original assumption: that the majority of the tablet initiatives are launched with a hasty and uncalculated approach, often weak on the educational, financial or policy front” (p. 21).

Questions and Answers

What explicit and implicit factors are motivating governments to launch tablet initiatives?

The report is very scathing: “the stated objectives included catchphrases and buzzwords that may have been more fitting for public relations and political campaigns than for educational reform actions” (p. 23).

What financial and organisational models are governments using to implement their tablet initiatives?

The report indicates that published material about this aspect of the initiatives was very limited. It points out some enormous discrepancies: for example, both Jamaica and Turkey spent $1.4 billion on tablets, but the former supported only 24,000 students whereas the latter helped over 10 million. The report doesn’t analyse this further, but it’s hard to understand how Turkey managed to achieve anything significant with $140 per student. Equally, it is hard to see how Jamaica invested c. $58,000 per student even with training and infrastructure spending.
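
The per-student figures follow directly from the report’s headline numbers; a quick sketch of the arithmetic (the budgets and student counts are the report’s, the rounding is mine):

```python
# Per-student spend implied by the report's figures.
budget = 1_400_000_000        # $1.4 billion in both countries
jamaica_students = 24_000
turkey_students = 10_000_000  # "over 10 million"

print(f"Jamaica: ${budget / jamaica_students:,.0f} per student")  # $58,333
print(f"Turkey:  ${budget / turkey_students:,.0f} per student")   # $140
```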

Educational Factors

Probably because they could find so little hard data, the final five questions collapse into one section of the report. The authors don’t mince their words: “the initiatives focused on the hype around tablets and not on their use as a tool to achieve an educational goal” (p. 24).

This is very frustrating for someone involved in educational technology. Clearly governments are spending money on tablets, but without any transparency about the educational aims, financial systems or impact of these projects. It is easy to assume that’s because the thinking hasn’t been done. It is also a criminal waste of money to carry through these projects without maximising the learning they generate.

Haßler, B., Major, L. & Hennessy, S., Tablet use in schools: A critical review of the evidence for learning outcomes, Journal of Computer Assisted Learning, June 2015

Just when I was despairing about tablets, education and intelligent analysis, this paper came to my attention.

Unlike the work on large-scale tablet initiatives it is much smaller in scale and more teacher-facing in its focus, and all the better for it. The study attempts to uncover research that looks closely at how tablets can impact learning. As it says, “the fragmented nature of the current knowledge base, and the scarcity of rigorous studies, make it difficult to draw firm conclusions” (p. 1). But there are some interesting pointers and useful distinctions in this study that make it worth reading.

The authors carried out a search of published material using strict criteria that yielded just 23 studies. Four of these involved fewer than 10 subjects. These limitations are indicative of the lack of rigorous research on this topic.

The published studies were mostly positive about the impact of tablets:

· 16 reported positive learning outcomes;

· 5 reported no difference; and

· 2 reported negative outcomes.

Looking at the positive results the paper finds a number of factors that seem to contribute to successful outcomes. These are:

· high usability and integration of multiple features within one device;

· easy customisation and supporting inclusion;

· touch screen; and

· availability and portability.

The authors delineate some practical considerations. It won’t surprise anyone to read that “effective technology management is critical to the successful introduction of tablets and this should be underpinned by sound change management principles” (p. 13). It also seems evident that “a robust wireless infrastructure, with sufficient capacity to accommodate entire classes of tablets connecting simultaneously” (p. 14) is essential. Good cases are needed for “younger children” (p. 14).

Less tangible factors are also identified. They state that “a supportive school culture that fosters collegiality and teacher empowerment at different levels can be pivotal for the effective introduction of tablets” (p. 13).

It is interesting how differing schemes for distributing tablets impacted on outcomes. “In the one-to-one setting [that is, one tablet per student], there is no competition for tablets among students, and in the studies reviewed there was consistently high group participation, improved communication and interaction. However, the many-to-one groups [i.e. many students to one tablet] generated superior artefacts as all the notes were well discussed among the group members” (p. 15). This finding challenges the common-sense idea that one-to-one schemes are better.

The authors’ practical approach is admirable; for example, they note that the “trade-off between number of devices, screen size, cost, and corresponding effective learning scenarios, remains completely unexplored in the research literature” (p. 18). They point to a fruitful path forward for tablet research, one that focuses on the classroom and school issues that might make the difference between the success or failure of a procurement.



Kevin Burden, Paul Hopkins, Dr Trevor Male, Dr Stewart Martin, Christine Trala, iPad Scotland Evaluation, 2012

This piece of work is now quite old. The title may give the impression that this is a wide study. It isn't. The evidence comes from just 3 secondaries and 5 primaries, mostly in authorities in or neighbouring Edinburgh (apart from one school in Aberdeen). The work involved interviews with staff and students; there wasn't any quantitative data.

They found that "teachers noted that ubiquitous access to the Internet and other knowledge tools associated with the iPad altered the dynamics of their classroom and enabled a wider range of learning activities to routinely occur than had been possible previously". Having iPads also "encouraged many teachers to explore alternative activities and forms of assessment for learning".

The study also found "increasing student levels of motivation, interest and engagement", "greater student autonomy and self-efficacy" and "more responsibility for their own learning".

Apparently "little formal training or tuition to use the devices was required by teachers; they learned experientially through play and through collaboration with colleagues and students".

There is a great deal more detail in the study and if you are considering iPads it is worth reading in full.


Finally, as a counter to any positive impressions created above, I should reference Donald Clark's unrivalled, unequivocal dislike of tablets in schools, for example here.

Friday 4 September 2015

Hattie Politics of Distraction and of Collaborative Expertise


These are two Pearson ‘Open Ideas’ pieces that could well have been published as one paper, because the second makes less sense unless read after the first.

Full marks to Pearson for publishing these, as they aren’t entirely supportive of Pearson’s commercial interests. It just goes to show that there is always dialectic in organisations, even within seeming monoliths.

In the first, Hattie attempts to demolish some assumptions developed by governments and commercial interests to underpin a distinct approach to educational policy.

His first point is that “too much discussion is focused on between-school differences when the greatest issue is the differences within schools”. He quotes evidence that “the variance between schools, based on the 2009 PISA results for reading across all OECD countries, is 36 per cent, and variance within schools is 64 per cent”. The evidence doesn’t utterly convince me. I need to know exactly what the in-school variance is measuring. If it is the difference between the highest and lowest achieving students in the school, then that isn’t necessarily (or even probably) teacher variance. If it is the difference in outcomes for different teachers within the school, then possibly that’s a variance in teacher effectiveness. A raw-score comparison between a teacher with two bottom sets and one with middle and top sets doesn’t prove a difference in the quality of their teaching; it may merely reflect the variance in student ability. The more you consider this, the harder it is to imagine analyses that convincingly demonstrate variances in the effectiveness of teachers. The fact that PISA is the source reinforces that view. As far as I know (and someone correct me if I am wrong), PISA doesn’t record who taught the individual students taking its tests. Measuring teacher effectiveness is a very difficult problem that a great deal of research money has been thrown at, but sifting through, for example, the Shanker Institute’s criticisms of the Gates Foundation’s work on teacher effectiveness, it becomes obvious that there isn’t yet a reliable way to do this.
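
To illustrate why a large within-school share doesn’t by itself imply teacher variance, here is a toy variance decomposition in Python. All the numbers are invented (chosen so the shares land near the quoted 36/64 split); no teacher effect is modelled at all, yet most of the variance still sits within schools, simply because students within a school differ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: 50 schools, 100 students each. A student's score is a
# school effect plus individual spread; teachers appear nowhere.
school_effect = rng.normal(0, 6, size=50)                            # sd 6 between
scores = school_effect[:, None] + rng.normal(0, 8, size=(50, 100))   # sd 8 within

# Standard decomposition for equal-sized groups:
# total variance = variance of school means + mean within-school variance.
between = scores.mean(axis=1).var()
within = scores.var(axis=1).mean()
total = between + within

print(f"between-school share: {between / total:.0%}")  # ~36%
print(f"within-school share:  {within / total:.0%}")   # ~64%
```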

I am much more persuaded by his other policy shibboleths. The standards assertion is this: schools have low expectations, so when government sets higher levels of required achievement it will lift outcomes. But as Hattie points out, “in any education system with standards that are set ‘just above the average’, it is highly unlikely that all students will gain the standard, as it is not possible for all students to be ‘above the average’”. He is hard to disagree with when he states that the goal “should not be to get 100 per cent of students above the standard (unless the standards are set very low), although this is what the current politics demands of our schools”. The argument politicians deploy against this is usually emotional rather than intellectual: “giving up on the disadvantaged”, “the tyranny of low expectations”. There could also be an argument that this approach gradually pushes up the average over time, but there are very real costs in student self-confidence and teacher morale.

Hattie also attacks policy based on narrowing gaps. He says “this is a problem, but the solution is related more to getting all students to improve – especially those just below and just above the average – and not being overly obsessed with those at the bottom”. He is concerned that a focus on the bottom end of the distribution can reinforce stereotypes by associating certain ethnic or social groups with the “tail”. All ethnic and social groups contain a spread of attainment, and policy that supports the upper end can help provide role models and a more positive story about that group.

I found myself nodding in agreement at his dismissal of “flatlining”. “The notion of flatlining places too much emphasis on rankings and not enough on performance, which can be misleading, particularly when the number of countries participating in assessments such as PISA and TIMSS increases.” It is difficult not to see our own government’s use of this argument as a deliberate misleading of the public, especially when the OECD has cautioned against drawing conclusions about changing performance over time where rank places go down.

Hattie is clear that he sees no evidence for a wide range of solutions that are very popular. He states that school choice is a distraction: “Why do we provide choice at the school level when this matters far less than the choice of teacher within a school?” He is absolutely right when he states that all the evidence is that private schooling is no better than state schooling. He is also right that class-size reduction isn’t strongly indicated by the evidence as an effective strategy for school improvement. Similar beatings are visited upon curriculum change, knowledge-based learning (“deep learning, or twenty-first-century skills … is a false dichotomy”), testing, new building programmes, new types of schools, and transformational leaders (an approach business is also finding hard to be weaned off). His criticism of new school structures as a means to educational improvement is worth reading, especially as this is so much promoted in our country. He finds no long-term impact; for example, the charter schools effect size “across three meta-analyses based on 246 studies is a minuscule .07”. Hattie agrees with evidence from the world of business that performance pay undermines collegiality and intrinsic motivation. As he says, “the effects can be the opposite to those desired: teachers in performance-pay systems tend to work fewer hours per week and are involved in fewer unpaid cooperative activities”.

After this brutal assault on these highly favoured approaches to school development, it isn’t surprising that in the second paper, “What Works Best in Education: The Politics of Collaborative Expertise”, he argues for a different solution. He proposes teacher professionalism, a ‘practice of teaching’, so “that there is a difference between experienced teachers and expert teachers; and that some practices have a higher probability of being successful than others”. He says “the greatest influence on student progression in learning is having highly expert, inspired and passionate teachers and school leaders working together to maximise the effect of their teaching on all students in their care”. Of course this only better defines the problem; how to get to this Promised Land is the question. He answers that by setting out a list of tasks to get us there.

Task one is “the need to reframe the narrative away from standards and achievement and to move it towards progression”. He wants an aspiration that every child should make a year’s progress every twelve months. His second task is to define what that means, to “secure agreement about what a year’s progress looks like”. He cites New Zealand where “it is possible to go into any secondary school … and there is confidence in the comparability of how teachers evaluate challenge and progress”. This task requires teachers to undertake “a robust discussion about progression based on the teachers’ judgements of growth”.

He proposes that teachers should expect this level of progress. He states that his research shows that “the greatest influence on learning is the expectations of the students and the teachers”. Hattie argues that new assessment tools are required. He thinks that as well as measures of knowledge we also need measures of learning capability, “such as the extent to which students can engage in collaborative problem-solving, deliberate practice, interleaved and distributed practice, elaboration strategies, planning and monitoring, effort management and self-talk, rehearsal and organisation, evaluation and elaboration and the various motivational strategies – the ‘how to’ aspects of learning”.

These tools will make it possible for school leaders to involve staff and students in developing a clear understanding of the impact they are making. “Leaders need to create a trusting environment where staff can debate the effect they have and use the information to devise future innovations.” Hattie doesn’t say so, but this kind of culture isn’t nurtured where teachers feel vulnerable: grading teachers as ‘outstanding’, ‘good’ or ‘requiring improvement’ alongside performance-related pay is antithetical to Hattie’s vision. Driving even greater progress for students means teachers must be experts at ‘Diagnosis’, ‘Interventions’ and ‘Evaluation’.

“To be expert at diagnosis requires understanding what each student brings to the lesson, their motivations and their willingness to engage. To be expert at interventions requires having multiple interventions so that if one does not work with the student, the teacher changes to another. It also involves knowing the interventions that have a high probability of success, knowing when to switch from one to another and not using ‘blame’ language to explain why a student is not learning. To be expert at evaluation requires knowing the skills of evaluating, having multiple methods and working collaboratively and debating with colleagues to agree on the magnitude of the effect needed for an intervention to be successful.”

Hattie’s policy proposals are compelling. In the UK we seem to spend a great deal of time celebrating schools attended by affluent students. Parents with an interest in education compete to get their children into schools seen as successful, thereby facilitating even better results and more competition for entry. Teachers, leaders and government don’t know where excellent work is being done, because all of this choosing and examining prevents a clear view of the teachers who do best at getting students to make good progress.

The titles of these papers raised an expectation that intrigued and ultimately frustrated me. The first paper touches a little on the politics of school improvement, but the second doesn’t at all. Politics is about power: how it is exercised, and the interests of the different groups struggling for it. Hattie isn’t talking about politics in that sense; he is talking about different policy options. The analysis of whose interests are served by the distractions he describes still needs to be done, because it would reveal much more about the politics of education than Hattie’s two papers do.

Stephen Heppell and Big Data

Stephen Heppell – “This is not scary, this is exciting!”
This is a short paper full of big ideas about Big Data. It is a frustrating read in many places because (surprisingly for an academic) there is no evidence for most of the assertions.
The paper can be found here.
There are some propositions I am happy to go along with, for example: “it is disappointing that our measures of effectiveness, and our management data are both so poor in 2015. As a result, a lot of what we do in schools is simply convenient rather than optimal.” But the observations that follow from this seem poorly informed. “An athlete in any sport would have a precise understanding of their nutritional requirement and the impact of various meal options on their performance”. I’m no sports nutritionist, but “an athlete in any sport”? There isn’t a precise understanding of which foods make for strong performances in all sports. Cycling is the one sport I know well, and here many elite performers have strong historical data to help them determine what might be better foods to consume. But that often means they stick to foods that have worked well in the past: if they have never eaten kale and borlotti bean stew ahead of a day of racing, they won’t know how it might affect their performance. A gifted amateur athlete outside a development programme won’t have anything like that data; they would just have generic guidance to rely on.
This is also pretty obviously facile: “as part of a research project, we asked students for indicators that their learning on a particular day was exceptional; one said ‘that would be how fast I eat my dinner’ because he knew that on a really good learning day he would eat fast to get back to work!” What is the significance of this? One student’s own view of what is significant data doesn’t offer any help to the analyst of a dataset drawn from a wide range of different individuals. There may be many possible reasons why a student eats quickly, and where lessons are timetabled to start at precise times there would be no reason to consume a meal speedily. Likewise: “If a school declares that Wednesday will be a Discovery Day, a day of immersion and of project based, mixed age work, and on that day the children come to school faster and stay longer, we would have learned something important about engagement.” Would we really? Correlation isn’t the same as causation; what if that was a very wet and cold day?
In other places it’s unnecessarily opaque. For example, what does this sentence mean? “Knowing that in office environments a minimum lux level for conversations would be around 250 lux, whilst for close work like typing and writing it would be above 450 lux we started exploring.” A lux is the SI unit of illuminance (one lumen per square metre), but I don’t think the term or the numbers add anything to the argument. It would be much simpler to say that research from offices shows that most school exam rooms do not have an optimal level of ambient light. Stephen Heppell has been a proponent of new builds as a tool for educational improvement, so it’s not a surprise that he finds deficits in buildings. These are important and shouldn’t be discounted: levels of CO2, ambient light and noise must make a difference. The problem is that he doesn’t quantify that difference, and without quantification it’s possible that a school could spend a great deal of money to achieve only marginal gains. Knowing that a factor has a positive or negative impact on learning is just the start of a process of decision making; suggesting that schools act quickly whenever they find one possible source of poor outcomes isn’t helpful. A phone company may find that consumers say they would prefer a titanium model, but if the extra 20% cost of the material only increases sales by 3% and reduces margins, then it’s probably not a good investment.
All of this is frustrating because the overarching suggestions make excellent sense. It is a shame that they have been padded out with examples and ideas that seem half-baked. The big problems for Big Data in schools aren’t examined at all, and that is a serious weakness. There is nothing about ownership of data or how schools will have to manage the increasingly rich datasets they hold. A more level-headed analysis might allow that individual schools won’t be tackling Big Data in the commercial sense, at least not in the medium term. Businesses like Amazon and Google (where Big Data is a reality) have datasets with trillions of data points. Even if a school were to retain ten pieces of data on every school meal eaten in a year, that would only add up to a couple of million data points for an average secondary, and linking that to attainment, progress and other data is even more problematic. It is worth pointing out that no UK state school is at all likely to be in any kind of position to achieve this in the next five years. Schools are at the start of their data-analysis journey and their data is relatively small. Stephen Heppell makes the exploration of the potential of data analysis less rather than more likely by portraying highly ambitious aspirations as relatively close at hand.
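
The “couple of million” figure is easy to check with back-of-envelope arithmetic; the roll and school-day numbers below are my assumptions for a typical English secondary, not figures from the paper:

```python
# Rough estimate of meal data points captured per year.
students = 1_000         # assumed roll of an average secondary school
school_days = 190        # pupils attend around 190 days a year
fields_per_meal = 10     # "ten pieces of data on every school meal"

data_points = students * school_days * fields_per_meal
print(f"{data_points:,} data points per year")  # 1,900,000
```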


Friday 6 March 2015

Authentic Learning for the Digital Generation by Angela McFarlane

Angela McFarlane covers a wide territory in this book, both chronologically and thematically. She draws upon research from the early 1990s through to the present and finds value in all of it. She looks at an extensive breadth of educational technology themes, from e-safety to data representation. The journey swoops in to look in detail at some of these themes, but takes a much higher-level view of the landscape with others. For example, she describes in detail how careful teachers need to be when developing activities involving the representation of data, yet treats the broad issues around user-generated content in learning at a higher level.
I’m sure I’m not alone in finding the online manufacturing of a debate about skills versus knowledge intensely tedious. McFarlane dispatches these spurious disagreements with aplomb; “Given that it is impossible to work with information in a vacuum, you have to have some content to work with, and content knowledge without the understanding to use it effectively is pretty pointless, the skills vs. knowledge dichotomy breaks down on even cursory examination”.
I also like her warnings against digital utopianism and her very practical insights. For example, looking at the widespread ownership of devices amongst school-age learners, she cautions that “it is dangerous to assume this physical access equates to deep and meaningful use of these technologies”.

A useful book brimming with wise, experienced and penetrating insights into every one of the themes she examines; it isn’t an argument or a coherent narrative, but is worthwhile nevertheless. For example, her insistence in many places that it isn’t the technology but how it is used that matters should be at the forefront of the mind of anyone beginning a technology procurement. There are gaps: her chapter on games neglects the highly fruitful new study of gamification. There is one error: Becta was disbanded in 2011, not 2007 as she states. These are minor quibbles; for someone wanting a good understanding of the key issues relating to technology in education, this is a great starting point.
 