Making Data Work
It was time to start a blog. Twitter's 140 characters were never going to cope with the subtleties of a personal reflection on using performance data in schools.
Boring subject you say? Well, I would propose that sorting out our approach to the use of performance data holds the key to several troublesome areas of current education practice, including having greater influence over school inspection, knowing where to target school improvement activities, improving professionalism, and valuing the good work that teachers do.
Data is the lowest-level stuff we use to convey measures of what pupils are awarded: grades, levels, scores. Higher up the evolutionary tree is information. One of the great things about ICT is that it can convert data into information. List it and all you see are numbers and letters. Graph it and you have a representation of data that is easier to interpret. But we may still be none the wiser. That is about where we are at the moment with using performance data. The next level up is much trickier. It would be something like an expert system: able to take information and convert it into a commentary, a diagnosis, a recommendation, a plan. That is much more interesting. Now we are looking at the possibilities of using real professional tools for serious work. The most flexible tool that schools have at the moment is Excel. It is powerful in the way a mathematician would wish to use it, but not ideal for understanding what data tells us about learning.
Turning data into action
The IT system that sits in the office of the assistant head with responsibility for standards and achievement deals with a mixture of information and data - the starting point being grades, levels, scores. If we study the data we may notice that in one class there is a group of pupils whose current progress in maths has dipped – useful information. We may deduce that the common factor is that they all came from the same primary school, which had a recruitment problem and couldn't cover the maths scheme of work – that is the professional layer; a diagnosis to act upon. One head of maths in our project found that pupils who attended the Revision Club achieved a whole grade higher in their maths results, and that there was a wide difference in the attainment of maths sets when compared to their performance in other subjects. Can we show that pupils without a computer at home do worse in some subjects than other pupils? Can we show that the visit to the science museum in year 7 made a difference to pupils' progress in the subject higher up the school? Do pupils with high non-verbal reasoning CATs scores do better in the arts subjects? How does boys' and girls' performance vary across subjects? Is the teaching across our department consistent, or is there a need for some targeted CPD?
Tools are needed that let all teachers highlight such variations in performance quickly, both retrospectively and for pupils currently approaching their terminal year of schooling.
Once we can see and quantify a relationship between a ‘local contextual effect’ and its impact on learning at a micro-group level, then we are in a position to either take remedial action, or to further promote those things that we find can make a difference for some learners.
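As an illustration of how little machinery a first-pass comparison of this kind needs, here is a minimal sketch in Python. The pupil records, point values and the Revision Club example are invented for illustration; a real tool would of course read from the school's management information system rather than an inline list.

```python
from statistics import mean

# Hypothetical pupil records: (pupil id, attended Revision Club?, maths points).
# Points are illustrative only (on the old GCSE scale, one grade = 6 points).
pupils = [
    ("P1", True, 52), ("P2", True, 46), ("P3", False, 40),
    ("P4", True, 58), ("P5", False, 46), ("P6", False, 52),
]

# Split the cohort by the local contextual factor we want to test.
club = [points for _, attended, points in pupils if attended]
rest = [points for _, attended, points in pupils if not attended]

gap = mean(club) - mean(rest)
print(f"Revision Club mean: {mean(club):.1f}")  # 52.0
print(f"Others mean:        {mean(rest):.1f}")  # 46.0
print(f"Difference:         {gap:.1f} points")  # 6.0 - one whole grade
```

The point of the sketch is not the arithmetic, which is trivial, but that the grouping factor is local and changeable: swap the Revision Club flag for "has a computer at home" or "visited the science museum in year 7" and the same comparison answers a different improvement question.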
A focus on using tools rather than inventing them
Data is much more interesting when it can be converted from raw grades into professional knowledge. It can then form the basis for action that leads to improvement. Of course, no data tool can do the thinking for its user; but there is plenty of scope for applications that can move the data up a few levels. That way, time-pressed assistant headteachers can reinvest the time they spend on number-crunching in acting upon what the data tells them. Number-crunching was never in the job spec, but there is probably one member of staff in every school who spends more time than is healthy slaving over a hot spreadsheet, possibly even grappling with macros. This is not a good use of their time. Professionals shouldn't have to invent their own tools.
You don’t fatten a pig by weighing it – apparently
I am not sure how often I have heard this phrase over the last few months. It sounds like wisdom to be shared, and received with a nod of agreement. But it might not be the best approach to carry on feeding pigs without checking their weight from time to time. One should do both of course – feed them and check their weight. I was reminded of this phrase reading Mike Kent’s article a while back in the TES about the ‘obsession with data gathering’. Mike is a passionate primary head and you can see his side of things. Much better to put time into teaching the children. But actually if we asked him what diagnostics teachers did to check that all pupils were making an appropriate rate of progress I am sure he would say that they assessed them in all sorts of ways, ranging from informal observations, to checking they had achieved lesson objectives, to formally assessing which levels they had reached in their subjects. Parents would expect this just as I would expect my doctor to do a few tests before prescribing a medicine. I don’t think that Mike, or any teacher, would argue against diagnostics and feedback in teaching. It is when recording data becomes a burden that it is time to ask for better systems for managing it.
Does a teacher need to be a statistician?
Many teachers feel that they should know more about data and statistics. Like the rest of us, they are naturally uncomfortable when someone presents complex graphs or talks about standard deviation. But my view is that ICT should be clever enough to make data speak the language of school improvement. Ideally, teachers shouldn't need to be statisticians, any more than they need to be mechanics in order to drive a car. Discomfort is a sign that something has a way to go in its development. Teachers do need to engage with diagnostics – and software developers must try to make it easier for them to do so.
Data for accountability, or improvement?
Currently, school performance data is mostly used for accountability rather than for research, diagnosis and improvement. But data needs to be made available, along with the appropriate data tools, for teachers to explore patterns of learning and diagnose how it can be made more effective in particular cases. Ofsted inspections can lead teachers to think of data as a stick to beat them with rather than as the pulse of their profession. There is a distinction too between the data that Ofsted use – historical, aggregated, nationally normalised – and the up-to-date, live, organic, locally-contextualised information already available to teachers. Judging schools solely on the basis of data about pupils no longer in the school seems slightly bizarre – perhaps like my garage wanting to fix my car on the basis of faults that were diagnosed a year ago. But it is very difficult for a group of outsiders to collect up-to-date information in just a few days. This is why last year's data will tend to dominate the judgments made in an inspection report. Fine if nothing has changed from last year, but not so fair if schools have made big recent improvements that may not be taken into account due to a lack of time to collect the data. Best practice would suggest that a school should aim to make much greater use of the data available to them to place alongside the historical data. Parents too need to know what is happening now in their children's school - not in the past. An inspection report will always be a little too late to be useful for improvement purposes. In comparison, a school's self-evaluation report should always be up-to-date and driving improvements.
Taking a lead with using data
Many schools perceive inspections as something that is done to schools; whereas the wise school will take an active part in the process, particularly when it comes to putting forward broad evidence of how well the current pupil population is doing. A key reason for inspectors going into a school is to collect evidence on current performance. Ideally a school should be able to say "Here is how this year's Y10 and 11 are doing, here is our analysis of what is needed to get the best from every student, and this is our evidence of our success in achieving that goal". Data is there for teachers to use, not to be judged by. Many schools already use the data available to them in an effective way. These are the schools that will worry much less about what Ofsted will say about them – because they already know much more about themselves than inspectors can hope to find out in the short time that they are in school. We are moving ever closer to standardised school self-evaluation forming the main evidence base for examining the quality of school provision. The new, more comprehensive, School Evaluation Form (SEF) is evidence of this. Ideally, what should accompany this move is greater support to help schools understand the details of the Ofsted evaluation criteria, so that they can be applied consistently and confidently by schools themselves. [N.B. Since writing this, the minister of the day has announced that, because heads complained about the complexity of the SEF, he has abolished it. Many will find this inexplicable.]
To find out if something is effective, keep asking “So What?”
When I was involved in inspections, one of the useful tips I was given was to keep asking 'So what?' and to see if the conversation led to saying how something improved learning. Of course, it would be a bit rude to put it quite like that. Instead, you asked things like, 'Oh yes... and what do you do then?' It proves to be a very effective way to find out if something is effective or not. For example:
"I track progress using this results table." So what?
"I then put in coloured squares depending on whether they are on target or not." So what?
"I turn it into graphs and give it out to heads of subject." So what?
"They analyse it and write me a report." End of trail. Verdict: not particularly effective.
But the conversation could have run like this:
“Subject leaders use data tools to investigate how effective their teaching is.” (So what?)
“They found out that pupils in our accelerated learning pilot did better in the thinking skills component of termly tests.” (So what?)
“We have introduced thinking skills into our tutorial programme in year 9 and subject leaders are reporting greater confidence in group discussions.” End of trail. Verdict: effective.
We are now talking about diagnostics rather than tasks, impact rather than function, learning rather than schooling, pupils rather than data. It is a simple technique that you can slip into any conversation, and is so revealing of the connection between procedures and their effect. The important thing here is that data must never have a life of its own. We must always be aware that we are talking about individual pupils, and always be aware that data is only as good as what you do with it.
How does a school know when it is as good as the best?
A common issue is that schools vary quite a lot in how well they use performance data. We produced a checklist as part of our work with schools to help a school see where they might be on a journey to make data work harder for them. The checklist was made from an amalgamation of all the little things that impressed us from across the schools we visited. There is nothing absolute or scientific about this checklist: indeed, it does press for a particular approach which not everyone will agree with. The approach, once again, comes from observing good teachers. They do their best for all learners. They expect the best from every child irrespective of their background. They believe in breadth in learning, not just teaching those things that might appear in the test. They are not drawn towards the games that make their school look better than it is.
Another angle on using performance data is one of roles. Do teachers think that data is something that others use, or would they welcome seeing data on the impact of their teaching? Does the headteacher think the thrust of their work should be to improve the school's league table position, or for a greater good? Do heads of subject think they should be responsible for standards in the teaching of their subject, or just for managing their bit of the curriculum? Is there a vision of school professionalism informed by the smarter use of information, or is data something that the office staff do? But the big question is: is the investment the school is making in data systems, and in the staff who use them, leading to decisions better informed by diagnosis, and to improvements in the outputs of the school's work? All good food for thought. The 'Data Confident School Toolkit' can be found at www.4matrix.org/toolkit. How does your school look?
Why ‘In School Variation’ is ‘Education’s Greatest Challenge’
The TDA/National College draft publication 'Reducing in-school variation: Making effective practice standard practice' describes the position thus: "The UK education system has one of the highest levels of variation in student outcomes within the OECD. As much as 80 per cent of the variation in achievement among UK students lay within schools."
“Described as ‘Education’s Greatest Challenge’, ‘Within-school variation’ has been pinpointed by the OECD as a key barrier to pupils’ attainment. The organisation’s figures show difference in performance between departments within British schools is four times greater than that between schools. However, the issue goes unnoticed in most schools because they either don’t have sufficiently clear information to pinpoint it or little idea what to do about it.”
'In School Variation' (ISV) is the variation in the quality of the experience of pupils across a school. Arguably, in the best schools, where monitoring and evaluation are well-developed and effective, provision will be much more consistent. The issue is most notable in secondary schools at key stage 4, where DCSF figures suggest that the range between the most and least consistent provision is as high as 16. This means that two identical pupils could study the same subject in two different schools, but one pupil would receive a quality of experience that is 16 times greater in its effect than the other. Although this is difficult to quantify, it probably easily covers the GCSE range where one outcome could be A* and the other a fail. This is such a huge effect that it is surprising that politicians have not focussed on it thus far in the many school improvement initiatives that we have seen arise in education.
An action-research approach to school improvement
My hope is that this will be the year that the primacy of teachers and teaching will be recognised in the continuing series of political initiatives that can sometimes look like experimentation. (Since writing this, the government has published its White Paper ‘The Importance of Teaching’ – which places teaching at the heart of the new inspection framework. In other areas of education policy however, experimentation continues.)
There would be important support from the profession if our political leaders looked hard at the research evidence before deciding on the next school improvement initiative. It looks like an omission that there hasn't been more training offered to help teachers understand how their work will be judged. The National College projects on Within School Variation (WSV) have identified this phenomenon – significant inconsistency in school provision – as one of the last untouched Big Issues that, if tackled, hold the promise to raise standards in some schools significantly. I recently gave a presentation on WSV to some school senior leaders undertaking a Research Masters. The presentation can be downloaded by clicking this link.
Teachers TV has produced some short videos on this subject; one of which sets out the 5 Key Drivers that can reduce variation and improve the consistency of provision.
Click this link to see the video.
Tackling WSV is difficult because it needs the right climate to succeed. It is probably a good indicator of exceptional leadership when a school has been able to make progress on this issue. It is no surprise that the first driver identified is a school's use of data, or as Ray Tarleton, Principal of South Dartmoor Community College, suggests: "You have to understand your school's data. It is like having an X-ray of your school".
Why data is king
“Schools that are proactive in showing inspectors the evidence of their own pupil-level analysis and research tend to do better in their inspection.” This was a message given by Dr Mike Treadaway of the FFT at a conference in 2007 on the use of performance data.
But is it cause or effect? Does paying attention to what the data says improve schools, or are good schools also good at handling data? Either way, it is the case that there are underachieving schools which also have underdeveloped quality assurance systems. It isn't surprising then that one of the areas of support being provided to schools in the National Challenge (under 30% A*-C) is improving their use of performance data. Schools that don't know themselves well put themselves at risk. Ensuring that systems for monitoring and evaluation are as good as they can be should be part of any drive towards excellence. Being able to act effectively on what the evidence says is a characteristic of good leadership. If the Government were designing a national initiative to raise standards, it could probably do worse than to focus on helping those schools that are less good at evaluation become as good as the best.
Tools for the job
The tools that we have been working on were designed to highlight variation. They were specifically developed to allow teachers to easily examine the comparative performance of selected groups of pupils. Our approach is based on good evidence that school improvement can be built on understanding and responding to the 'local contextual circumstances' in which pupils learn. This contrasts with the big national subsets (gender, ethnicity, SEN etc.) that are used in categorising official performance data. Examples would include pupils doing catch-up lessons in a subject, classes which experiment with different seating arrangements, the effect of a pupil-voice project, those who have their main lesson on a Friday afternoon, pupils without a computer at home, the summer-born, those that play an instrument, etc. The list is endless; but understanding the effect of local contexts can often provide the basis for a new understanding of how to improve learning. This is because it is possible to change or influence local contexts. But first we need to know which ones have an effect. Improving the impact of teaching can mean understanding and improving the context in which pupils learn, as well as how they learn. We need to be able to examine not only historical performance, but also the current factors which influence learning. We need tools to take the grind out of handling numbers and reveal meaning in ways that don't require teachers to be statisticians.
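A sketch of the kind of departmental comparison such tools automate (the grade points and subject names below are invented, and this is not the actual 4Matrix implementation): compare each department's results against what the same pupils achieve across the rest of the school, so that a department is judged against its own intake rather than a national average.

```python
from statistics import mean

# Hypothetical grade points for the same four pupils in three subjects
# (invented numbers; on the old GCSE scale, one grade = 6 points).
grades = {
    "Maths":   [40, 46, 34, 52],
    "English": [46, 52, 40, 58],
    "Science": [34, 40, 28, 46],
}

# Each pupil's "expected" level: their mean across all their subjects.
n_pupils = len(next(iter(grades.values())))
pupil_means = [mean(grades[s][i] for s in grades) for i in range(n_pupils)]

# A department's residual: how far its results sit above or below what
# the same pupils achieve elsewhere in the school.
residuals = {
    subject: mean(p - m for p, m in zip(points, pupil_means))
    for subject, points in grades.items()
}
for subject, r in residuals.items():
    print(f"{subject:8s} residual: {r:+.1f} points")
```

In this toy data English sits a grade above the pupils' own cross-subject average and Science a grade below; the same four pupils generate both figures, which is what makes the comparison a measure of provision rather than of intake.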
The importance of action-research
The National College has so far run two major projects which have focussed on In-School Variation (ISV). This year the TDA has taken up its findings to produce a draft publication ‘Reducing in-school variation – Making effective practice standard practice’. Link here
This offers case study material on how schools have tackled variation; providing a significant template for schools to follow. ISV is a sensitive issue because it examines individual teacher performance. The approach and style of senior leaders towards this subject will set the climate in which differences in teaching can be explored without individual teachers feeling threatened. Professor David Reynolds has written that it is vital to establish a culture of 'openness, trust and collegiality' before this issue can successfully be tackled by schools. What our work has shown us is that teachers want to find out more about the impact of their teaching; but they are only comfortable doing this within a genuine climate of research, rather than for accountability. We have referred to this as an 'action-research approach to school improvement'. When a school undertakes such work it recognises the importance of supporting teaching as a dynamic process, responsive to the needs of learners. We have had many positive comments from school leaders about the value of our attempts to support this approach. Some can be seen at www.4matrix.org/comments
The hidden school performance indicator
In-School Variation has been a long-neglected school-improvement issue. It remains well hidden in most schools because schools are rarely asked about the consistency of their provision. If league tables were published which ranked schools in terms of how well they controlled variation, the rank order would be totally different to league tables today and, I would argue, much more useful. Were such tables to be made available, I as a parent would rather send my child to a school where the chances are that they would do well in whichever class they were placed, than to a school with very high variation between its best and weakest provision. There will be schools in leafy areas with high league table positions but with a greater-than-desirable variation in quality, and schools with lower points scores but placed much higher up our ISV table through providing a much more consistent learning experience for pupils. The latter school offers the better odds. Parents deserve to know more about this before they exercise a choice.
On other pages in this blog I hope to expand more on some of these ideas.
You can select them from along the top of this page. I will aim to write about why I think Ofsted are right to downplay CVA, how we came to have tea in the House of Commons tea room, what we learned from the National College, why teaching quality could become the big issue of 2011 – (and since writing this – indeed has) and why this will be no bad thing, how to survive the new inspection framework, why interoperability is both essential and a distraction, and other outrageous stories.
All pictures used on this blog are either taken by myself or licensed from stock photo publishers.
- Mike Bostock, August 2010