Tools for Schools
In this section I aim to return to the issue of using data in a broader, more creative way in order to understand more about the process of learning. I will argue that the development of school self-evaluation goes hand in hand with how we develop our use of data to monitor the learning process.
Diagnosing the problem
If I go to my doctor he might measure my blood pressure, listen to my breathing, perhaps have the nurse do an ECG, maybe send me for a blood test or an X-ray. Armed with this information he will form a diagnosis and prescribe the correct treatment. The parallels with the work of teachers aren’t exact, but teachers are certainly in the business of diagnosing the needs of learners and, as far as is possible, personalising the learning to match the need. Schools are good at doing this for pupils with learning difficulties, often with the help of an educational psychologist who will run some response tests to define the nature of the learning difficulty, and then help the school draw up a learning support plan. But for the majority of pupils, learning isn’t usually personalised to the same extent. Imagine a doctor seeing patients 30 at a time and prescribing them all a general antibiotic and a few painkillers.
Luckily it isn’t quite like that in schools, because teachers do a great deal of informal and formal assessment. Teachers become expert, through experience, at working out how much pupils understand through the use of questioning. Listening to learners speak is a direct route into those brain functions which have assimilated that learning.
But as for professional diagnostic tools, teachers currently have relatively few. What data analysis takes place usually happens in the deputy head’s study. Results in a mark book are one of the best tools currently available because they provide a basis for predicting future achievement from past progress. But the caricature that sums up the problem is the supposedly overheard remark at a parents’ evening, after the umpteenth year 8 parent of the evening: “If he works harder he will do better.” The powers of diagnosis have wearily shrunk back into a statement of the obvious. It is partly to do with teacher/pupil ratios that the delivery of education, and the diagnosis of its impact, can often be very generalised. Teaching is a compromise. But it is also the case that teaching can sometimes be closer to the system description of ‘open-loop control’ than to ‘closed-loop control’ – where sensors provide the feedback that the system needs in order to be adaptable. The classic example of open-loop failure is the teacher who carries on with a lesson when the pupils have actually studied the topic before – without ever having established this fact. The most important sensor for the teacher is their ears, accompanied by skilled questioning and careful selection of a range of learners to provide the feedback that is needed.
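The open-/closed-loop distinction can be made concrete with a few lines of Python. This is purely an illustrative analogy: the function names, the 0.5 threshold and the idea of a numeric ‘understanding’ score are my own assumptions, not a real classroom system.

```python
def teach(segment):
    """Stand-in for delivering one segment of a lesson."""
    return segment

def open_loop_lesson(planned_segments):
    """Open loop: deliver the plan regardless of any feedback."""
    return [teach(seg) for seg in planned_segments]

def closed_loop_lesson(planned_segments, check_understanding):
    """Closed loop: sample feedback after each segment and adapt.

    `check_understanding` is the 'sensor' - here, any function returning
    a 0-1 score for how well the segment landed (an invented measure).
    """
    delivered = []
    for seg in planned_segments:
        delivered.append(teach(seg))
        if check_understanding(seg) < 0.5:  # illustrative threshold
            delivered.append(teach("revisit: " + seg))  # re-teach before moving on
    return delivered
```

The point of the sketch is only that the closed-loop version cannot be written without a sensor; the open-loop version never consults one.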
In the mind’s eye – the opportunity of Learning Platforms
Understanding how the human mind learns is still one of the great frontiers of educational research. Much has been done in recent years to understand how the brain functions and how schooling can be made more compatible with the organic mechanisms which make learning work. Learning is a natural bodily function for all of us, yet one that we are yet to understand fully. If it were possible to get greater feedback about the workings of the mind, then teaching could be made dynamically adaptable. Accelerated learning techniques like those pioneered by Alistair Smith are perhaps the closest we have to an approach that is brain-compatible by design, and therefore efficient for the many – rather than personalised to each child. To be fully personalised requires measurement of learning effectiveness at an individual level. One of the opportunities in recent years for getting closer to this is the development of Learning Platforms.
Using ICT for auto-assessment
Work undertaken by the QCDA a few years ago on online testing threw light on the ability of a computer to monitor every keystroke and perform some interesting statistical calculations, not only on the pupil’s choices in undertaking a task, but also on the way that they carried out the task. A data-sieving process could then combine a number of elemental decisions to confirm the best match of characteristics to one ICT level or another. What it was attempting to measure was ‘capability’ – the practical application of understanding – and hence it was monitoring progress in undertaking a task. Online testing was abandoned, not because this approach wasn’t sound, but because of the unreliability of, and suppositions about, school IT systems and their availability for the task. Similar techniques are being explored in systems that can mark essays. Many say that it takes a human mind to understand and evaluate the infinite complexities of what a student might write; yet, theoretically, if the elemental rules behind the decisions that a teacher makes about the worth of an essay can be defined, then a computer can be programmed to apply them. In fact, I would be concerned at the suggestion that examiners might be making judgments based on unspecified rules. The main obstacle is the myriad subtleties in the meaning of language; but IT, ultimately, has no problem with vastness. Although Learning Platforms are still very new, and the curriculum content that they use may not be specifically structured to support either of the approaches just described, this is an area I would expect to develop over the next few years.
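As a thought experiment, a ‘data-sieving’ process of this kind might look like the sketch below, which matches observed elemental behaviours against level profiles. The level numbers, feature names and profiles are entirely invented for illustration; the real QCDA work is not documented here.

```python
# Hypothetical profiles: which elemental behaviours we would expect to
# observe (1) or not observe (0) from a pupil working at each level.
LEVEL_PROFILES = {
    4: {"used_search": 1, "refined_search": 0, "checked_source": 0},
    5: {"used_search": 1, "refined_search": 1, "checked_source": 0},
    6: {"used_search": 1, "refined_search": 1, "checked_source": 1},
}

def best_level(observed):
    """Return the level whose profile agrees with the observed behaviour
    on the most features - a crude 'best match of characteristics'."""
    def agreement(profile):
        return sum(observed.get(feature, 0) == expected
                   for feature, expected in profile.items())
    return max(LEVEL_PROFILES, key=lambda lvl: agreement(LEVEL_PROFILES[lvl]))
```

A real system would weight features and work from far richer keystroke data, but the shape of the decision (combining many small observations into one best-fit judgment) is the same.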
Learning Metrics – measuring the learning
One of the mysteries of ICT in schools is the stubborn lack of evidence for the impact that many will claim for it. The schools’ IT agency, Becta (now closed), had been telling us for some years that we were ‘on the cusp’ of ‘transformation’ in the way that ICT can influence learning. Themes like ‘Harnessing the Technology’, ‘Fulfilling the Potential’ and ‘Next Generation Learning’ suggest that we have had more success in thinking of ways to describe this vision than in achieving it. But the absence of learning metrics stands firmly in the way of those who make claims about educational transformation using ICT. Without ways to measure the impact it will remain largely a ‘belief’ – like a religion – and belief alone is probably an insufficient basis for justifying the £4 billion spent on computers in schools over the last 30 years. I too believe in the potential for ICT to transform how people learn, and have had a close association with trying to demonstrate this assumption over many years. The essence of the problem possibly lies in the fact that too few of the skills, processes, insights, competencies and capabilities that can develop rapidly when using the medium of ICT are measured by the traditional methods used to assess the outcomes of schooling. The absence of a more comprehensive language of learning that embraces this range of outcomes, and the ability to quantify them, is a big part of the problem.
A learning profile, not a home for pigeons
I was sad to see Howard Gardner’s work on Multiple Intelligences go out of fashion a few years ago. I felt that he had a good model for demonstrating another way of looking at how people may respond to learning stimulus presented to them in different ways. It was useful in the same way that planetary models of atoms are useful: probably nothing like the reality, but they help us understand it nevertheless. Rather than the objection to the ‘pigeon-holing’ of the perceived different ‘intelligences’ that killed off Gardner’s great idea, I saw it as profiling the learning preferences of individuals. I identify with people who say they are a ‘Radio 4 person’, or that they would prefer a map to a description of a route; but it is not one or the other – it is relative strengths and preferences in a number of areas, and not fixed, but capable of development in any of them. Gardner’s model was a great way to share with teachers a practical illustration that learning needs to be multi-sensory if it is to be inclusive, and that we should place value on the full range of achievements that learners are capable of – not just those that we can measure easily, and not just those that fit the conventional view of what an ‘intelligent’ person is. Such a model can serve to remind us that schooling is highly biased towards the linguistically able. If you are good with language you are rewarded at every step of your journey through school. In fact you are rewarded ten times over when you do your GCSEs, because every examination uses the linguistic medium to measure how good you are – even in practical subjects. Renee Fuller’s quote illustrates this perfectly: “If we insist on looking at the rainbow of intelligence through a single filter, many minds seem devoid of light.” We can follow this linguistic filter through to the whole of society’s view of values and merit, including the highly differential rewards given to bankers, engineers and artisans.
But that is a different story. We might speculate that it is the linguistically able who run the world because the rest don’t have the language needed to complain about it.
My attempt to look at the profiling of learner preferences was the Learning Preferences mapping exercise, a two-page practical task for a school inset day which can be seen at this link. For me this was interesting because it attempted to place rough measures on attributes with no natural units. In this exercise, my profile could be Lo4, Mu3, BK1, Vi4, Sp3, IE0, IS1, Li3. Here is something that can be expressed both mathematically and diagrammatically, and mapped over time. It also endorses the power of asking the learner, rather than always seeking to measure their progress without their involvement. Note that every characteristic learning ‘strength’ or preference can be tracked through to highly influential and well-rewarded jobs – for at least the outstanding few. Who are we to place greater value on any one of these areas compared to any other? The Richard Bransons, Paul McCartneys, David Beckhams, Damien Hirsts, Alan Sugars and Jamie Olivers of this world show us how greatness can spring from any quarter. But for the rest of us, society’s linguistic filter will determine our success in life.
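To show what ‘expressed both mathematically and diagrammatically, and mapped over time’ might mean in practice, here is a hypothetical Python sketch using the attribute codes from the example above. The 0–4 scale and the second snapshot are assumptions of mine, not data from the actual exercise.

```python
# A learning-preferences profile: attribute code -> score (0-4 assumed).
profile_2009 = {"Lo": 4, "Mu": 3, "BK": 1, "Vi": 4, "Sp": 3, "IE": 0, "IS": 1, "Li": 3}
profile_2010 = {"Lo": 4, "Mu": 3, "BK": 2, "Vi": 4, "Sp": 3, "IE": 1, "IS": 1, "Li": 3}

def profile_change(earlier, later):
    """Per-attribute movement between two snapshots of the same learner."""
    return {code: later[code] - earlier[code] for code in earlier}

def as_bars(profile):
    """Crude diagrammatic form: one text bar per attribute."""
    return "\n".join(f"{code:>2} {'#' * score}" for code, score in profile.items())
```

Even this toy form supports the point made above: once the profile is numeric it can be charted, compared between snapshots, and discussed with the learner.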
Pupils will tell you everything about school improvement – if you ask the right question
I once used a similar approach in a survey of all the factors which students felt contributed to the quality of their experience in a sixth form. It was possible to map their responses in the same way into a comparative bar chart and then comment on the experiences of different sub-groups: boys/girls, upper/lower sixth, high/low achievers and so on. The results were fascinating, and influential upon future developments of sixth form provision in these schools. A conclusion I came to at that time was that pupils could tell you everything you needed to know about school improvement – if you asked the right question. To begin to measure learning and map a broad profile of how learners are responding to teaching, we need to define a much broader set of attributes than we currently use. Knowledge, Skills and Understanding – the foundation of the current philosophy regarding the outcomes of education – seem insufficient for the task that we have set ourselves if we aim to prove, for example, that ICT is capable of transformation. Transformation in what?
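Mapping survey responses into comparative figures for sub-groups is straightforward to sketch. The record shape below (a dict with "group", "factor" and "score" keys) is an assumption for illustration, not the format of the original survey.

```python
from collections import defaultdict

def subgroup_means(responses, group_key):
    """Mean score per (sub-group, factor), ready for a comparative bar chart.

    `responses` is a list of records like
    {"group": "boys", "factor": "study facilities", "score": 3}
    - a hypothetical shape assumed for this sketch.
    """
    scores = defaultdict(list)
    for r in responses:
        scores[(r[group_key], r["factor"])].append(r["score"])
    return {key: sum(vals) / len(vals) for key, vals in scores.items()}
```

Splitting by a different `group_key` (year group, achievement band) gives each of the sub-group comparisons mentioned above from the same raw responses.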
Steps towards a Unified Learning Profile
Over the years we have seen many schemas that develop a particular approach and philosophy towards an aspect of learning. An example would be Multiple Intelligences, as described above. What these provide us with is an association of ideas arranged in some semantically-linked manner, often supported by a diagram showing the relationship between the elements. Maslow’s triangular Hierarchy of Needs is one diagrammatic example that sticks in most people’s minds. Hierarchical lists are also very common, as in Bloom’s Taxonomy. We can add to the list Learning to Learn, Opening Minds, PLTS, 360 People and others. Each of these provides a model for perceiving structure in some aspect of personal development. Barbara Prashnig’s wonderful book ‘The Power of Diversity’ is stuffed full of useful schemas covering almost every possible perspective on the relationship between learning and the human condition. It has long been a frustration to me that each of these schemes covers an aspect of learning rather than a ‘whole person’ view of learning – but attempting to merge them in any holistic way proves as frustrating as trying to interconnect pieces of Lego, Fischertechnik and Meccano. The different models don’t easily fit together.
A while ago I started discussing the concept of ‘Learning Metrics’ with Roger Broadie, a leading thinker in the world of ICT. On the back of those discussions I tried to come at the problem by asking what teachers of different subjects would say are the skills and attributes needed to get an A* in their subject. You can find answers if you look at the websites of subject associations. What is interesting is that, once you have covered the national curriculum subjects, you find there is about 80-90% commonality between subjects, i.e. the same core of characteristics of effective specialist learners will aid them in each of their subjects. ‘Knowledge-based’ subjects have over 90% overlap in the attributes needed to get a top grade. It is only when you look at dissimilar subjects like mathematics, music, technology and PE that you find a richer and more unique mix of attributes.
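The commonality figure suggests a simple measure: the overlap between two subjects’ attribute sets. A minimal sketch, with attribute lists invented purely for illustration:

```python
def attribute_overlap(subject_a, subject_b):
    """Percentage overlap between two subjects' top-grade attribute sets
    (set intersection over union, expressed as a percentage)."""
    a, b = set(subject_a), set(subject_b)
    return 100 * len(a & b) / len(a | b)

# Hypothetical attribute lists for two 'knowledge-based' subjects:
history = {"analysis", "essay writing", "recall", "source evaluation"}
geography = {"analysis", "essay writing", "recall", "map work"}
```

Run over every pair of subjects, a measure like this would make the 80-90% claim testable rather than anecdotal.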
What this has led to is a prototype of a ‘Unified Learning Profile’. This model shows a hierarchy of groups ranging from ‘knowing how to learn’ at the bottom to ‘being able to do’ at the top. As it stands it defines 40 sub-attributes of learning across seven groups of skills, concepts and processes. It feels highly presumptuous to attempt something like this in the wake of the previous work of such giants of the education world, but the draft schema that we are sharing with schools will only meet with approval if it proves to be a useful way to help teachers and learners better understand and manage the learning process. The schema will appear in a pupil area of a school improvement application that we have been developing with the help of schools. The big question is: if a pupil is on a ‘C’ grade and you wish them to get a ‘B’ grade, what do you say to them other than “you need to work harder”? Some of the answer will lie in the need for the student to remember more subject content; but we also need a diagnostic approach that can identify gaps in the development of the underlying attributes that contribute to effective learning. This way we might also say: “You know how to cut and paste, but you need to develop more techniques for turning information into knowledge: try writing three bullet points, or learn to give a one-sentence précis of a block of text.” Such an approach could contribute towards teaching becoming a more diagnostic profession, through a structured approach to ‘student voice’.
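A diagnostic of the kind described – finding the attribute gaps between a pupil’s current profile and a target-grade benchmark – could be sketched as below. The benchmark attributes and scores are hypothetical; they are not taken from the actual Unified Learning Profile schema.

```python
# Hypothetical benchmark: attribute scores typical of a 'B'-grade student
# in one subject (invented names and values).
B_GRADE_BENCHMARK = {
    "recall": 3,
    "turning information into knowledge": 3,
    "extended writing": 2,
    "self-review": 2,
}

def diagnose_gaps(pupil, benchmark):
    """Attributes where the pupil falls short of the target-grade benchmark,
    largest gap first - the raw material for specific advice."""
    gaps = {attr: target - pupil.get(attr, 0)
            for attr, target in benchmark.items()
            if pupil.get(attr, 0) < target}
    return sorted(gaps.items(), key=lambda kv: -kv[1])
```

The output of such a diagnosis is what would let the conversation move from “work harder” to “here is the specific attribute to develop next”.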
Tools of the profession
Teaching is a profession with a relatively underdeveloped base of professional tools. There is scope for developing these tools so that teachers can refer to a broader diagnostic profile for the judgments that they make and the decisions that they take. Assessment data and examination results provide evidence of formal learning – that which provides the measured outcomes of schooling. Data has often stopped at providing a summary, when it is equally capable of providing a diagnosis. The opportunities come from looking not only at individual performance, but also at differences in group performance. The ability to interpret and diagnose is extended by a greater knowledge of the preferences, experiences, history and aspirations of individual pupils. We know too that developments in Learning Platforms in particular could help to extend and capture additional evidence about the process of learning, and about the skills and aptitudes that formal assessment cannot measure. New approaches can make teaching more data-driven, but only if we can first get the data out of the school office and onto teachers’ laptops. An unkind caricature is that a doctor has a stethoscope whereas a teacher has an opinion. Data tools are needed for teachers to back their hunches and provide the basis for their diagnosis. If teaching is a profession with status then feedback to parents must never stray anywhere near “If he works harder, he will do better”. There needs to be a diagnosis equivalent to that which a GP could provide to a patient. Were he to say “if you stay healthy you won’t get sick” we might start to question his authority.
Too much data – too little sense
Data in education is having a bad press. According to the opinions of educationalists that we read, there is too much data, too much form filling, and too many tick-boxes. The problems seem to be that there is a glut of elemental data, too little meaning attached to it, and no visible positive purpose in processing it. Data is not embraced by schools because it can be used to judge teachers rather than to empower them.
When schools use data it is also often handled at too low a level. If we don’t use IT to make data work at our level then it tends to make people operate like machines. Maybe we should stop thinking of numbers and start thinking of learners. Data needs to be converted into the language of school improvement – away from standard deviations, confidence levels and scattergrams, and into learning attributes, methods and remedies. One headteacher of an Academy involved in a trial of our tools put it this way: “Having lots of data is not what self-evaluation is about. What counts is having the right tools to make top-level judgments on that data.” Perfectly summed up.
A key purpose of collecting elemental data is to use it to support diagnosis and intervention. The aim of developers must be to put well-designed data tools into the hands of teachers – tools which work like tools. A tool is designed to amplify a small amount of effort into a large change, yet education often seems to be full of ‘tools’ which only give you back what you typed in. Performance Information Tools need to reveal new information from the data if they are to be useful, and we need to get used to judging such tools by their value, i.e. the impact of using them. We should encourage the role of those who manage data in schools to develop from functional to effective. Displays of data can look pretty, but their purpose may be simply ornamental unless you can say “Now I can view the data this way, I can see that…” and describe something that will form the basis for intervention. Better still, the tool itself should write a terse commentary making expert use of the combination of available data, allowing the teacher to use it to support their diagnosis and subsequent action.
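A tool that ‘writes a terse commentary’ can start from very simple rules. The thresholds, wording and score scale below are illustrative assumptions only, not a description of any real product.

```python
def commentary(group_mean, school_mean, group_name):
    """Terse rule-based comment comparing a teaching group with the school
    mean on some common points scale. Thresholds are invented."""
    gap = group_mean - school_mean
    if gap <= -0.5:
        return f"{group_name} is attaining well below the school mean; review provision."
    if gap >= 0.5:
        return f"{group_name} is attaining well above the school mean; identify what is working."
    return f"{group_name} is broadly in line with the school mean."
```

The value is not in the sentence itself but in the fact that the teacher receives a judgment to test against their own evidence, rather than a raw column of numbers.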
With tools comes more control over self-evaluation
A main design criterion is that Performance Information Tools should reveal meaning in the data that they handle, and allow the flexibility to follow up a hypothesis without the need for the teacher to go on a course in statistics. The best-designed tools will support teacher research into effective learning. Performance Information Tools are the stethoscope of the teaching profession. They have the potential to make a teacher far more confident about how well the pupils in their classes are doing. If an inspector observes a class and presents a hypothesis, the data-confident teacher can weigh it up against their own evidence. Better still, teachers can give their evidence to the inspector in advance and tell them how things are. If all teachers in a school did this then the main task for inspectors would be not to gather and process sufficient data in the short time that they are in the school, but to examine the school’s own evidence and seek to endorse its judgments. There might even be time left to offer advice. This would be much less stressful all round because there would be fewer unknowns and more solid ground for the teacher to stand on. It would also lend support to the proposal to inspect ‘good’ schools only every five years. And it wouldn’t only be ‘good’ schools that could be inspected less often; it could also include schools that can demonstrate that they have effective systems for inspecting themselves.
Few organisations in the real world would willingly leave quality control in the hands of an outside body. It can lead to a misinterpretation of purpose and values – and to inaccuracy – because essential performance data should never be gathered in a hurry. Schools should claim back their evaluation as their own, and professional tools that make this easier to do will aid the movement away from data-gathering inspections towards a professional validation service.
In most schools there will already be a responsibility in this general area at assistant head or deputy level. Specific training to beef up this role as leader of school self-evaluation would be an investment in future headteachers with these skills. One obvious focus would be to introduce a criteria-referenced lesson observation scheme. A search for lesson observation prompt sheets on the Internet reveals many variations in approach and in the criteria used. Schools ought to be looking at lessons exactly as inspectors would, and be trained to use the evidence to make the same judgments. Making standardised school self-evaluation a proper integrated component of school improvement work is an obvious and overdue initiative that would take a big step towards developing the professionalism of teachers and school leaders.
What advantage does the self-evaluating school have?
Schools that have well-developed self-evaluation systems that align well with the new Ofsted inspection schedule will be at a significant advantage when it comes to their inspection. Schools that encourage teacher-level research into pupil performance go a stage further. Such schools take the lead on Quality Assurance rather than leave it to an outside body. They will provide their own school level evidence of effectiveness rather than allow themselves to be judged by historical data. They will provide front-line evidence into the current work of the school, and in particular the role played by school leaders at all levels. They will be adding to the national data set that might otherwise form the main basis for their evaluation, with more up-to-date and locally-significant information about the achievements of pupils. Rather than data being an end in itself, as in league table positions, this data has names attached and is truly personalised.
Variation – an inconvenient truth
In the first section of this blog I described the OECD’s findings that the UK education system has the highest variation (inconsistency of provision) of all the OECD countries. At Key Stage 4, variation can be 12 times greater within some schools than it is generally between schools. This means that the fuss that is made about differences in the league table positions of schools is dwarfed by the differences in provision that we can find within schools. The effect of this is that in a school with low variation (consistent provision) a pupil is more likely to achieve their potential than in a school where what they achieve is dependent on which teaching group they are placed in. For a school with very high variation we might have some brilliant teaching that leads to A-C grades being attained, and some uninspiring teaching that could give rise to D-G grades, or even a fail for some pupils.
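The within-school versus between-school comparison is, at heart, a comparison of two variances. A minimal sketch, assuming results are grouped by teaching group (or by school) on a common points scale; the grouping and data shape are assumptions for illustration.

```python
def variance(xs):
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def within_and_between(groups):
    """Mean within-group variance vs variance of the group means.

    `groups` maps a group label (teaching group or school) to a list of
    pupil scores. A large ratio of within to between is the pattern the
    OECD finding describes.
    """
    within = sum(variance(xs) for xs in groups.values()) / len(groups)
    between = variance([sum(xs) / len(xs) for xs in groups.values()])
    return within, between
```

With real grouped results, the two numbers returned would let a school state its own variation rather than infer it from league-table noise.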
It is an issue of entitlement: for pupils, to a good quality of educational experience – whoever teaches them – and for teachers, to continuing professional renewal. These are absolute fundamentals for any educational institution. For a school that is using data management tools effectively, not knowing whether its provision is consistent is simply not credible. Graham Silverthorne, head of Gordano School, said of this point: “If we have the data and we do nothing about it, then we are part of the problem” – Teachers TV, ‘ISV and Leadership’ (link). Furthermore, not having a dynamic CPD programme based on identifiable needs would be considered a weakness of professional leadership in any organisation.
‘In-School Variation’ (ISV) is a highly inconvenient truth that hangs over all school leaders, and is the elephant in the room at every school improvement meeting. Reducing variation is an essential quality control mechanism that every serious production process should make a high priority. We may object to comparisons between schools and a production process, but schools are dealing with a commodity much more precious than pork pies – pupils’ future lives – so QA is even more important in ensuring basic fairness as a minimum, and excellence for all as an aspiration.
In 2005, the National College ran two significant projects investigating how best to tackle ISV. The findings of this work are worth reading in detail, but a very brief summary would be that ISV:
• Is an enduring performance issue for many schools, particularly at Key Stage 4
• Is significantly attributable to variation in teacher competence
• Is not specifically being addressed by schools in their school improvement work
• Requires well-developed data systems to provide measures and show improvement
• Is hard for schools to tackle, even with funding and support
• Requires a climate of openness, trust and collegiality
There is a short Teachers TV video on this subject at this link, in which Ray Tarleton, Principal of South Dartmoor Community College, explains how his college has tackled ISV. “If you understand your data you will have an X-ray of the school,” he says. “Too often school leaders have ducked ISV because it is an uncomfortable area. It requires quite focussed leadership.”
I would argue that tackling ISV is a characteristic of Outstanding Leadership. ISV is, in many ways, the ultimate challenge to school leaders and also the test of the impact of that leadership in creating the circumstances where an initiative in this area could succeed. But it is also reasonable to suggest that a person occupying a top leadership post in a school should be ensuring that their school does what it says on the tin. ISV is not only an inconvenient truth but an inescapable consideration for the serious, self-evaluating, data-confident school leader.
Improving self-evaluation means using data better
We might expect that the use of performance data will develop and mature – there are too many benefits to the core business of schools for it to remain a functionary activity. But the implications of doing it properly mean placing a focus on teaching effectiveness, identifying the ingredients which support effective learning, and developing a greater understanding of how we can evaluate the learning process. It means a deepening of the professionalism of teaching and a stronger academic basis for doing what we do. Teaching is not like driving a bus; it is much closer to general medical practice. Teaching needs to become a more diagnostic profession, and teachers need the new tools that will define their professional standpoint over the next period of educational improvement.
This was to be an end of sorts to these thoughts. But there are still many loose ends in my mind. If this were a proper book I would be rewriting it now. I need to try to write some more about the problems of distortion caused by league tables and target setting. I also want to return to how data tools might be further developed. Please comment on any of these thoughts. It is a learning process for me.