The school research lead, data-literacy and competitive storytelling in schools

A major challenge for school leaders and research champions wishing to make the most of research evidence in their school is to make sure not only that they understand the relevant research evidence but also that they understand their school context. In particular, they need to be able to analyse and interpret their own school data. This matters because, when discussing and evaluating data from within your school, you are in all likelihood taking part in some form of competitive storytelling (Barends and Rousseau, 2018). The same data will be used by different individuals to tell different stories about what is happening within the school. Those stories will be used to inform decisions within the school, and if we want to improve decision-making in schools it will help if decision-makers have a sound understanding of the quality of the data on which those stories are based.

To help you get a better understanding of the data-informed stories being told in your school, in this post I’m going to look at some of the fundamental challenges in trying to understand school data and, in particular, some of the inherent problems and limitations of that data. This will involve a discussion of the following: measurement error; the small number problem; confounding; and range restriction. In a future post, I will look at some of the challenges of trying to interpret the data accurately, with special reference to how that data is presented.

The problems and limitations of school data

Measurement error

Measurement error presents a real difficulty when trying to interpret quantitative school data. These errors occur when the recorded response differs from the real value. They may arise because the respondent does not understand what is being asked of them (for example, an NQT who does not know what is being measured or how to measure it); because of how and when the data is collected (say at 5pm on a Friday, or on the last day of term); or because of how missing data is treated (somehow it just gets filled in). These errors may be random, but they can lead to systematic bias if they are not.
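
To make the distinction concrete, here is a minimal Python sketch; the true value of 50, the four-point under-report and the noise level are invented for illustration, not figures from any school. Purely random error largely cancels out in the average, whereas systematic error shifts the whole measure:

    import random

    random.seed(0)

    TRUE_VALUE = 50.0  # the quantity we are actually trying to measure
    N = 1_000

    # Random error: readings are noisy but centred on the true value.
    random_readings = [TRUE_VALUE + random.gauss(0, 5) for _ in range(N)]

    # Systematic error: respondents consistently under-report by 4 points
    # (say, tired staff filling in a survey at 5pm on a Friday).
    biased_readings = [TRUE_VALUE - 4 + random.gauss(0, 5) for _ in range(N)]

    print(f"mean with random error:    {sum(random_readings) / N:.1f}")  # close to 50
    print(f"mean with systematic bias: {sum(biased_readings) / N:.1f}")  # close to 46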

The small number problem

When school data are based on a small number of observations, any statistic calculated from them will contain random error. In schools, small departments are more likely than larger ones to report data which deviates from the true value. For example, in a school where staff turnover is 20%, a small department is likely to show a turnover rate that deviates further from that 20% than a larger department would, as the simulation below illustrates. As such, you would need to be extremely careful about drawing any conclusions about the quality of leadership and management within these departments based on this data (that said, there may be genuine issues, and other sources of data may need to be examined).
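
The sketch below simulates this in Python; the department sizes (5, 25 and 100 staff) and the 10-percentage-point threshold are illustrative assumptions of mine, not figures from the post:

    import random

    random.seed(42)

    TRUE_TURNOVER = 0.2   # whole-school turnover rate
    TRIALS = 10_000

    def observed_rate(dept_size):
        """One simulated year: the fraction of the department's staff who leave."""
        leavers = sum(random.random() < TRUE_TURNOVER for _ in range(dept_size))
        return leavers / dept_size

    for size in (5, 25, 100):
        rates = [observed_rate(size) for _ in range(TRIALS)]
        far_off = sum(abs(r - TRUE_TURNOVER) > 0.10 for r in rates) / TRIALS
        print(f"department of {size:3d}: more than 10 points away from 20% "
              f"in {far_off:.0%} of simulated years")

Under these assumptions, a five-person department misses the true rate by more than ten points in well over half of simulated years, while a hundred-person department almost never does.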

Confounding 

A confound occurs when the true relationship between two variables is hidden by the influence of a third variable. For example, the senior leadership team of a school may assume that there is a direct and positive relationship between teaching expertise and pupils’ results, and may interpret any decline in results as being the result of ‘poor’ teaching. However, it might not be the teachers’ expertise which is the major contributory factor in determining results. It may be that a number of pupils, for reasons completely beyond the control of the teacher, just did not ‘perform on the day’ and made a number of quite unexpected errors. Indeed, as Crawford and Benton (2017) show, pupils not performing on the day for some reason or another is a major factor in explaining differences in results between year groups.
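
The sketch below illustrates the point with invented numbers: each pupil’s score is modelled as a teaching-expertise effect plus a large random ‘on the day’ component, and we count how often a cohort’s results fall even though expertise has genuinely improved. The cohort size, noise level and expertise scores are all illustrative assumptions:

    import random

    random.seed(1)

    COHORT_SIZE = 30     # pupils per year group (assumed)
    ON_THE_DAY_SD = 15   # performance variation outside the teacher's control (assumed)

    def cohort_mean(expertise):
        """Mean score for one cohort: expertise effect plus on-the-day noise."""
        scores = [expertise + random.gauss(0, ON_THE_DAY_SD) for _ in range(COHORT_SIZE)]
        return sum(scores) / COHORT_SIZE

    TRIALS = 10_000
    # Between years, expertise genuinely improves from 60 to 62, and yet...
    drops = sum(cohort_mean(62) < cohort_mean(60) for _ in range(TRIALS))
    print(f"results fall in {drops / TRIALS:.0%} of simulated years "
          f"despite better teaching")

With these numbers, results go down in roughly three simulated years out of ten despite genuinely better teaching.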

Range restriction

This occurs when a variable in the data has less than the range it possesses in the population as a whole, and it is often seen when schools use A level examination results for marketing purposes. On many occasions, schools or sixth form colleges publicise A level pass rates of 98, 99 or 100%. However, what this information does not disclose is how many pupils/students started A levels and subsequently either did not complete their programme of study or were not entered for the examination. Nor does it state how many pupils gained the equivalent of three A levels. If attention is instead focused on the number of pupils gaining three A levels or their equivalent, a completely different picture of pupil success at A level or its equivalent may emerge.
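
A worked example, with figures invented purely for illustration, shows how much the headline rate can hide:

    # Invented figures for illustration only.
    starters = 200   # pupils who began A level programmes
    entered = 165    # pupils actually entered for the examinations
    passed = 162     # pupils who passed

    print(f"headline pass rate (of those entered):  {passed / entered:.0%}")   # about 98%
    print(f"pass rate among all those who started:  {passed / starters:.0%}")  # about 81%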

Implications

If you want to make sure you don’t draw the wrong conclusions from school data, it will make sense to:

·     Aggregate data where appropriate so that you have larger sample sizes

·     Use a range of different indicators, so that measurement error in any single indicator is less likely to mislead you

·     Actively look for data which challenges your existing preconceptions, and make sure all the relevant data is captured and made available, not just the data which supports your biases

·     Avoid jumping to conclusions; more often than not there will be more than one explanation of what happened

And finally 

Remember: even if you have ‘accurate’ data, it can still be misrepresented through the misuse of graphs, percentages, p-values and confidence limits.

References and further reading

Barends, E and Rousseau, D (2018) Evidence-Based Management: How to make better organizational decisions, London, Kogan Page

Crawford, C and Benton, T (2017) Volatility Happens: Understanding Variation in Schools’ GCSE Results, Cambridge Assessment Research Report, Cambridge, UK

Jones, G (2018) Evidence-based School Leadership and Management: A practical guide, London, Sage Publishing

Selfridge, R (2018) Databusting for Schools: How to use and interpret education data, London, Sage Publishing