Useful sources of information for teachers, school research leads and senior leaders

No doubt as the summer holidays draw to a close and the new term approaches, there will be teachers, school research leads and senior leaders preparing to deliver a start-of-term INSET/CPD session with a focus on evidence-informed practice. To help those colleagues prepare for such a session, I thought it might be useful to share a range of resources – books, blogs, resources available online and institutional websites – which they might find useful. It's not an exhaustive list, but it might point you in the direction of something which helps you deliver a session of real value for colleagues. So here goes:

Books


Ashman, G. (2018) The Truth about Teaching: An evidence-informed guide for new teachers. London: SAGE.

Barends, E. and Rousseau, D. M. (2018) Evidence-based management: How to use evidence to make better organizational decisions. London: Kogan Page.

Brown, C. (2015) Leading the Use of Research & Evidence in Schools. London: IOE Press.

Cain, T. (2019) Becoming a Research-Informed School: Why? What? How? London: Routledge.

Didau, D. (2015) What if everything you knew about education was wrong? Crown House Publishing.

Hattie, J. and Zierer, K. (2019) Visible Learning Insights. London: Routledge.

Higgins, S. (2018) Improving Learning: Meta-analysis of Intervention Research in Education. Cambridge: Cambridge University Press.

Kvernbekk, T. (2016) Evidence-Based Practice in Education: Functions of Evidence and Causal Presuppositions. London: Routledge.

Netolicky, D. (2019) Transformational Professional Learning: Making a Difference in Schools. London: Routledge.

Petty, G. (2009) Evidence-based teaching: A practical approach. Nelson Thornes.

Weston, D. and Clay, B. (2018) Unleashing Great Teaching: The Secrets to the Most Effective Teacher Development. Routledge.

Wiliam, D. (2016) Leadership for teacher learning. West Palm Beach: Learning Sciences International.

Willingham, D. (2012) When Can You Trust the Experts? How to Tell Good Science from Bad in Education. San Francisco: John Wiley & Sons.


Blogs

Rebecca Allen

Christian Bokhove

Larry Cuban

Centre for Evaluation and Monitoring

Harry Fletcher-Wood

Blake Harvard

Ollie Lovell

Alex Quigley

Tom Sherrington

Robert Slavin

Other resources

Barwick, M. (2018) The Implementation Game Worksheet. Toronto, ON: The Hospital for Sick Children.

CEBE (2017) 'Leading Research Engagement in Education: Guidance for organisational change'. Coalition for Evidence-Based Education.

CESE (2014) ‘What Works Best: Evidence-based practice to help NSW student performance’. Sydney, NSW: Centre for Education Statistics and Evaluation.

CESE (2017) 'Cognitive Load Theory: Research that teachers need to understand'. Sydney, NSW: Centre for Education Statistics and Evaluation.

Coe, R. and Kime, S. (2019) 'A (New) Manifesto for Evidence-Based Education: Twenty Years On'. Sunderland, U.K.: Evidence-Based Education.

Coe, R. et al. (2014) What makes great teaching? Review of the underpinning research. London: Sutton Trust.

Deans for Impact (2015) ‘The Science of Learning’. Austin, TX: Deans for Impact.

Dunlosky, J. (2013) 'Strengthening the student toolbox: Study strategies to boost learning', American Educator, 37(3), pp. 12–21.

IfEE (2019) ‘Engaging with Evidence’. York: Institute for Effective Education. 

Metz, A. & Louison, L. (2019) The Hexagon Tool: Exploring Context. Chapel Hill, NC: National Implementation Research Network, Frank Porter Graham Child Development Institute, University of North Carolina at Chapel Hill. Based on Kiser, Zabel, Zachik, & Smith (2007) and Blase, Kiser & Van Dyke (2013).

Nelson, J. and Campbell, C. (2017) 'Evidence-informed practice in education: meanings and applications', Educational Research, 59(2), pp. 127–135.

Rosenshine, B. (2012) 'Principles of Instruction: Research-based principles that all teachers should know', American Educator, Spring 2012.

Stoll, L. et al. (2018) 'Evidence-Informed Teaching: Self-assessment tool for teachers'. London, U.K.: Chartered College of Teaching.

Useful websites

Best Evidence in Brief – a fortnightly newsletter which summarises some of the most recent educational research.

Best Evidence Encyclopaedia – a website created by the Johns Hopkins University School of Education which provides summaries of scientific reviews, designed to give educators and researchers fair and useful information about the evidence supporting a variety of teaching approaches for school students.

Campbell Collaboration – promotes positive social and economic change through the production and use of systematic reviews and other evidence syntheses for evidence-based policy and practice, 38 of which have been produced for education.

Chartered College of Teaching – the professional association for teachers in England; provides a range of resources for teachers interested in research use.

Deans for Impact – a group of senior US teacher educators committed to the use of research in teacher preparation and training.

Education Endowment Foundation Guidance Reports – provide a range of evidence-based recommendations for how teachers can address a number of high-priority issues.

Education Endowment Foundation Teaching and Learning Toolkit – a summary of the international evidence on teaching and learning for 5–16-year-olds.

EPPI-Centre – based at the Institute of Education, University College London, the EPPI-Centre is a specialist centre for the development and conduct of systematic reviews in social science.

Evidence for Impact – provides teachers and school leaders with accessible information on which educational interventions have been shown to be effective.

Institute of Education Sciences – the statistics, research, and evaluation arm of the U.S. Department of Education, whose role is to provide scientific evidence on which to ground education practice and policy, and to share this information in formats that are useful and accessible.

Research Schools Network – a group of 32 schools in England, supported by the Education Endowment Foundation and the Institute for Effective Education, which supports the use of evidence to improve teaching practice.

Teacher Development Trust – provides access to resources for teachers interested in research use and continuing professional development.

The Learning Scientists – a US-based group of cognitive psychological scientists interested in the science of learning who want to make scientific research on learning more accessible to students, teachers, and other educators.

What Works Clearinghouse – part of the IES, the What Works Clearinghouse reviews educational research, determines which studies meet rigorous standards, and summarises the findings, so as to answer the question 'What works in education?'

And remember

Just because a writer, text or organisation appears on the above lists, you still need to critically engage with what is said or written. You still need to ask: What is it? Where did I find it? Who has written/said this? When was this written/said? Why has this been written/said? How do I know if it is of good quality? (Aveyard, Sharp and Woolliams, 2011)


Aveyard, H., Sharp, P. and Woolliams, M. (2011) A beginner’s guide to critical thinking and writing in health and social care. Maidenhead, Berkshire: McGraw-Hill Education (UK).

The school research lead, The Implementation Game and increasing your chance of successfully implementing an intervention

In last week's blog we looked at how school leaders could use the Hexagon Tool to help them make better decisions as to whether a particular intervention is right for their school and setting. In this week's blog I'm going to look at what comes next – the implementation of the intervention – and how the work of Melanie Barwick and The Implementation Game (TIG) can increase your chance of actually bringing about improvements for your pupils and staff.

Put simply, The Implementation Game is a resource that helps you develop an implementation plan for whatever intervention you are looking to introduce. Based on the research evidence from the field of implementation science, TIG is 'played' by the group of people who will be helping you implement the intervention. In particular, it gets the implementation team to think about five different stages of implementation:

·     Preparing for practice change – choosing an innovation – for example, questions around your needs, your desired outcomes, and the potential evidence-based practices which could achieve those outcomes

·     Preparing for practice change – readiness – whether the proposed innovation meets your needs, whether it is a good fit, what changes will need to be made, what resources are available, what capacity is available to sustain the innovation, how you will obtain and maintain buy-in, and how you will communicate the goal of the innovation

·     Implementation structure and organisation – what partnerships will be required, what training will be required, what physical space will be needed, how you will maintain fidelity to both the implementation process and the innovation, and what technology will be needed

·     Ongoing implementation support – what staff training will be provided, what technical assistance and coaching will be made available, what data you will collect to evaluate process and outcomes, and how you will go about learning to improve your processes

·     Maintaining fidelity and sustaining the innovation – how you will maintain fidelity and quality over time

In addition, TIG provides a range of other resources which help you think through the following (a simple way of recording the answers to all of these prompts is sketched after the lists below):

·     The different factors that might be relevant for your intervention – for example, the characteristics of the intervention, the outer setting and external factors, the inner setting and internal factors, characteristics of individuals involved and the process of engaging with them.

·     Implementation strategies – gathering information, building buy-in, developing relationships, developing training materials, financial strategies and incentives, and quality management

·     Implementation outcomes – for example, acceptability, adoption, appropriateness, costs, feasibility and fidelity
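Purely by way of illustration – this is not part of Barwick's resource, and the stage prompts and helper function below are my own invented examples – here is a minimal Python sketch of how an implementation team might record its answers to prompts like those above, stage by stage, so that unanswered questions remain visible:

```python
from dataclasses import dataclass

# The five stage names follow the list above; the prompts and the helper
# function are illustrative assumptions, not part of TIG itself.
STAGES = [
    "Choosing an innovation",
    "Readiness",
    "Implementation structure and organisation",
    "Ongoing implementation support",
    "Maintaining fidelity and sustaining",
]

@dataclass
class Prompt:
    stage: str
    question: str
    answer: str = ""  # completed by the team as TIG is 'played'

def unanswered(prompts):
    """Return the prompts the team has not yet answered."""
    return [p for p in prompts if not p.answer.strip()]

plan = [
    Prompt(STAGES[0], "What need are we trying to meet, and what outcomes do we want?"),
    Prompt(STAGES[1], "Is the innovation a good fit, and how will we obtain buy-in?"),
    Prompt(STAGES[3], "What data will we collect to evaluate process and outcomes?"),
]

plan[0].answer = "Improve Year 7 reading comprehension."
for p in unanswered(plan):
    print(f"[{p.stage}] {p.question}")
```

Nothing more is implied than a structured record: the value of TIG lies in the team discussion the prompts generate, not in any particular way of writing the answers down.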

A few observations

It seems to me that TIG is a useful tool that helps you engage in a rigorous process of planning the implementation of an intervention. However, using the tool will not guarantee success – that will depend upon many factors, including your skill both in using TIG and in subsequently implementing the identified actions. Indeed, one thing that I really like about the tool is that right from the beginning it gets you to think about the sustainability of the intervention – it is not just about implementing an innovation, ticking a box and saying 'job done'.

And finally 

This will be my last blog of the academic year – I intend to return with new resources and material at the end of August.


Barwick, M. (2018) The Implementation Game Worksheet. Toronto, ON: The Hospital for Sick Children.

The school research lead, the hexagon tool and making good decisions about implementing interventions

As we approach the end of the academic year, you will no doubt be giving some thought to the new practices or interventions you wish to adopt this coming September. Unfortunately, we know that once implemented many of these interventions will not live up to their initial promise. Maybe the evidence supporting the intervention was not that robust and its benefits were overstated; maybe there isn't the external or internal expertise available to support the implementation; maybe the intervention doesn't fit with other processes and practices within the setting; or maybe it runs counter to the existing school culture and is met with resistance from some of the people who need to implement it.

However, it might be possible to increase your chances of choosing an intervention that not only appears to work in other settings but has a good chance of working in yours. One way of doing so is to undertake some form of structured evaluation of both the intervention and your setting before the intervention is implemented. To help you do this, I'm going to suggest that you have a look at something known as the Hexagon Tool (Metz and Louison, 2019), which will help you undertake a structured appraisal of: the research evidence backing claims for the intervention's effectiveness; whether there is a clear and usable intervention which can be adapted to the local context; the support available to help implement the intervention; whether the intervention meets the needs of your school/setting; whether the intervention is a good fit with other processes and practices within your school/setting; and whether your school/setting has the capacity to implement the intervention.

Figure 1: The Hexagon Tool (Metz and Louison, 2019)

Metz and Louison go on to provide guidance on when to use the tool – ideally at the early stages of the decision-making process about whether to adopt the intervention. They also provide guidance on how to use the tool – both the tasks which need to be completed before it is used and what needs to be done as it is being used.

Of particular use is that they provide both a set of questions and an associated rating scale to help you make judgements about each of the six elements. For example, for the 'evidence' component they pose the following questions.

1. Are there research data available to demonstrate the effectiveness (e.g. randomized trials, quasi-experimental designs) of the program or practice? If yes, provide citations or links to reports or publications.

2. What is the strength of the evidence? Under what conditions was the evidence developed?

3. What outcomes are expected when the program or practice is implemented as intended? How much of a change can be expected?

4. If research data are not available, are there evaluation data to indicate effectiveness (e.g. pre/post data, testing results, action research)? If yes, provide citations or links to evaluation reports.

5. Is there practice-based evidence or community-defined evidence to indicate effectiveness? If yes, provide citations or links.

6. Is there a well-developed theory of change or logic model that demonstrates how the program or practice is expected to contribute to short term and long term outcomes?

7. Do the studies (research and/or evaluation) provide data specific to the setting in which the program or practice will be implemented (e.g. has it been researched or evaluated in a similar context)? If yes, provide citations or links to evaluation reports.

8. Do the studies (research and/or evaluation) provide data specific to effectiveness for culturally and linguistically specific populations? If yes, provide citations or links specific to effectiveness for families or communities from diverse cultural groups.

They suggest you use your answers to these questions to make a rating judgement, based on the following five-point scale.

5 High Evidence

The program or practice has documented evidence of effectiveness based on at least two rigorous, external research studies with control groups, and has demonstrated sustained effects at least one year post treatment

4 Evidence

The program or practice has demonstrated effectiveness with one rigorous research study with a control group

3 Some Evidence

The program or practice shows some evidence of effectiveness through less rigorous research studies that include comparison groups

2 Minimal Evidence

The program or practice is guided by a well-developed theory of change or logic model, including clear inclusion and exclusion criteria for the target population, but has not demonstrated effectiveness through a research study

1 No Evidence

The program or practice does not have a well-developed logic model or theory of change and has not demonstrated effectiveness through a research study

A few observations

A framework such as the Hexagon Tool is extremely helpful in getting you to think about the different aspects of implementing an intervention. Not only that, it does so in a way which should allow you to summarise your evaluation in a form which is easily communicable to others, using the rating scale and perhaps a 'spider diagram'. However, before you can make good use of the tool, you will probably have to make a few adjustments to some of the detailed descriptions of the elements and the associated questions, so that they reflect your context and system rather than the US system in which the tool was devised. In addition, it's important to remember that the Hexagon Tool is not a substitute for your professional judgement: you will still need to make a decision as to whether or not to proceed with the intervention.
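To give a concrete sense of the 'spider diagram' idea, here is a minimal Python sketch. Only the six element names come from Metz and Louison (2019); the scores are invented, and the whole thing is an illustration rather than anything the tool prescribes:

```python
import numpy as np
import matplotlib.pyplot as plt

# Element names follow the Hexagon Tool; the ratings are invented examples
# on the tool's 1-5 scale.
elements = ["Evidence", "Usability", "Supports", "Need", "Fit", "Capacity"]
ratings = [4, 3, 2, 5, 3, 2]

# Spread the six elements evenly around a circle and close the polygon.
angles = np.linspace(0, 2 * np.pi, len(elements), endpoint=False).tolist()
angles += angles[:1]
scores = ratings + ratings[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, scores)
ax.fill(angles, scores, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(elements)
ax.set_ylim(0, 5)
ax.set_title("Hexagon Tool ratings (illustrative)")
plt.show()
```

A chart like this makes it easy to see at a glance where an intervention is weak – a low 'Capacity' score, say – before any decision is made.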

And finally

Tools like the Hexagon Tool are extremely useful in helping you organise your thinking but they are not a substitute for thinking about the intervention and whether ‘what worked there’ might in the right circumstances ‘work here.’


Metz, A. & Louison, L. (2019) The Hexagon Tool: Exploring Context. Chapel Hill, NC: National Implementation Research Network, Frank Porter Graham Child Development Institute, University of North Carolina at Chapel Hill. Based on Kiser, Zabel, Zachik, & Smith (2007) and Blase, Kiser & Van Dyke (2013).

ResearchED and 300,000 words later - some reflections

The first ResearchED event I attended was the London national conference in September 2014. Without doubt, this was some of the most inspiring and influential professional development I had experienced in the thirty years I had been involved in education. It was inspiring because I was taking part in an event with over 1,000 teachers who had given up a Saturday morning to speak and listen about something they cared about, i.e. improving teaching and learning through the appropriate use of research evidence. It was influential in that it got me thinking, reading and writing about evidence-based school leadership and management.

ResearchED London 2014 got me thinking about evidence-based school leadership and management for two reasons. First, the vast majority of the sessions at the event had a focus on teaching and learning, and little attention seemed to be paid to the role of research and other sources of evidence in the decision-making of senior leaders in schools. Second, that summer I had by chance read an article by Adrian Furnham which introduced me to the discipline of evidence-based management, and I was intrigued as to whether there was a possible synthesis with evidence-based education. This contributed to me writing a book – Evidence-Based School Leadership and Management: A Practical Guide – and some 220 blogposts.

Having now written around 300,000 words on all things evidence-based, I would like to make the following observations about the current state of evidence-based practice within schools. First, the 'evidence-based movement' is not going away anytime soon. We have 22 schools in the Research Schools Network; an increasing number of schools appointing school research leads; hundreds if not thousands of educational bloggers contributing to discussions about how to improve education; social media and eduTwitter providing a forum for the articulation of views; over 20 researchED conferences scheduled for 2019; the Education Endowment Foundation (EEF) spending over £4m in 2017–18 to fund the delivery of 17 projects, involving 3,620 schools and other educational settings and reaching approximately 310,000 children and young people; and finally, Ofsted using research evidence to inform their inspection framework.

Nevertheless, despite all this time, effort and commitment being put into research and evidence-based practice, there is still much to be done to ensure evidence-based practice contributes to improved outcomes for pupils. First, we need to have an honest conversation about teachers' research literacy and their consequent ability to make research-informed changes to their practice. Research undertaken by the National Foundation for Educational Research and the EEF suggests that teachers have a weak and variable knowledge of the evidence base relating to teaching and learning, and a particularly weak understanding of research requiring scientific or specialist knowledge (Nelson et al., 2017). Second, there is a distinction between the rhetoric and the reality of evidence-based practice within schools. Research undertaken for the Department for Education (Coldwell et al., 2017) identified a number of schools where headteachers and senior leaders 'talked a good game' about evidence-informed teaching, whereas the reality was that research and evidence were not embedded within the day-to-day practice of the school. Third, it's important to be aware that there is a major debate taking place amongst educational researchers about randomised controlled trials, effect sizes and meta-analysis. Indeed, as Professor Rob Coe states: 'Ultimately, the best evidence we currently have may well be wrong; it is certainly likely to change' (Coe, 2018).

And finally, if I were to offer any advice to teachers, school leaders and governors/trustees who are interested in evidence-based practice, it would be the following. Becoming an evidence-based practitioner is hard work. It doesn't happen just by reading the latest EEF guidance document or John Hattie's Visible Learning, or by spending one Saturday morning a year at a researchED conference. It requires a career-long moral commitment to challenging both your own and others' practice, critically examining 'what works' to ensure whatever actions you take bring about improvements in pupil outcomes.

Recommendations for further reading 

Brown, C. (2015) Leading the Use of Research & Evidence in Schools. London: IOE Press.

Barends, E. and Rousseau, D. M. (2018) Evidence-Based Management: How to Use Evidence to Make Better Organizational Decisions. London: Kogan Page.

Cain, T. (2019) Becoming a Research-Informed School: Why? What? How? London: Routledge.

Coe, R. (2018) 'What should we do about meta-analysis and effect size'. CEM Blog.

Coldwell, M., Greany, T., Higgins, S., Brown, C., Maxwell, B., Stiell, B., Stoll, L., Willis, B. and Burns, H. (2017) Evidence-Informed Teaching: An Evaluation of Progress in England. Research Report. London: Department for Education.

Furnham, A. (2014) 'On Your Head: A magic bullet for motivating staff', The Sunday Times, 13 July 2014.

Jones, G. (2018) Evidence-Based School Leadership and Management: A Practical Guide. London: SAGE Publishing.

Kvernbekk, T. (2016) Evidence-Based Practice in Education: Functions of Evidence and Causal Presuppositions. London: Routledge.

Nelson, J., Mehta, P., Sharples, J. and Davey, C. (2017) Measuring Teachers' Research Engagement: Findings from a Pilot Study. Report and Executive Summary. London: Education Endowment Foundation/NFER.

This blogpost first appeared as an article in issue 4 of the researchED Magazine, which was published in June 2019.

The Big Evidence Debate, effect sizes and meta-analysis - Are we just putting lipstick on a pig?

Tuesday 4 June saw a collection of 'big name' educational researchers – Nancy Cartwright, Rob Coe, Larry Hedges, Steve Higgins and Dylan Wiliam – come together with teachers, researchers and knowledge brokers for the 'big evidence debate': to what extent can the metrics from good-quality experiments and meta-analyses really help us improve education in practice, and is meta-analysis the best we can do?

Now if you turned up for the 'big evidence debate' expecting academic 'rattles' and 'dummies' to be thrown out of the pram, you were going to be disappointed, as it quickly became apparent that there was a broad consensus amongst the majority of the presenters – and I'll come back to a minority view later – which could be summarised along the following lines:

• Most of the criticisms made of effect sizes by scholars such as Adrian Simpson – a notable absentee from those participating in the debate – have been known about by researchers for the best part of forty years (for readers unfamiliar with what an effect size actually computes, a worked example follows this list).

• Many of the studies included in meta-analyses are of a low quality and more high quality and well-designed educational experiments are needed, as they form the bedrock of any meta-analysis or meta-meta analysis.

• Just because someone is a competent educational researcher does not mean they are competent at undertaking a systematic review and associated meta-analysis.

• It’s incredibly difficult, if not impossible, for even the well-informed ‘lay-person’ to make any kind of judgement about the quality of a meta-analysis.

• There are too many avoidable mistakes being made when undertaking educational meta-analyses – for example, inappropriate comparisons, file-drawer problems, intervention quality, and variation in variability.

• However, there are some problems in educational meta-analysis which are unavoidable: aptitude × treatment interactions, sensitivity to instruction, and the selection of studies.

• Nevertheless, regardless of how well they are done, we need to get better at communicating the findings arising from meta-analyses so that they are not subject to over-simplistic interpretations by policymakers, researchers, school leaders and teachers.
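For readers less familiar with the metric at the heart of this debate: a standardised effect size expresses the difference between two group means in units of their spread. Here is a minimal sketch of one common variant, Cohen's d with a pooled standard deviation, using entirely invented test scores:

```python
import statistics

def cohens_d(treatment, control):
    """Standardised mean difference using a pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    m1, m2 = statistics.mean(treatment), statistics.mean(control)
    v1, v2 = statistics.variance(treatment), statistics.variance(control)  # sample variances
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Invented scores, for illustration only.
print(round(cohens_d([72, 68, 75, 80, 70], [65, 70, 62, 68, 66]), 2))
```

Several of the criticisms summarised above bite precisely here: the same intervention can yield very different values of d depending on what sits in the denominator – which sample, which test, and which population's variability.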

Unfortunately, there remains a nagging doubt: if we do all these things – better original research, better meta-analysis and better communication of the outcomes – then all that we may be doing is 'putting lipstick on a pig'. In other words, even if we make all these changes and improvements, meta-analyses in themselves do not tell practitioners much, if anything, about what to do in their own context and setting. Indeed, Nancy Cartwright argued that whilst a randomised controlled trial may tell you something about 'what worked' there, and a meta-analysis may tell you something about what worked in a number of places, they cannot tell you anything about whether 'what worked there' will 'work here'. She went on to describe randomised controlled trials and meta-analyses as being 'like the small twigs in a bird's nest. A heap of these twigs will not stay together in the wind. But they can be sculpted together in a tangle of leaves, grass, moss, mud, saliva, and feathers to build a secure nest.' (Cartwright, 2019, p. 13)

As such, randomised controlled trials and meta-analyses should make up only a small proportion of educational research and should not be over-invested in. Instead, a whole range of approaches should be employed – for example, case studies, process tracing, ethnographic studies, statistical analysis and the building of models.

Given the above, what are the implications of the 'big evidence debate' for teachers? Well, if we synthesise the recommendations for teachers from both Dylan Wiliam and Nancy Cartwright, it's possible to come up with six questions teachers and school leaders should ask when trying to use educational research to bring about improvement in schools.

1. Does this ‘intervention’ solve a problem we have?

2. Is our setting similar enough, in ways that matter, to the other settings in which the intervention appears to have worked?

3. What other information can we find – be it from other fields and disciplines outside education, or from our own knowledge of our school and pupils – so that we can derive our own causal model and theory of change for how the intervention could work?

4. What needs to be in place for the theory of change to work in our school?

5. How much improvement will we get? What might get in the way of the intervention so that good effects are negligible? Will other things happen to make the intervention redundant?

6. How much will it cost?


Further reading

Cartwright, N. (2019) 'What is meant by "rigour" in evidence-based educational policy and what's so good about it?', Educational Research and Evaluation.

Higgins, S. (2018) Improving Learning: Meta-analysis of Intervention Research in Education. Cambridge: Cambridge University Press.

Simpson, A. (2019) 'Separating arguments from conclusions: the mistaken role of effect size in educational policy research', Educational Research and Evaluation.

Wiliam, D. (2019) 'Some reflections on the role of evidence in improving education', Educational Research and Evaluation, DOI: 10.1080/13803611.2019.1617993.