The school research lead and making the most of supporting evidence-based practice in schools

School research leads across the country are trying to encourage the use of evidence-based practice. No doubt lots of different interventions - be it lesson study, joint practice development, journal clubs, conferences, seminars or disciplined inquiry - have been introduced. Alternatively, the school may be involved in research studies looking at ways of developing evidence use: Hammersley-Fletcher, Lewin, et al. (2015), Griggs, Speight, et al. (2016), Speight, Callanan, et al. (2016) and Brown (2017). So to make the most of all this activity, and to ensure that colleagues learn from both the success and failure of others, it is sensible to use a basic common structure to report on educational interventions designed to support evidence-informed/based practice within schools.

The GREET check-list

The GREET check-list - Phillips, Lewis, McEvoy, Galipeau, Glasziou, Moher, et al. (2016) - was developed to provide guidance on the reporting of educational interventions for evidence-based practice within medicine. The check-list was the product of a systematic review, a Delphi survey and three consensus discussions, with the result being a 17-item check-list. Guidance on how to complete the GREET check-list has been provided by Phillips, Lewis, McEvoy, Galipeau, Glasziou, Hammick, et al. (2016), and this guidance has been used to develop an exemplar report of an evidence-based educational intervention – journal clubs.

Journal Clubs

1. INTERVENTION: Provide a brief description of the educational intervention for all groups involved [e.g. control and comparator(s)].

The introduction of a journal club – facilitated by the school research lead - for teaching and other staff who wished to attend the sessions.

2. THEORY: Describe the educational theory(ies), concept or approach used in the intervention.

If teachers are ‘exposed’ to research, this will ultimately bring about changes in teaching practice, resulting in improved learning outcomes for pupils.

3. LEARNING OBJECTIVES: Describe the learning objectives for all groups involved in the educational intervention.

  • To develop the reading habits of the participants

  • To improve participants’ knowledge of relevant educational research

  • To help develop participants’ skills in critically appraising research and applying it to teaching

4. EBP CONTENT: List the foundation steps of EBP (ask, acquire, appraise, apply, assess) included in the educational intervention.

The core content focused on appraising educational research

5. MATERIALS: Describe the specific educational materials used in the educational intervention. Include materials provided to the learners and those used in the training of educational intervention providers.

Attendees were directed towards Chartered College of Teaching resources designed to give brief summaries of different types of research reports, including how to go about reading research.

6. EDUCATIONAL STRATEGIES: Describe the teaching / learning strategies (e.g. tutorials, lectures, online modules) used in the educational intervention.

Seminars led by the school research lead

7. INCENTIVES: Describe any incentives or reimbursements provided to the learners.

A group of ten staff – out of a possible 100 eligible staff – decided to attend the journal club.

Attendees at the sessions were provided with light refreshments – tea, coffee and biscuits.

8. INSTRUCTORS: For each instructor(s) involved in the educational intervention describe their professional discipline, teaching experience / expertise. Include any specific training related to the educational intervention provided for the instructor(s).

The sessions were facilitated by the school research lead, who had recently completed an MA in Education.

9. DELIVERY: Describe the modes of delivery (e.g. face-to-face, internet or independent study package) of the educational intervention. Include whether the intervention was provided individually or in a group and the ratio of learners to instructors.

In October 2018 the school research lead conducted an introductory session on how to appraise educational research. In subsequent sessions the school research lead facilitated a structured discussion on the reading scheduled for that session.

10. ENVIRONMENT: Describe the relevant physical learning spaces (e.g. conference, university lecture theatre, hospital ward, community) where the teaching / learning occurred. 

The sessions were held in the seminar room – located within the school library. 

11. SCHEDULE: Describe the scheduling of the educational intervention including the number of sessions, their frequency, timing and duration

A total of six sessions were held, with a session being held every half-term. Each session took place on a Wednesday at 4.00 pm and lasted approximately 45 minutes. The intervention was implemented over the course of the 2018-19 academic year.

12. Describe the amount of time learners spent in face to face contact with instructors and any designated time spent in self-directed learning activities.

Participants spent approximately 4 ½ hours in the sessions, with another 4 ½ hours spent reading materials prior to the sessions. 

13. Did the educational intervention require specific adaptation for the learners? If yes, please describe the adaptations made for the learner(s) or group(s).

Some participants had little or no knowledge of educational research and they were paired with other participants who had recently participated in post-graduate study.

14. Was the educational intervention modified during the course of the study? If yes, describe the changes (what, why, when, and how).

It had been intended to look at a key text during each session. It soon became apparent that participants were not able to complete the required reading. The texts subsequently used were primarily articles from the Chartered College of Teaching journal Impact.

15. ATTENDANCE: Describe the learner attendance, including how this was assessed and by whom. Describe any strategies that were used to facilitate attendance.

On average, only six of the ten staff attended each session. Two participants attended all six sessions, while two participants attended only two sessions. Records of attendance were kept by the school research lead.

16. Describe any processes used to determine whether the materials and the educational strategies used in the educational intervention were delivered as originally planned.

The school research lead undertook research into how journal clubs had been successfully run in both medicine and schools, and devised the sessions based on this reading.

17. Describe the extent to which the number of sessions, their frequency, timing and duration for the educational intervention was delivered as scheduled

All the sessions were delivered as scheduled


Whilst in medicine there is some consensus on the competences associated with evidence-based practice - Dawes, Summerskill, et al. (2005) - this does not appear to be the case in education. As such, the check-list may or may not be relevant to education. And of course, it provides only the sketchiest of outlines of how the intervention was implemented, and no data on the impact of the intervention on pupil outcomes. Nevertheless, the check-list does provide a time-efficient way of capturing the essence of what was done, and we should never let the perfect be the enemy of the good.

And finally

If you are interested in the use of check-lists, may I suggest you have a look at the work of both Atul Gawande and Harry Fletcher-Wood.






Brown, C. (2017). Research Learning Communities: How the RLC Approach Enables Teachers to Use Research to Improve Their Practice and the Benefits for Students That Occur as a Result. Research for All. 1. 2. 387-405.

Dawes, M., Summerskill, W., Glasziou, P., Cartabellotta, A., Martin, J., Hopayian, K., Porzsolt, F., Burls, A. and Osborne, J. (2005). Sicily Statement on Evidence-Based Practice. BMC medical education. 5. 1. 1.

Griggs, J., Speight, S. and Farias, J. C. (2016). Ashford Teaching Alliance Research Champion: Evaluation Report and Executive Summary. Education Endowment Foundation.

Hammersley-Fletcher, L., Lewin, C., Davies, C., Duggan, J., Rowley, H. and Spink, E. (2015). Evidence-Based Teaching: Advancing Capability and Capacity for Enquiry in Schools: Interim Report. London. National College for Teaching and Leadership.

Phillips, A., Lewis, L., McEvoy, M., Galipeau, J., Glasziou, P., Hammick, M., Moher, D., Tilson, J. and Williams, M. (2016). Explanation and Elaboration Paper (E&E) for the Guideline for Reporting Evidence-Based Practice Educational Interventions and Teaching (GREET). University of South Australia.

Phillips, A. C., Lewis, L. K., McEvoy, M. P., Galipeau, J., Glasziou, P., Moher, D., Tilson, J. K. and Williams, M. T. (2016). Development and Validation of the Guideline for Reporting Evidence-Based Practice Educational Interventions and Teaching (GREET). BMC medical education. 16. 1. 237.

Speight, S., Callanan, M., Griggs, J., Farias, J. C. and Fry, A. (2016). Rochdale Research into Practice: Evaluation Report and Executive Summary. Education Endowment Foundation.


The school research lead and being a bit TIDieR - making school inquiries more rigorous and useful

Over the last seven days several research-related articles in the TES have caught my eye. First, there was Joe Nutt saying 'Good research is good - but experience is better', with research often so indigestible as to be of little use to teachers. Then there was an article by Martin George asking whether ‘edtech’ is immune from rigorous research, given that the pace of technological change makes the usual evidence-gathering on effectiveness redundant. Finally, we have Professor Barbara Oakley saying that too many education researchers ‘do not do research that is founded on the scientific method,’ resulting in a crisis of replicability. In other words, when teachers and school leaders wish to use research evidence, the evidence doesn’t exist, or if it does, it’s neither comprehensible nor replicable.

Now it’s fair to say that there are no simple or easy answers to the questions these articles raise. However, at the level of the school, when teachers report on a disciplined inquiry or some form of collaborative practitioner inquiry, there is something which can be done - use a reporting checklist - to improve the quality of reporting, and in doing so make the research more accessible and useful to both themselves and colleagues.

One such checklist is the TIDieR (Template for Intervention Description and Replication) checklist - Hoffmann, Glasziou, et al. (2014) - which I’ve adapted to report on a school-based intervention which provides one-to-one support for pupils studying GCSE English.

TIDieR Checklist – One-to-one support for pupils studying GCSE English

1.     NAME – Provide the name or the phrase which describes the intervention.

  • Additional one-to-one support for pupils studying GCSE English.

2.     WHY – Describe any rationale, theory or goals of the essential elements of the interventions

  • The provision of additional support may lead to an improvement in individuals’ performance in GCSE English examinations, with more pupils gaining grade 4 or better.

3.     WHO – Describe the participants and how they were selected for the intervention

  • The participants were Y11 pupils in a mixed-sex comprehensive school, where examination results are consistent with national averages, and with below-average numbers of pupils receiving the pupil premium.

  • Twenty pupils - out of a total of 150 pupils studying GCSE English - were identified by English teachers as being on the grade 5/4 borderline for GCSE English, and were asked to attend the activities associated with the intervention. The twenty pupils included 12 boys and 8 girls.

4.     WHAT - Materials: Describe any physical or informational materials used in the intervention, including those provided to participants or used in intervention delivery or in training of intervention providers. Provide information on where the materials can be accessed (e.g. online appendix, URL).

  • Existing teaching resources were used, with teachers pooling resources. Additional resources were also created to respond to specific teaching problems as they emerged; these were also shared.

5.     WHAT : Procedures: Describe each of the procedures, activities, and/or processes used in the intervention, including any enabling or support activities.

  • The night before their scheduled session all pupils involved in the intervention received a text message reminding them of the time and place of their support session.

6.     WHO PROVIDED: For each category of intervention provider – teachers, pastoral support, teaching assistant etc describe their expertise, backgrounds and any specific training given.

  • All five teachers within the English Department provided the one-to-one support to pupils. No additional training was provided.

7.     HOW: Describe the mode of delivery of the intervention – large group teaching, small group teaching, one to one, online etc.

  • Additional support was provided on a one-to-one basis to individual pupils.

8.     WHERE: Describe the location where it occurred, any necessary infrastructure or other features

  • The sessions were provided each day (Monday – Thursday) after school, between 4.00 pm and 4.45 pm, and were held in individual teachers’ base rooms.

9.     WHEN and HOW MUCH – Describe the number of times the intervention was delivered and over what period of time including the number of sessions, their schedule, and their duration

  • Each pupil was scheduled to receive 10 sessions, spread over 12 weeks, commencing in February and ending in the middle of May. Each session was expected to last 45 minutes.

10.  TAILORING: Was the intervention planned to be personalised or adapted for the needs of a particular group? If so, describe what, why, when and how.

  • Those pupils allocated to the programme were provided with a personalised programme of work – which was devised after discussion between the pupil, the class teacher and the teacher providing additional support. 

11.  MODIFICATIONS If the intervention was modified during the course of the study, describe the changes – what, why, how and when.

  • Due to staff absence – one member of staff was absent for the period of the intervention – those sessions were delivered by a teaching assistant.

12.  HOW WELL : Planned : If intervention adherence or fidelity was assessed, describe how, and describe any strategies that were used to maintain or improve fidelity.

  • It was hoped that pupils would attend on average 8 sessions.  Where sessions were missed, emails were sent to the both the GCSE English teacher and group tutor to ask them to remind pupils to attend future sessions.  Where teachers were not available to take the planned sessions, support was provided by a teaching assistant

13.  HOW WELL : Actual : If intervention adherence or fidelity was assessed, describe the extent to which the intervention was delivered as planned.

  • The mean number of sessions attended was seven. Seven pupils (35%) attended ten sessions, six pupils (30%) attended nine sessions, and three pupils attended one session or fewer. The remaining four pupils attended between six and eight sessions.

14.  OUTCOMES : Actual : What outcomes were obtained

  • 19 out of 20 pupils gained at least a grade 4 in GCSE English.

15.  DISCUSSION : What has been learnt and is relevant internally and externally

  • The provision appeared to have made an impact, as in the previous academic year only 50% of a similar group of pupils gained at least a grade 4.

  • Each member of staff involved had to commit around 30 hours of additional time to support the innovation, which was only possible due to their commitment.

  • Other activities which could have taken place in after-school meetings had to be delayed until later in the year.

  • All the staff involved were experienced and effective practitioners – the model may need to be adjusted for a different profile of staff

  • Consideration needs to be given to whether small-group support should be provided for pupils.


Of course, when you use a checklist there are drawbacks. Although the check-list might help you to report on the intervention, it still might not capture all the complexity of what has happened. Adopting a check-list may lead to a reduction in creativity in the ways in which teachers report on interventions. Adopting a check-list may also be perceived as increasing the workload of teachers.

And finally

There are no easy answers when it comes to addressing some of the problems with using research evidence. That said, regardless of whether you are someone who is producing research evidence or using it to help bring about improvements for pupils, if you are conscientious, judicious and explicit in your use of evidence, you will not go far wrong.


Hoffmann, T. C., Glasziou, P. P., Boutron, I., Milne, R., Perera, R., Moher, D., Altman, D. G., Barbour, V., Macdonald, H., Johnston, M., Lamb, S. E., Dixon-Woods, M., McCulloch, P., Wyatt, J. C., Chan, A.-W. and Michie, S. (2014). Better Reporting of Interventions: Template for Intervention Description and Replication (TIDieR) Checklist and Guide. BMJ : British Medical Journal. 348.

Humphrey, N., Lendrum, A., Ashworth, E., Frearson, K., Buck, R. and Kerr, K. (2016). Implementation and Process Evaluation (IPE) for Interventions in Education Settings: A Synthesis of the Literature. Education Endowment Foundation, London.

The school research lead and PDSA cycles - what's the evidence?

One of the challenges in commenting on evidence-based practice, particularly if you make suggestions as to how to tackle a particular issue, is making sure you have ‘research evidence’ sitting behind whatever advice you may be giving. In my recently published book – Evidence-Based School Leadership: A practical guide – I suggest that when acting on evidence it makes sense to use a succession of Plan-Do-Study-Act (PDSA) cycles. As such, I was delighted when I came across some research by Tichnor-Wagner et al. (2017) which examines how educators in the US responded to the use of PDSA cycles. So the rest of the post will:

·      Briefly describe the characteristics of a PDSA cycle.

·      Review Tichnor-Wagner et al.’s research on PDSA cycles.

·      Consider the implications for evidence-based practitioners – of whatever level – within schools

The PDSA Cycle

PDSA cycles have their origins in both quality assurance and improvement science (Deming, 1993; Langley et al., 2009; Bryk et al., 2015). Put simply, a PDSA cycle is a tool for planning, implementing, refining and improving an intervention or change, and is designed to help you answer three questions:

  • What are we trying to accomplish?

  • How will we know whether the change is an improvement?

  • What changes can we make that will result in improvement? (Langley et al., 2009)

There are four steps which are designed to be carried out repeatedly to help answer new questions as the intervention unfolds and develops:

·      Plan – a small intervention, or small test of change, to learn from, making predictions about the outcome of the intervention;

·      Do – implement the change as planned; collect data and document problems alongside unexpected observations. You also begin analysing the data in this stage;

·      Study – complete the analysis of the data and compare it with the predictions and expected outcomes. What are the lessons? Were there any unintended consequences, surprises, successes or failures?

·      Act – reflect and act upon what was learnt in the first three phases.


This process is repeated as many times as required.

Tichnor-Wagner et al.’s research on PDSA cycles

Tichnor-Wagner et al. drew upon a multi-year research project involving a comparative case study of innovation design teams in two large, urban school districts engaged in improvement work. In the study, innovation design teams were introduced to PDSA cycles to help them: first, further develop, refine, implement, and scale the designed components of an innovation; and second, build the capacity of schools and the district to engage in continuous improvement for innovations they might implement in the future. Data were collected through semi-structured interviews with members of the innovation design teams who participated in PDSA training and implementation, surveys of participants after training, and documents and field notes from observation of those trainings. Data analysis involved a first round of descriptive coding that captured examples of the local context and learning opportunities. From this first round of descriptive coding a subset of codes emerged, which were then used for a second round of coding. Processes were put in place to ensure the inter-rater reliability of the coding undertaken by the researchers (90% reliability). Subsequently, in-depth memos were produced for each of the school districts. Additional analysis then examined themes related to the will and capacity of innovation design team members to implement PDSA cycles.


In both districts, participants’ perceptions revealed that school and district practitioners saw value in PDSA.

The PDSA cycles built on work that participants were already doing, suggesting that PDSA may be an incremental rather than radical change to current school practices.

However, although participants thought PDSAs were similar to what they already did, they also felt the activity was disconnected from their daily work.

Although practitioners valued both the PDSA cycles and the innovations they were testing through them, they resisted the specific (researcher-designed) forms they had to fill out for each phase and the scheduling of when the cycles would take place, which caused frustration about PDSA.

There were problems in finding time and expertise for PDSA work, which indicates that additional resources may need to be made available to support innovation and development.

Implications for evidence-based practitioners

Drawing on Tichnor-Wagner et al.’s discussion of the findings, the following may be useful for you to consider when using PDSA cycles.

There is a lot to be said for implementing interventions which practitioners see value in, as this will contribute to their motivation to engage with and use the intervention. With that in mind, I’d recommend having a look at the work of Michie et al. (2011) and the role of motivation in their Behaviour Change Wheel.

Even though the PDSA cycle, or a version of it, may be familiar to colleagues within your school, don’t assume there is the capacity or capability to make best use of the innovation. This suggests that you need to ensure you have a sense of colleagues’ current levels of expertise prior to implementation.

If you are trying to get colleagues to use PDSA cycles, try to make sure they replace some other activity and become part of the day-to-day work of the school, rather than something that is ‘bolted on’. This also applies to anyone providing training on the use of PDSAs.

If you decide to use disciplined inquiry incorporating PDSAs to replace some element of your existing performance management (PM) processes, the problems with PM won’t disappear; they’ll in all likelihood just change.

You might want to give consideration to whether PDSA cycles are applicable to all forms of intervention within your school. For example, how useful is the PDSA cycle if you are looking to develop higher levels of trust within your school?

And finally 

It’s always worth keeping your eye out for research that makes you think about things you take for granted or think are relatively obvious. Even relatively simple ideas may need high levels of support to be implemented well.


Bryk AS, Gomez LM, Grunow A, et al. (2015) Learning to improve: How America's schools can get better at getting better, Cambridge, MA: Harvard Education Press.

Deming W. (1993) The new economics for industry, education, government. Cambridge, MA: Massachusetts Institute of Technology. Center for Advanced Engineering Study.

Langley GJ, Moen R, Nolan KM, et al. (2009) The improvement guide: a practical approach to enhancing organizational performance, San Francisco: John Wiley & Sons.

Michie S, Van Stralen MM and West R. (2011) The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implementation Science 6: 42.

Tichnor-Wagner A, Wachen J, Cannata M, et al. (2017) Continuous improvement in the public school context: Understanding how educators respond to plan–do–study–act cycles. Journal of educational change 18: 465-494.


Can the field of bio-ethics help us think about the ethics of teacher-led research and evidence-based practice in schools?

In previous blogs I have tried to get some kind of grip on the ethical issues associated with both school-led research and evidence-based/informed practice within schools. This has led me to do some reading on ‘bio-ethics’ and the general ethical principles used in the caring professions, to see if they can shed some light on how to conduct ethical evidence-based practice. Recently, I read an article by Hersch (2018), who borrows two concepts from bioethics - clinical equipoise and therapeutic misconception - and applies them to ‘research’ conducted by teacher researchers. So the rest of this post will explore in more detail what is meant by the terms educational misconception and educational equipoise, and will then go on to examine the implications for teachers and schools.

Educational misconception

In the context of medicine, therapeutic misconception exists where a research subject/patient has the mistaken belief that decisions about his or her care are based solely on what is best for the patient/subject. So a patient may take part in a clinical trial, be in the control group receiving a placebo, and think they are receiving the best possible care.

This type of misconception is not limited to medicine; it can also happen in teacher and school-led research. As Hersch argues, once a teacher decides to evaluate a teaching strategy or intervention they add an extra aim to their classroom, over and above teaching pupils to the best of the teacher’s abilities. This additional aim involves finding out what teaching methods work best to accomplish this. As such, the beneficiaries of the research may not be current pupils, but rather the teacher themselves, or future pupils taught by this teacher.

Hersch goes on to argue that educational misconception is a problem and states: “Considering the seriousness with which research on human subjects is conducted, and the importance placed on voluntary participation with easy opt out and informed consent, it would be a significant ethical failing if students were under the misconception that their teachers only have their learning in mind in the classroom. Of course, we are not dealing with life-and-death issues, nor even with serious consequences in students’ health, either physical or mental. Yet this does not detract from the gravity of letting students continue with their studies under an unnecessary misconception” (p. 8).

Educational equipoise

In the context of medicine, clinical equipoise arises where there is an honest professional disagreement amongst experts about the most appropriate treatments. In other words, there is no consensus about the pluses and minuses of the various treatments.  As a result a clinical trial may be designed to resolve, if possible, the disagreement.

This notion of equipoise has found its way into education. For example, Coe, Kime, et al. (2013), in the EEF’s DIY Evaluation Guide, state:

Ethical evaluations start from a position of 'equipoise', where we do not know for certain what works best. It can be argued that it is unethical not to try and establish which is more effective, particularly if the intervention is going to be repeated or rolled out more widely… It is important to explain that an evaluation of a new possible method is taking place, not an evaluation of a better method - as if we knew it was better, we’d be doing it already.

Hersch then goes on to identify three difficulties in translating the notion of clinical equipoise to educational research. First, teaching lacks an ethical framework which goes ‘beyond the ethical requirements for people in general’. Indeed, one of the ironies of teaching is that if a teacher were to introduce a new approach and did not rigorously evaluate it, this would be within the bounds of acceptable teaching practice. On the other hand, a teacher who sets out to rigorously evaluate their practice is probably going to be held to a higher ethical standard.

Second, the chances for harm may be smaller in an educational context than in a clinical context, but that does not mean the potential for pupils being harmed should be ignored.

As Hersch states: At worst, students learn a little less than they could, so is that really a worry? But it is a worry when considering the ethics of SoLT (scholarship of learning and teaching).  Just because there are worse harms to which people can be subjected in other contexts, and just because some teachers do a poor job teaching does not mean that those who seek to research their teaching need not care about the harms they inflict, minor as they might be.

Third, Hersch raises the point about who the community of experts is, particularly when there are disagreements about the success or otherwise of an intervention. Is it researchers who work in higher education institutions or other bodies? Is it the professional bodies or associations of teachers? Are the experts within the senior leadership teams of a school? Is it the subject experts who work in a department within a school? It is, perhaps, another indication of how far we are from being a research-informed profession that such seemingly simple questions are so difficult to answer.


So how can schools and teachers meet some of these challenges?

·      Teachers should consider making it clear to pupils participating in a study what the aims of the study are and who the intended beneficiaries are.

·      Teachers have an ethical obligation to keep up with the latest research, in particular where there appears to be an emerging consensus, so as to make sure they are using the most effective interventions; otherwise, there may be an issue around educational equipoise.

·      The school research lead has an important role to play as a potential gatekeeper as to ‘what works, for whom, to what extent, where, and for how long’

And finally

Perhaps, before thinking about undertaking teacher-led or school-led research or practitioner inquiry, time could be spent thinking about the ethical values which inform the work of the school. If this is done, so many other things might become that much easier to manage.


Coe, R., Kime, S., Nevill, C. and Coleman, R. (2013). The DIY Evaluation Guide. London: Education Endowment Foundation.

Hersch, G. (2018). Educational Equipoise and the Educational Misconception: Lessons from Bioethics. Teaching & Learning Inquiry. 6. 2. 3-15.

How to stop doing things or how not to start doing things in the first place - and the APEASE framework

In a recent article in the TES, Keziah Featherstone (@keziah70) advocates “… stopping as many things as you can this term and not introducing new things, even if they seem like a really good idea … focus on the core business of teaching and learning and making that as easy and effective as you can. Do not exhaust yourself, your students or your staff by trying something new and glittery. Just because it works somewhere else, or in a book, doesn’t mean it’ll work for you.”

Personally, I’m not sure you can totally stop doing new things – though what you can do is make sure that any idea, innovation or intervention is appropriately evaluated before being introduced. That said, it is necessary to ensure that existing practices are subject to review, particularly if it is not clear whether they have a positive impact on pupil learning. To help you do this I’m going to suggest that you use a simple set of six criteria (APEASE), developed by Michie, Atkins, et al. (2014), when designing or evaluating a new intervention. The APEASE criteria are detailed in Table 1, with the descriptions amended to show how the criteria can also be used to evaluate existing practices.

Table 1. The APEASE criteria for designing and evaluating interventions


Affordability

An innovation is affordable if, within the relevant budget, it can be accessed by all those pupils who might benefit from it. For existing practices: if you weren’t already doing it, would you be able to find the money? And if you could, would you want to use it for this purpose?


Practicability

Can the intervention be implemented by effective teachers as part of their day-to-day practice? Or does the intervention require the ‘best’ staff, highly trained and supported by extensive resources? Are current practices ‘soaking up’ extensive specialist resources?

Effectiveness and cost-effectiveness

What will be the effect size of the intervention when implemented under normal day-to-day conditions in the school? How much will it cost per pupil? How much will it cost per pupil who benefits from the intervention? You may want to have a look at this post on the Number Needed to Teach to help you with this judgment.

For existing practices, have you any notion of the cost per pupil, or the cost per pupil who benefits from the practice?


Acceptability

Acceptability refers to the extent to which an intervention is judged to be appropriate by relevant stakeholders. What is acceptable to teachers may not be acceptable to parents; what is acceptable to the senior leadership team may not be acceptable to the school’s governing body or trustees.

Are there some existing practices which, at best, are only marginally acceptable to stakeholders? Can these practices be stopped without upsetting key stakeholders?


Side-effects and safety

An intervention may be effective and practicable, but have unwanted side-effects or unintended consequences. These need to be considered when deciding whether or not to proceed, as they are often overlooked. You might want to have a look at the work of Zhao (2018), who explores the issue of side-effects in education.

Are you aware of the negative side-effects of an existing practice? If so, are these implicit and just taken for granted, or have they been identified and articulated?


Equity

An important consideration is the extent to which an intervention may reduce or increase the disparities between different groups of pupils. For example, Hill, Mellon, et al. (2016) demonstrate the negative impact on Y7, Y8 and Y9 pupils of a ‘superhead’ concentrating resources on Y10 and Y11 pupils.
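To make the effectiveness and cost-effectiveness judgment in the table concrete, here is a minimal sketch of the underlying arithmetic. All of the figures used (total cost, number of pupils, success rates) are hypothetical assumptions for illustration only, not data from any real school:

```python
# A sketch of the cost-effectiveness arithmetic behind 'cost per pupil' and
# 'cost per pupil who benefits'. All figures below are hypothetical:
# an intervention costing £2,000 in total, delivered to 50 pupils, where an
# assumed 60% of pupils reach the target grade with it versus 50% without.
total_cost = 2000.0
pupils = 50
success_rate_with = 0.60      # assumed success rate with the intervention
success_rate_without = 0.50   # assumed baseline success rate

cost_per_pupil = total_cost / pupils

# "Number Needed to Teach": how many pupils must receive the intervention
# for one additional pupil to benefit (1 / absolute difference in rates).
number_needed_to_teach = 1 / (success_rate_with - success_rate_without)

# Cost per pupil who actually benefits from the intervention.
cost_per_benefiting_pupil = cost_per_pupil * number_needed_to_teach

print(f"Cost per pupil: £{cost_per_pupil:.2f}")                          # £40.00
print(f"Number needed to teach: {number_needed_to_teach:.0f}")           # 10
print(f"Cost per pupil who benefits: £{cost_per_benefiting_pupil:.2f}")  # £400.00
```

On these assumed figures, an intervention that looks cheap per pupil (£40) is considerably more expensive per pupil who actually benefits (£400) - and it is the latter figure that gives the more honest basis for comparing interventions.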

And finally

No set of criteria can give you the answer as to whether to introduce a new innovation or bring an existing practice to a halt. All they can do is, hopefully, help you increase your chance of making decisions which lead to favourable outcomes for your pupils.


Hill, A., Mellon, L., Laker, B. and Goddard, J. (2016). The One Type of Leader Who Can Turn around a Failing School. Harvard business review. 20.

Michie, S., Atkins, L. and West, R. (2014). The Behaviour Change Wheel. A guide to designing interventions. 1st ed. Great Britain: Silverback Publishing.

Zhao, Y. (2018). What Works May Hurt—Side Effects in Education. New York. Teachers College Press.