Just this week, @DrSamSims wrote a very well-argued blogpost giving four reasons why instructional coaching is the best-evidenced form of CPD, and which concludes that all schools aspiring to be evidence-based should be giving it (instructional coaching) a go. Unfortunately, Sam’s last sentence could be viewed as a tad overenthusiastic and, for three reasons, mars an otherwise excellent post. First, it is not for researchers to tell school leaders and teachers what they should or should not be doing in their schools and classrooms. What school leaders and teachers prioritise in their schools and classrooms is down to their professional judgment. Indeed, the last sentence of Sam’s blogpost is grist to the mill for the opponents of evidence-based education. Second, the role of research evidence in evidence-based practice is to provide the backing for warrants. As such, research evidence plays an indirect role in developing an argument supporting the use of an intervention (Kvernbekk, 2016). Third, even if you think that instructional coaching meets an obvious priority for your school – supporting the improvement of teaching and learning – that does not automatically mean that your school should do it. Ideally, school leaders would use a disciplined process to work out whether what worked ‘there’ is going to work ‘here’.
So in the rest of this post I am, once again, going to lean on the work of Kvernbekk (2016) to examine the process you might wish to undertake before adopting even the ‘most evidenced’ intervention.
Kvernbekk and making interventions work
Kvernbekk argues that if we want to make an intervention work, it makes sense to pay attention to a wide variety of different types and sources of evidence – some of which may be easily available, and some of which may not. Kvernbekk then goes on to describe what she thinks you need to look at, though it does not have to be done in this order.
Did the intervention in question, X – instructional coaching – work somewhere? That is, did it play a positive causal role in achieving improved outcomes for at least some of the individuals in the study group(s)? (You are entitled to trust this evidence if it comes from a reliable source, say the Education Endowment Foundation.)
Remember, what you are after is the causal claim – the use of instructional coaching will improve pupil outcomes – not the quantitative evidence supporting the claim.
When looking at effect size, don’t forget that it is a statistical entity and only informs you of the aggregate result. A positive aggregate result is perfectly compatible with a negative result for some of the individuals in the study group. In other words, instructional coaching may have beneficial outcomes for some pupils, but not all. Indeed, any calculation of effect sizes in a meta-analysis may be influenced by publication bias in favour of research demonstrating positive outcomes.
Next, you have to look at your own local context. Which factors govern the default production of the outcome, Y, here? In other words, what are the key factors influencing pupil outcomes in your school?
You then need to consider whether the intervention can play the same causal role here as it did there. How did instructional coaching bring about improvements in pupil outcomes in other settings, and can that mechanism be replicated in your school? What were the support factors necessary for instructional coaching to work in other settings?
Next, you could consider whether the support factors necessary for the intervention to play a positive causal role in improving pupil learning are in place here, or whether you can get them. What skills, knowledge, time, money, space and attention are necessary to support instructional coaching?
You then could look at the concrete manifestations of the abstract principles or factors there to make sure you can find a feasible match here. For example, what time and resources are available instructional coaching. How many skilled instructional coaches are available.
You’ll then examine the system (context) here to see whether it is stable enough for the intervention to have time to unfold and work. You need to know the main factors influencing this stability and know how to maintain them. In other words, is the introduction and implementation of instructional coaching going to be ‘blown off course’ by other internal or external factors, for example, a re-organisation?
Next, you will appraise the permissibility of the intervention to make sure it does not violate any applicable norms. For example, are there confidentiality or informed-consent issues?
Now you’ll need to go on and consider the possible side effects, and make a judgment as to whether any such effects might outweigh the expected benefit of achieving improved outcomes for some pupils.
Finally, you need to consider whether there are other interventions or ways of doing things which would also contribute to improved pupil outcomes that are preferable to instructional coaching.
Armed with all this information, you can then make an all-things-considered judgment and say: yes, instructional coaching will most likely work here.
Some additional comments
Of course, as Kvernbekk notes, there may be other factors that get in the way of the implementation of instructional coaching. You may also conclude that the cost of accessing and developing instructional coaches is too great. In addition, taking an off-the-shelf scheme of instructional coaching and faithfully trying to replicate it in your setting will probably not work. Instead, what is required is fidelity to the basic principles of instructional coaching.
Discussion and Implications
So where does this leave us? First, in all likelihood instructional coaching is going to be relevant to your school, but that does not automatically mean you should do it. Second, getting highly skilled instructional coaches working directly with teachers on improving teaching and learning is more likely to bring about improvements in pupil outcomes than forms of CPD which are less directly connected to pupil outcomes – though how surprising is that! Third, whatever form of CPD you adopt within your school, make sure there is a well-thought-through and clearly articulated logic model for the intervention (Knowlton and Phillips, 2013). Finally, if we can’t make the process of experienced and skilled teachers directly supporting other colleagues bring about improved pupil outcomes, then we as a profession have a huge problem.
In future posts I’ll be looking at how Kvernbekk uses Toulmin’s structure of arguments to show how research evidence plays only an indirect role in decisions arising from evidence-based practice.
Knowlton, L. M. and Phillips, C. (2013). The Logic Model Guidebook: Better Strategies for Great Results (Second Edition). San Francisco, CA: SAGE.
Kvernbekk, T. (2016). Evidence-Based Practice in Education: Functions of Evidence and Causal Presuppositions. London: Routledge.