As we approach the end of the academic year, you will no doubt be giving some thought to the new practices or interventions you wish to adopt this coming September. Unfortunately, we know that once implemented many of these interventions will not live up to their initial promise. Perhaps the evidence supporting the intervention was not that robust and its benefits were overstated; perhaps there isn't the external or internal expertise available to support the implementation of the intervention; perhaps the intervention doesn't fit with other processes and practices within the setting; or perhaps the intervention runs counter to the existing school culture and is met with resistance from some of the people who need to implement it.
However, it is possible to increase your chances of choosing an intervention that not only appears to work in other settings but also has a good chance of working in yours. One way of increasing your chances of successfully implementing an intervention is to make sure that, before the intervention is implemented, you undertake some form of structured evaluation of both the intervention and your setting. To help you do this, I'm going to suggest that you have a look at something known as the Hexagon Tool (Metz and Louison, 2019), which will help you undertake a structured appraisal of:
1. the research evidence backing claims for the intervention's effectiveness;
2. whether there is a clear and usable intervention which can be adapted to the local context;
3. the support available to help implement the intervention;
4. whether the intervention meets the needs of your school/setting;
5. whether the intervention is a good fit with other processes and practices within your school/setting;
6. whether your school/setting has the capacity to implement the intervention.
Figure 1 The Hexagon Tool
Metz and Louison go on to provide guidance on when to use the tool – ideally in the early stages of the decision-making process about whether to adopt the intervention. They also provide guidance on how to use the tool: the tasks which need to be completed before the tool is actually used, and what needs to be done while the tool is being used.
Of particular use is that they provide both a set of questions and an associated rating scale to help you make judgements about each of the six elements. For example, for the 'evidence' component they pose the following questions.
1. Are there research data available to demonstrate the effectiveness (e.g. randomized trials, quasi-experimental designs) of the program or practice? If yes, provide citations or links to reports or publications.
2. What is the strength of the evidence? Under what conditions was the evidence developed?
3. What outcomes are expected when the program or practice is implemented as intended? How much of a change can be expected?
4. If research data are not available, are there evaluation data to indicate effectiveness (e.g. pre/post data, testing results, action research)? If yes, provide citations or links to evaluation reports.
5. Is there practice-based evidence or community-defined evidence to indicate effectiveness? If yes, provide citations or links.
6. Is there a well-developed theory of change or logic model that demonstrates how the program or practice is expected to contribute to short term and long term outcomes?
7. Do the studies (research and/or evaluation) provide data specific to the setting in which it will be implemented (e.g., has the program or practice been researched or evaluated in a similar context)? If yes, provide citations or links to evaluation reports.
8. Do the studies (research and/or evaluation) provide data specific to effectiveness for culturally and linguistically specific populations? If yes, provide citations or links specific to effectiveness for families or communities from diverse cultural groups.
They suggest you use these questions to make a rating judgement based on the following 5-point scale.
5 High Evidence
The program or practice has documented evidence of effectiveness based on at least two rigorous, external research studies with control groups, and has demonstrated sustained effects at least one year post-treatment
4 Evidence
The program or practice has demonstrated effectiveness with one rigorous research study with a control group
3 Some Evidence
The program or practice shows some evidence of effectiveness through less rigorous research studies that include comparison groups
2 Minimal Evidence
The program or practice is guided by a well-developed theory of change or logic model, including clear inclusion and exclusion criteria for the target population, but has not demonstrated effectiveness through a research study
1 No Evidence
The program or practice does not have a well-developed logic model or theory of change and has not demonstrated effectiveness through a research study
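If you want to record and compare your ratings across the six elements, the scale above lends itself to a very simple summary. The sketch below is purely illustrative and is not part of the Hexagon Tool itself: the element names follow Metz and Louison (2019), but the example ratings, the function name, and the threshold for flagging weak elements are all hypothetical choices of mine.

```python
# Minimal sketch: recording Hexagon Tool ratings for the six elements
# and flagging those that may need further scrutiny. Example ratings
# below are hypothetical, not recommendations.

HEXAGON_ELEMENTS = ["Evidence", "Usability", "Supports", "Need", "Fit", "Capacity"]

def summarise_ratings(ratings, threshold=3):
    """Return the mean rating and the elements scoring below `threshold`
    on the 1-5 scale."""
    for element, score in ratings.items():
        if element not in HEXAGON_ELEMENTS:
            raise ValueError(f"Unknown element: {element}")
        if not 1 <= score <= 5:
            raise ValueError(f"Rating for {element} must be 1-5, got {score}")
    mean = sum(ratings.values()) / len(ratings)
    weak = [e for e in HEXAGON_ELEMENTS if ratings.get(e, 0) < threshold]
    return mean, weak

# Hypothetical example ratings for one candidate intervention
example = {"Evidence": 4, "Usability": 3, "Supports": 2,
           "Need": 5, "Fit": 3, "Capacity": 2}
mean, weak = summarise_ratings(example)
print(f"Mean rating: {mean:.1f}; elements below 3: {weak}")
```

A record like this also gives you the six numbers you would need to plot a 'spider diagram' of the kind mentioned below.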
A few observations
A framework such as the Hexagon Tool is extremely helpful in getting you to think about the different aspects of implementing an intervention. Not only that, it does so in a way which should allow you to summarise your evaluation in a form which is easily communicable to others, through the use of the rating scale and perhaps a 'spider diagram'. However, before you can make good use of the tool, you will probably have to make a few adjustments to some of the detailed descriptions of each of the elements and the associated questions, so that they reflect your context and system rather than the US system in which the tool was devised. In addition, it's important to remember that the Hexagon Tool is not a substitute for your professional judgement: you will still need to make a decision as to whether or not to proceed with the intervention.
Tools like the Hexagon Tool are extremely useful in helping you organise your thinking, but they are not a substitute for thinking about the intervention and whether 'what worked there' might, in the right circumstances, 'work here'.
Metz, A. & Louison, L. (2019) The Hexagon Tool: Exploring Context. Chapel Hill, NC: National Implementation Research Network, Frank Porter Graham Child Development Institute, University of North Carolina at Chapel Hill. Based on Kiser, Zabel, Zachik, & Smith (2007) and Blase, Kiser & Van Dyke (2013).