One of the benefits of the Christmas and New Year’s break is that it gives you the opportunity to do some reading and then have the time to give some thought to what you have just read. Over the Christmas break I have been reading Professor Steve Higgins’s fascinating new book Improving Learning: Meta-analysis of Intervention Research in Education, in which I came across a statistic that isn’t often seen in educational texts – the Number Needed to Treat (NNT), renamed for this post as the Number Needed to Teach (NNTCH) – which, for a given intervention, shows how many pupils need to be ‘taught’ in the intervention group for there to be one more favourable outcome compared to the control group.
In thinking about the NNTCH it soon became apparent that it is potentially quite useful, as it can help you quantify how many pupils might benefit from an intervention – which is particularly important if you are trying to make a case for introducing the intervention – as it makes it easier to demonstrate to colleagues the intervention’s potential impact. The NNTCH allows you to support your argument with reference to the number of pupils who may benefit, rather than referring to some abstract statistic such as effect size. That said, the NNTCH is not without its problems, which I will discuss later. So to help you make the most of the NNTCH in your decision-making, the rest of this post will:
Explain how to calculate the NNTCH
Look at the relationship between effect sizes and NNTCH
Examine how the NNTCH can help you calculate the average cost per pupil benefiting from an intervention
Make some tentative observations about the usefulness of the NNTCH
How do you calculate the NNTCH?
The calculation of the NNTCH is relatively straightforward.
Say we have a control group of 100 pupils
50 (50%) pupils score higher than the average score for the control group as a whole.
Let’s say we have an intervention group of 100 pupils
66 (66%) pupils in the intervention group score higher than the average score for the control group (an effect size of d = 0.4 SD)
We now calculate the reciprocal of the difference between the percentage of pupils in the intervention group scoring higher than the control-group average (66%) and the percentage of pupils in the control group scoring higher than that same average (50%).
So the NNTCH = 1 / (66% - 50%) = 1 / 0.16 = 6.25, which we round up to the nearest whole number: 7 pupils.
That is, we need to teach 7 pupils if we want 1 pupil to benefit from the intervention.
Alternatively, we can think of the NNTCH along these lines: if we put 100 pupils through the intervention, 16 more pupils are likely to experience a favourable outcome – i.e. score higher than the average in the control group – than if they had been in the control group.
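For readers who prefer code to prose, the calculation above can be sketched in a few lines of Python (my own illustration, not from the book; the function name is made up for this post):

```python
import math

def nntch(intervention_pct, control_pct):
    """Number Needed to Teach: the reciprocal of the difference between
    the percentage of intervention-group pupils scoring above the
    control-group average and the percentage of control-group pupils
    who do, rounded up to the nearest whole pupil."""
    gain = (intervention_pct - control_pct) / 100
    return math.ceil(1 / gain)

# Worked example from the text: 66% vs 50% gives 1 / 0.16 = 6.25, rounded up
print(nntch(66, 50))  # 7
```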
What’s the relationship between effect sizes and the NNTCH?
We can work out the relationship between effect sizes and the NNTCH by ‘mashing’ together two resources. First, we can use the work of Coe (2002), who produced a table showing the relationship between effect size and the percentage of an intervention group scoring above the average of the control group. Second, we can use this table to create a number of simulated interventions with different effect sizes and use an easily accessible online NNT calculator to work out the associated NNTCH for each effect size. This allows us to produce a table showing the relationship between a particular effect size and the NNTCH.
This now means that for a given effect size we can work out how many pupils need to be exposed to the intervention in order to obtain one additional favourable outcome. For example, say we have an intervention where the effect size is 0.2 SD; this means that 58% of the pupils in the intervention group have a score greater than the average score in the control group. In other words, if 100 pupils were to go through the intervention, 8 additional pupils are likely to experience a favourable outcome, compared with what would have happened had they been in the control group. Using the NNT calculator, this converts to an NNTCH of 13 pupils.
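If we assume normally distributed scores (the same assumption that underpins Coe’s table), this conversion from effect size to NNTCH can be reproduced directly with Python’s standard library, with no need for an online calculator. The function name is my own; the figures it produces match those quoted above:

```python
import math
from statistics import NormalDist

def effect_size_to_nntch(d):
    """Convert an effect size d (in SD units) into the NNTCH, assuming
    normally distributed outcomes. The normal CDF at d gives the
    proportion of the intervention group scoring above the
    control-group mean; subtracting the control group's 50% gives the
    absolute gain, and its reciprocal (rounded up) is the NNTCH."""
    above_control_mean = NormalDist().cdf(d)  # e.g. d = 0.2 -> ~0.58
    gain = above_control_mean - 0.5
    return math.ceil(1 / gain)

for d in (0.1, 0.2, 0.3, 0.4, 0.5):
    print(d, effect_size_to_nntch(d))  # 0.2 -> 13, 0.4 -> 7
```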
Calculating the cost per pupil of the intervention to aid decision-making
It seems to me that one of the major benefits of calculating the NNTCH is that it can help you make more informed judgments about the costs and benefits of an intervention. To illustrate this, I’m going to use an example based upon the EEF’s Thinking, Doing, Talking Science – Evaluation Report – Hanley, Slavin, et al. (2016). Needless to say, I won’t go into the full details of the intervention, which can be found here – other than to say its focus was on making Y5 science lessons in primary schools more practical, creative and challenging. Let’s now look at some of the ‘numbers’ associated with the intervention.
Intervention cost per school: £1000
Number of pupils per school involved: 50
Average cost per pupil: £20
Effect size of the intervention: 0.22
Number Needed to Teach: 13 (see table)
Number of pupils likely to benefit per school: 4 (50/13, rounded up to the nearest whole number)
Average cost per pupil who experiences a more favourable outcome: £250
So suddenly an intervention which appears to cost only £20 per pupil now costs £250 per pupil who benefits – what appears to be a relatively cheap intervention per pupil becomes a lot more costly once you take into account the number of pupils who are likely to benefit from it.
On the other hand, you might have a different intervention which at first glance appears more expensive per pupil – but which may be more cost-effective once you take into account how many pupils benefit from it.
Cost per school: £1500
Number of pupils involved: 50
Average cost per pupil: £30
Effect size: 0.4
Number Needed to Teach: 7 (6.25 before rounding; see table)
Number of pupils likely to benefit per school: 8 (50/6.25)
Average cost per pupil who experiences a more favourable outcome: £187.50
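The two worked examples can be put side by side with a short calculation (a sketch of the arithmetic above; note that the second intervention uses the unrounded NNTCH of 6.25, which is what the £187.50 figure implies):

```python
import math

def cost_per_benefiting_pupil(cost_per_school, pupils_per_school, nntch):
    """Average cost per pupil who experiences a more favourable outcome."""
    benefiting = math.ceil(pupils_per_school / nntch)  # pupils who benefit
    return cost_per_school / benefiting

# Intervention A: £1000 per school, 50 pupils, NNTCH 13 (effect size 0.22)
print(cost_per_benefiting_pupil(1000, 50, 13))    # 250.0
# Intervention B: £1500 per school, 50 pupils, NNTCH 6.25 (effect size 0.4)
print(cost_per_benefiting_pupil(1500, 50, 6.25))  # 187.5
```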
The benefit of this calculation is that although the second intervention appears to be £500 more expensive, it works out £62.50 cheaper per pupil who benefits. In other words, this second intervention would appear to be more cost-effective than a cheaper intervention with a larger NNTCH.
Some observations about the usefulness of the NNTCH
The NNTCH is relatively easy to calculate and is quite a simple way of working out how many pupils need to experience an intervention before a single additional pupil experiences a more favourable outcome than if they had been in the control group. It also helps you make a more realistic estimate of the average cost of the intervention per additional pupil who experiences a more favourable outcome. As such, it provides an additional reference point when trying to work out the costs and benefits of an intervention.
On the other hand, the NNTCH is not without its problems. It assumes that a favourable outcome means additional pupils doing better than the ‘average’ in the control group. Yet a pupil in the intervention group may do better than they would have done in the control group but still not score better than the control-group average. In addition, when calculating the NNTCH, confidence intervals for the NNTCH should also be calculated. We also need to take into account that for some pupils the intervention may have a detrimental impact, and they may perform worse than if they had been in the control group. Indeed, in medicine attention is also paid to the Number Needed to Harm when introducing new interventions.
The view you take on the NNTCH will depend upon many factors, though one is especially important: your stance on the usefulness of effect sizes and their associated assumptions.
Coe, R. (2002). It’s the Effect Size, Stupid: What Effect Size Is and Why It Is Important. Paper presented at the British Educational Research Association annual conference, 2002.
Hanley, P., Slavin, R. and Elliott, L. (2016). Thinking, Doing, Talking Science: Evaluation Report and Executive Summary – Updated July 2016. London: Education Endowment Foundation.
Higgins, S. (2018). Improving Learning: Meta-Analysis of Intervention Research in Education. Cambridge: Cambridge University Press.