The ongoing debate about the usefulness of effect sizes.

Last weekend saw researchED Durham – #rEDDurham18 – provoke a debate on Twitter about the usefulness of effect sizes within education. This debate involved many individuals, including amongst others @dylanwiliam, @Kris_Boulton, @HuntingEnglish, @SGorard, @profbeckyallen, @dodiscimus, and @tpltd.  Now, for the purposes of transparency, I need to be upfront about my own role in provoking this debate, as I had co-presented a session with Professor Adrian Simpson of Durham University at researchED Durham, where we argued that there are all sorts of difficulties in using effect sizes as a measure of the effectiveness of an educational intervention, which may have made a small contribution to the discussion on Twitter.

However, even when you are part of an online discussion or thread – especially on Twitter – the flow and complexity of the discussion can be hard to follow.  As a result, the discussion can sometimes become a bit disjointed and go off in various directions, resulting in the repeated articulation of individuals’ competing claims at the expense of the articulation of the various elements of the whole argument.  So with this in mind, I’m going to try and outline an argument – using Toulmin’s structure of arguments – about the use of effect sizes in making decisions about which educational interventions to pursue.  This will hopefully allow you to identify the key issues in the effect size debate and help you make up your own mind on the issue.

Toulmin and effect sizes.

The philosopher Stephen Toulmin – Toulmin (2003) – identifies six components in the layout of an argument:

·      The claim (C) or conclusion i.e.  the proposition at which we arrive as a result of our reasoning

·      The grounds or data (D), i.e. the facts we appeal to as a foundation for C – the basis from which we argue, in other words, the specific facts relied on to support a claim

·      The warrant (W), which is the general rule that allows us to infer a claim – the proposition that provides the justification, or licence, for the inference from D to C

·      Standing behind our warrant will be backing (B) – which is the body of experience and evidence that supports the warrant

·      The qualifier (Q), a word or phrase – e.g. presumably, possibly, probably – that indicates the strength conferred by the warrant

·      The rebuttals (R), which are extraordinary or exceptional circumstances that undermine the force of the supporting grounds.

Figure 1 provides a diagrammatic representation of the Toulmin structure of arguments, derived from Jenicek and Hitchcock (2005).

[Figure 1: The Toulmin structure of arguments, after Jenicek and Hitchcock (2005)]

Next we need to articulate the argument for the use of effect sizes within the Toulmin structure, and we get something like this:

[Figure 2: The argument for the use of effect sizes, laid out in the Toulmin structure]
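For readers who prefer text to a diagram, here is a minimal sketch in Python – my own illustration, not a reproduction of Figure 2 – of the effect-size argument laid out as the six Toulmin components. The particular wording of the claim, grounds, backing and rebuttals is assumed for the purposes of illustration, drawing on the discussion that follows.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ToulminArgument:
    """The six components of Toulmin's layout of an argument."""
    claim: str      # C - the conclusion we arrive at
    grounds: str    # D - the specific facts relied on to support C
    warrant: str    # W - the general rule licensing the step from D to C
    backing: str    # B - the body of evidence standing behind W
    qualifier: str  # Q - the strength conferred by the warrant
    rebuttals: List[str] = field(default_factory=list)  # R - circumstances undermining the inference

# The effect-size argument as discussed in the text (wording is illustrative, not the author's figure)
effect_size_argument = ToulminArgument(
    claim="We should adopt intervention X in our school.",
    grounds="A study of intervention X reported a larger effect size than a study of intervention Y.",
    warrant="Effect size measures the effectiveness of an intervention.",
    backing="Effect sizes reported across studies and meta-analyses.",
    qualifier="presumably",
    rebuttals=[
        "The trial samples were not equivalent.",
        "The comparison treatments were not equivalent.",
        "The outcome measures differed.",
        "Different effect size measures were used.",
    ],
)

print(effect_size_argument.warrant)
```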

As should be readily apparent, this all comes down to the warrant – i.e. that effect size measures the effectiveness of an intervention – and whether that warrant is justified.  However, Adrian Simpson states (private correspondence):

(a)   … larger effect size from a given study on intervention X than another given study on intervention Y only indicates that X is more effective than Y if:

  1. The sample on which the intervention is trialled is equivalent

  2. The alternative treatments against which X and Y are compared are equivalent

  3. The outcome measure is the same

  4. The effect size measure is the same

And even then, one can only conclude “X is on average more effective than Y on that sample, compared to that alternative on that measure”
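To see why Simpson’s fourth condition matters, here is a minimal sketch in Python – my own illustration, with made-up test scores – showing that the same two groups of results produce different numbers depending on whether the difference in means is standardised by the pooled standard deviation (Cohen’s d) or by the control group’s standard deviation (Glass’s delta).

```python
import statistics as st

def cohens_d(treatment, control):
    """Standardise the mean difference by the pooled standard deviation."""
    n_t, n_c = len(treatment), len(control)
    pooled_var = ((n_t - 1) * st.variance(treatment) +
                  (n_c - 1) * st.variance(control)) / (n_t + n_c - 2)
    return (st.mean(treatment) - st.mean(control)) / pooled_var ** 0.5

def glass_delta(treatment, control):
    """Standardise the mean difference by the control group's standard deviation."""
    return (st.mean(treatment) - st.mean(control)) / st.stdev(control)

# Invented test scores: the treatment group is more spread out than the control group
treatment = [55, 62, 70, 78, 85, 93]
control = [58, 60, 62, 64, 66, 68]

print(f"Cohen's d:     {cohens_d(treatment, control):.2f}")   # roughly 1.0
print(f"Glass's delta: {glass_delta(treatment, control):.2f}")  # roughly 2.9

# The raw data are identical in both calculations, yet the reported 'effect size'
# differs substantially because a different standardiser has been used.
```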

So where does this leave those colleagues who are interested in effect sizes and their usefulness in making decisions about which interventions to adopt or abandon within their schools?

1.     Under certain conditions it might be possible to conclude that on average intervention X is more effective than Y

2.     However, that judgment will depend very much on the quality and trustworthiness of how the research was carried out and whether it was suitable for the questions under investigation.  See Gorard, See, et al. (2017) for a discussion of scale, attrition, data quality and other threats to validity.

3.     If you are using a single study to explore whether a particular intervention might work within your context, there is a whole set of questions that you need to ask before coming to a decision to proceed. For example, see Kvernbekk (2016):

a.     Can the intervention play the same causal role here as it did there?

b.     What were the support factors necessary for the intervention to work in other settings?

c.     Are the support factors available in your setting?

Effect sizes and meta-analyses

At this stage, we have yet to examine the place of effect sizes within meta-analyses, which involves another set of issues, more than ably articulated by Wiliam (2016). However, of particular interest are the words of Gene Glass – the creator of meta-analysis – who argues that ‘the most important lessons that meta-analysis has taught us is that the impact of interventions is significantly smaller than their variability.’  Glass goes on to state: ‘Meta-analysis has not lived up to its promises to produce incontrovertible facts that would lead education policy. What it has done is demonstrate that average impacts of interventions are relatively small and the variability of impacts is great.’ (Glass, 2016). As such, context matters in significant ways, yet there is little understanding of these contextual influences, with perhaps as much as two-thirds of the variance between studies being unexplained.  In other words, meta-analysis tells you much less than you might want it to, and you need to go back to the original studies to examine the role of both causal mechanisms and support factors.
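To make Glass’s point about variability slightly more concrete, here is a toy sketch in Python – with entirely invented study effects and variances – of the standard heterogeneity statistics used in meta-analysis. A large I² indicates that most of the variation in reported effects reflects genuine differences between studies rather than sampling error.

```python
# Toy inverse-variance meta-analysis with Cochran's Q and the I^2 heterogeneity statistic.
# The effect sizes and variances below are invented purely for illustration.

effects   = [0.05, 0.15, 0.40, 0.60, 0.10, 0.55]        # standardised mean differences
variances = [0.01, 0.01, 0.012, 0.015, 0.01, 0.012]     # their sampling variances

weights = [1 / v for v in variances]                    # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviation of each study from the pooled effect
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1

# I^2: the share of total variation attributable to between-study differences
i_squared = max(0.0, (q - df) / q)

print(f"Pooled effect: {pooled:.2f}")
print(f"Q = {q:.1f} on {df} df, I^2 = {i_squared:.0%}")

# A moderate pooled effect with a high I^2 says little about any single context:
# the variability across studies matters as much as the average.
```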

And finally

Evidence-based practitioners face a major challenge in knowing when to trust the experts, or so-called experts. For me, how effect sizes are discussed in presentations, reports or research papers is an indicator of the trustworthiness and expertise of the authors or presenters. If effect sizes are discussed with no recognition or acknowledgment of their limitations, or of the circumstances under which they might allow you to draw a tentative conclusion, then this should be a warning sign about whether the material should be trusted. Be careful out there.

References

Glass, G. V. (2016). One Hundred Years of Research: Prudent Aspirations. Educational researcher. 45. 2. 69-72.

Gorard, S., See, B. and Siddiqui, N. (2017). The Trials of Evidence-Based Education. London. Routledge

Jenicek, M. and Hitchcock, D. (2005). Evidence-Based Practice: Logic and Critical Thinking in Medicine. United States of America. American Medical Association Press.

Kvernbekk, T. (2016). Evidence-Based Practice in Education: Functions of Evidence and Causal Presuppositions. London. Routledge.

Toulmin, S. E. (2003). The Uses of Argument. Cambridge University Press.

Wiliam, D. (2016). Leadership for Teacher Learning. West Palm Beach. Learning Sciences International.

The school research lead and managerial attitudes and perceived barriers to Evidence-Based Practice

Last week I was thrilled to see my newly published book – Evidence-based school leadership and management: A practical guide – described by David James in the TES as “A well-ordered and refreshingly honest guide to evidence-based practice.”   So, taking ‘honesty’ as my cue, it seems sensible to look at some actual research on managerial attitudes and perceived barriers to the use of evidence-based practice (EBP). Barends, Villanueva, et al. (2017) surveyed nearly 3,000 managers in Belgium, the Netherlands, the United States, the United Kingdom and Australia and found that only 27% of respondents indicated that they often based their decisions on scientific research, with only 14% indicating that they had ever read a peer-reviewed academic journal.

Other findings were that 49% of respondents believe that scientific research is relevant to managers and consultants, with 63% disagreeing with the statement that every organisation is unique and that findings would not apply to individual organisations.  A large majority of respondents – 69% – had positive attitudes to EBP, with only a very small minority, 4%, having a negative attitude.  The majority of respondents – 58% – reported that a perceived lack of time to read research articles was the main barrier to using research findings, with 51% suggesting that managers and consultants have little understanding of academic research.  Other identified barriers to EBP included organisational climate, accessibility of research, and a lack of awareness of how to access research.  Finally, there did not appear to be a link between attitudes towards EBP and age or professional experience, although education and research experience did appear to have a moderate positive effect.

Of course, any research study has limitations and, as Barends, et al. (2017) themselves note, their study is no exception.  Limitations identified included the sample in the study not being random but based on populations where individuals identified themselves as managers.  Response rates varied between countries, with only 4% of the American sample responding, compared to 48% in the United Kingdom.  In addition, it was not clear whether respondents knew the difference between the ‘management literature’ often found in airport bookshops and peer-reviewed research published in academic journals.  Finally, various definitions of the term ‘manager’ are used in the literature.

So what lessons can those of us interested in EBP in schools learn from the study?  First, education is not alone: leaders and managers in other fields and professions are also interested in using EBP.  On the other hand, school leaders should not put business managers on a pedestal, as most respondents in the study report basing their decisions on personal experience (91%) and intuition (64%).

Second, compared to the small minority of respondents in the study (27%), a large majority (68%) of school leaders and a minority of teachers (45%) say they are using research evidence to inform decision-making: https://www.suttontrust.com/research-paper/best-in-class-2018-research/.  Given the pressures on schools to be more business-like, maybe it’s the wrong way round. Maybe business leaders and managers need to be more school-like.

Third, the barriers faced by managers in the use of EBP – time, organisational climate, access to research and research literacy – would not be unfamiliar to those involved in supporting the development of EBP within schools.  That said, given the efforts being made to support engagement with research in schools, those barriers – although they exist – may not be as high.

And finally

If you are looking for a more generic approach to evidence-based management, then I recommend that you have a look at Barends and Rousseau (2018), Evidence-Based Management: How to Use Evidence to Make Better Organizational Decisions.

PS

I’d like to thank Professor Jonathan Haslam, who commented on the first draft of this post.

References

Barends, E. and Rousseau, D. M. (2018). Evidence-Based Management: How to Use Evidence to Make Better Organizational Decisions. London. Kogan Page.

Barends, E., Villanueva, J., Rousseau, D. M., Briner, R. B., Jepsen, D. M., Houghton, E. and ten Have, S. (2017). Managerial Attitudes and Perceived Barriers Regarding Evidence-Based Practice: An International Survey. PloS one. 12. 10. e0184594.

 

Supporting teachers to be evidence-based practitioners - what do we know?

If you have any kind of interest in the development of evidence-based/informed practice (EBP) within schools, then this blogpost is for you.

Even with the worldwide interest in evidence-based practice (EBP) as a core concept within medicine and healthcare, the evidence on how best to teach evidence-based practice is weak.  In a recently published systematic review, Albarqouni, Hoffmann, et al. (2018) found that most EBP educational interventions evaluated in controlled studies tended to focus on the critical appraisal of research evidence and did not use high-quality instruments to measure the outcomes.

With this in mind, the rest of this post will examine the implications of the findings of the review for schools as they attempt to provide support and training to teachers in becoming better evidence-based practitioners.

What are the implications for evidence-based practice educational interventions within schools and other educational settings?

First, whereas in medicine there is a general understanding as to what is meant by evidence-based practice – Sackett, Rosenberg, et al. (1996) – this is not the case in education.  As Nelson and Campbell (2017) argue, there is little agreement over the precise meaning of the term, in large part because of a lack of consensus as to whether ‘research’ and ‘evidence’ are one and the same, for example (Nelson 2014); whether ‘evidence-based’ and ‘evidence-informed’ practices are fundamentally different (McFarlane 2015); and, perhaps the most intensely debated, ‘Whose evidence counts?’  That said, as Professor Rob Coe stated at the February 2017 launch event of the Chartered College of Teaching, agreeing a definition of evidence-based practice/evidence-informed practice should be possible.

Second, in medicine there would appear to be agreement about the five steps associated with being an evidence-based practitioner – Dawes, Summerskill, et al. (2005). These five steps are: translation of uncertainty into an answerable question; systematic retrieval of the best evidence available; critical appraisal of evidence for validity, clinical relevance, and applicability; application of results in practice; and evaluation of performance.  On the other hand, in education, at most there is agreement that evidence-informed practice involves multiple sources of evidence and the deployment of professional judgement (Nelson and Campbell, 2017).

Third, given the nature of education, there are going to be real challenges for advocates of evidence-based practice within education to demonstrate impact on pupil outcomes.  As such, it makes some sense to try and come up with validated instruments which can be used to measure teachers’ knowledge, skills and attitudes towards EBP. The CREATE framework – Tilson, Kaplan, et al. (2011) – provides guidance on both the assessment domains and types of assessment.  This framework could easily be amended for use in an educational context, as illustrated in Table 1 (based on Tilson, Kaplan, et al.).

[Table 1: CREATE framework assessment domains and types of assessment, adapted for an educational context (based on Tilson, Kaplan, et al., 2011)]

Fourth, given the time, effort and money being put into EBP educational interventions – not just in IEE/EEF Research Schools but in an increasing number of schools within England and across the world – perhaps attention should be given to developing guidelines on the reporting of EBP educational interventions, just as has been done in medicine with GREET (Phillips, Lewis, et al., 2016).   This is especially important as we know relatively little about the effective implementation of EBP educational interventions.  If studies under-report the details of the intervention, this will make it extremely difficult to bring together what has been learnt, how to make the most of successes, and how to avoid unnecessary failures.

Fifth, my own experience of EBP educational interventions would suggest that there is a great deal of emphasis on both accessing and interpreting research evidence, with insufficient attention being given to the challenging process of assessing and aggregating differing sources of evidence – be they practitioner expertise, stakeholder views or school data.

And finally

I’ve always been a believer that success is a case of doing simple things well – or, as Woody Allen says, ‘eighty percent of success is showing up’.  Maybe in education we are not doing the simple things well – which is making the most of what has been learnt in other disciplines.

Abstract - Albarqouni, L., Hoffmann, T. and Glasziou, P. (2018). Evidence-Based Practice Educational Intervention Studies: A Systematic Review of What Is Taught and How It Is Measured. BMC medical education. 18. 1. 177.

Background: Despite the established interest in evidence-based practice (EBP) as a core competence for clinicians, evidence for how best to teach and evaluate EBP remains weak. We sought to systematically assess coverage of the five EBP steps, review the outcome domains measured, and assess the properties of the instruments used in studies evaluating EBP educational interventions.

Methods: We conducted a systematic review of controlled studies (i.e. studies with a separate control group) which had investigated the effect of EBP educational interventions. We used citation analysis technique and tracked the forward and backward citations of the index articles (i.e. the systematic reviews and primary studies included in an overview of the effect of EBP teaching) using Web of Science until May 2017. We extracted information on intervention content (grouped into the five EBP steps), and the outcome domains assessed. We also searched the literature for published reliability and validity data of the EBP instruments used.

Results: Of 1831 records identified, 302 full-text articles were screened, and 85 included. Of these, 46 (54%) studies were randomised trials, 51 (60%) included postgraduate level participants, and 63 (75%) taught medical professionals. EBP Step 3 (critical appraisal) was the most frequently taught step (63 studies; 74%). Only 10 (12%) of the studies taught content which addressed all five EBP steps. Of the 85 studies, 52 (61%) evaluated EBP skills, 39 (46%) knowledge, 35 (41%) attitudes, 19 (22%) behaviours, 15 (18%) self-efficacy, and 7 (8%) measured reactions to EBP teaching delivery. Of the 24 instruments used in the included studies, 6 were high-quality (achieved ≥3 types of established validity evidence) and these were used in 14 (29%) of the 52 studies that measured EBP skills; 14 (41%) of the 39 studies that measured EBP knowledge; and 8 (26%) of the 35 studies that measured EBP attitude. 

Conclusions: Most EBP educational interventions which have been evaluated in controlled studies focus on teaching only some of the EBP steps (predominantly critical appraisal of evidence) and did not use high-quality instruments to measure outcomes. Educational packages and instruments which address all EBP steps are needed to improve EBP teaching.

References

Albarqouni, L., Hoffmann, T. and Glasziou, P. (2018). Evidence-Based Practice Educational Intervention Studies: A Systematic Review of What Is Taught and How It Is Measured. BMC medical education. 18. 1. 177.

Dawes, M., Summerskill, W., Glasziou, P., Cartabellotta, A., Martin, J., Hopayian, K., Porzsolt, F., Burls, A. and Osborne, J. (2005). Sicily Statement on Evidence-Based Practice. BMC medical education. 5. 1. 1.

Nelson, J. and Campbell, C. (2017). Evidence-Informed Practice in Education: Meanings and Applications. Educational Research. 59. 2. 127-135.

Phillips, A. C., Lewis, L. K., McEvoy, M. P., Galipeau, J., Glasziou, P., Moher, D., Tilson, J. K. and Williams, M. T. (2016). Development and Validation of the Guideline for Reporting Evidence-Based Practice Educational Interventions and Teaching (Greet). BMC medical education. 16. 1. 237.

Sackett, D., Rosenberg, W., Gray, J., Haynes, R. and Richardson, W. (1996). Evidence Based Medicine: What It Is and What It Isn't. Bmj. 312. 7023. 71-72.

Tilson, J. K., Kaplan, S. L., Harris, J. L., Hutchinson, A., Ilic, D., Niederman, R., Potomkova, J. and Zwolsman, S. E. (2011). Sicily Statement on Classification and Development of Evidence-Based Practice Learning Assessment Tools. BMC medical education. 11. 1. 78.

 

Why instructional coaching may not be the answer for everyone.


Thursday 15 November sees the Teacher Development Trust hold a one-day conference on coaching and how it could be used to drive school performance.  Unfortunately, as I am not able to attend the conference, I thought I’d better use some time to do a bit of reading around the subject of coaching and instructional coaching.  This seemed particularly sensible as @DrSamSims has recently described instructional coaching as the best-evidenced form of CPD.  Subsequently, I stumbled across the work of Jacobs, Boardman, et al. (2018), who undertook a research investigation to understand teacher resistance to instructional coaching.   As such, the rest of this post will:

·      Offer a definition of instructional coaching

·      Provide the abstract of Jacobs et al’s research.

·      Undertake a review of the research using the 6 A’s framework (see https://www.garyrjones.com/blog/2018/10/10/how-can-we-trust-research or http://www.cem.org/blog/how-can-we-trust-research/ )

·      Consider the implications for schools wishing to support colleagues’ professional learning and development.

Definition – Instructional Coaching

Put simply, instructional coaching involves a trained expert – be it an external coach, lead teacher or peer – working with teachers individually, to help them learn and adopt new teaching practices, and to provide feedback on performance.  This is done with the intent both to support accurate and continued implementation of new teaching approaches and to reduce the sense of isolation teachers can feel when implementing new ideas and practices.

Abstract

Research provides strong support for the promise of coaching, or job embedded professional development, particularly on improving teachers’ classroom instruction. As part of a comprehensive professional development model, 71 middle school (grades 6–8) science, social studies, and language arts teachers were assigned to an instructional coach to support their required use of a multicomponent reading comprehension approach, Collaborative Strategic Reading. In this study, we sought to better understand the factors that influence responsiveness to coaching, focusing in particular on teachers who appeared the least receptive to collaborating with a coach to support the implementation of a new practice. Results highlight the patterns and complexities of the coaching process for 20% of the teachers in our sample who were categorized as resistant to coaching, suggesting that the one-on-one model of coaching offered in this study may not be the best fit for all teachers.

 

Where's the evidence for evidence-based practice improving pupil outcomes?

A few weeks ago, in an online discussion with Dr David James – Deputy Head Academic at Bryanston School – David posed the following question: Where is the evidence that evidence-based practice has a measurable impact on learning and outcomes? In other words, which schools can point to exam results and say they have improved because of evidence-informed practice? Put another way, where is the backing for the claim that schools and teachers should use evidence – and particularly research evidence – to inform practice?  Otherwise, all we have is the assertion that the use of evidence is a good thing.

Unfortunately, at the moment, there is relatively little, if any, evidence that the use of research evidence by teachers will improve pupil outcomes (Rose, Thomas, et al., 2017).  However, this may change with the forthcoming EEF evaluation of the RISE project, which was run out of Huntington School and which is due to be published in early 2019.  Indeed, where evidence is available about the outcome of teachers’ use of research evidence, it relates to the positive impact it has on teachers – Cordingley (2015) and Supovitz (2015).  So it is within this context that I read with interest a recently published systematic review – Simons, Zurynski, et al. (2018) – on whether evidence-based medicine training improves doctors’ knowledge, practice and patient outcomes, which concludes: EBM training can improve short-term knowledge and skills among medical practitioners, although the evidence supporting this claim comes from relatively poor-quality evidence with a significant risk of bias. There is even less evidence supporting the claim that EBM training results in changes in clinicians’ attitudes and behavior, and no evidence to suggest that EBM training results in improved patient outcomes or experiences. There is a need to improve the quality of studies investigating the effectiveness of EBM training programs. (p5)

Now, if you are an advocate of evidence-based education this may appear to be quite depressing.  If medicine, where evidence-based practice first originated, has not been able to provide evidence-based medicine training to doctors which improves patient outcomes, then what chance do we have in education of being able to train teachers and leaders to use evidence-based practice to improve pupil outcomes?  Well, my own view is that we may be able to learn lessons from evidence-based medicine, which will then help us create the conditions for success within education.  That does not mean this is a given: we need to learn the right lessons, adapt them in the right way for education and then implement them in a way which allows significant adaptation within a local context.  So, to start this process, I am going to take the practice points identified by Simons, et al. (2018) and comment on their potential applicability within education and the implications they may have for different ‘players’ within the education eco-system.

The Practice Points from the systematic review

The EBM practice landscape is changing with more emphasis on patient participation, including shared decision-making.

Most doctors benefit from EBM training that is integrated into their clinical practice, and where institutional support is evident.

Whilst EBM courses for doctors demonstrate short-term improvements in knowledge, there is no strong evidence linking EBM training to changes in clinical practice or patient outcomes.

It is important to investigate whether EBM training leads to improvements in doctors’ practice behaviors that may also facilitate changes in patient outcomes and experiences.

It may be possible to use reliable measures of clinical practice and patient experiences to evaluate EBM training, such as structured practice portfolios, patient experience surveys and multi source feedback. (p1)

Implications and discussion

First, given the challenges that medicine appears to be having in getting training for evidence-based medicine to work with doctors, maybe we should not be too surprised if our first efforts to provide training for teachers in evidence-based practice do not lead to improvements in pupil outcomes.  This in turn may require us, in the short term, to reduce our expectations about what the training of teachers and leaders to use evidence-based practice can achieve.

Second, given the changes in the evidence-based medicine landscape and the increased focus on patient participation and informed decision-making, all those involved in evidence-based practice within schools may need to give consideration to the role of pupils, teachers, parents and other stakeholders in evidence-based decision-making.

Third, training designed to support the use of evidence-based practice within schools will need to be sustained.  It’s highly unlikely that training provided in ITT or professional learning is going to ‘deliver’ evidence-based practice within schools.  Rather, it is going to require an ongoing and sustained effort and cannot be just a short-term fad or this year’s priority.  This is particularly important when considering both the impact of the EEF/IEE Research Schools programme and its future development, as it may be that the underpinning model needs radical re-modelling.

Fourth, if you are a school leader and want to encourage evidence-based practice within your school, then you need to make sure sufficient support is in place to build the capacity, motivation and opportunities necessary for evidence-based practice – Langer, Tripney, et al. (2016).

Fifth, given that EEF evaluations of interventions include both process and impact evaluations, it may be that medicine has much to learn from education about evaluations using multiple sources of evidence.  On the other hand, Connolly, Keenan, et al. (2018) report that over 60% of randomised controlled trials within education tended to ignore both the context within which the intervention took place and the experience of participants.

And finally

It’s important to remember, when trying to evaluate the impact of any intervention on examination results, that around 97% of the variation in performance between year groups can be explained by changes in the cohort and how well individuals do ‘on the day’ (Crawford and Benton, 2017).  So the impact on examination results of teachers being trained in evidence-based practice is likely to be relatively small.  Indeed, it’s not enough to look at one year’s examination results; results will need to be reviewed and evaluated over a number of years.
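A rough simulation in Python – the numbers are assumptions chosen purely for illustration, not taken from Crawford and Benton – shows why a single year’s results are a poor guide: when cohort-to-cohort and ‘on the day’ noise dwarfs a genuine improvement, a before-and-after comparison of one year’s results frequently points in the wrong direction.

```python
import random

random.seed(1)

TRUE_GAIN = 0.5   # assumed genuine improvement in mean grade points per pupil
COHORT_SD = 2.0   # assumed year-to-year cohort / 'on the day' noise (much larger)
YEARS = 10_000    # number of simulated before/after comparisons

worse_despite_gain = 0
for _ in range(YEARS):
    before = random.gauss(0, COHORT_SD)             # baseline year: noise only
    after = TRUE_GAIN + random.gauss(0, COHORT_SD)  # post-intervention year: real gain plus noise
    if after < before:
        worse_despite_gain += 1

print(f"Share of single-year comparisons that look WORSE despite a real gain: "
      f"{worse_despite_gain / YEARS:.0%}")

# With noise of this size, roughly 4 in 10 single-year comparisons point the wrong way,
# which is why results need to be reviewed over several years.
```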

References

Connolly, P., Keenan, C. and Urbanska, K. (2018). The Trials of Evidence-Based Practice in Education: A Systematic Review of Randomised Controlled Trials in Education Research 1980–2016. Educational Research.

Cordingley, P. (2015). The Contribution of Research to Teachers’ Professional Learning and Development. Oxford Review of Education. 41. 2. 234-252.

Crawford, C. and Benton, T. (2017). Volatility Happens: Understanding Variation in Schools’ GCSE Results. Cambridge Assessment Research Report. Cambridge, UK.

Langer, L., Tripney, J. and Gough, D. (2016). The Science of Using Science: Researching the Use of Research Evidence in Decision-Making. London. EPPI-Centre, Social Science Research Unit, UCL Institute of Education, University College London.

Rose, J., Thomas, S., Zhang, L., Edwards, A., Augero, A. and Roney, P. (2017). Research Learning Communities: Evaluation Report and Executive Summary December 2017. London.

Simons, M. R., Zurynski, Y., Cullis, J., Morgan, M. K. and Davidson, A. S. (2018). Does Evidence-Based Medicine Training Improve Doctors’ Knowledge, Practice and Patient Outcomes? A Systematic Review of the Evidence. Medical teacher. 1-7.

Supovitz, J. (2015). Teacher Data Use for Improving Teaching and Learning. In Brown, C.  Leading the Use of Research & Evidence in Schools.  London. Bloomsbury Press.