The school research lead, improvement research and implementation science



This week saw the welcome announcement of the appointment of Dr Becky Allen as the director of the UCL IOE’s Centre for Education Improvement Science.  On appointment, Dr Allen said she wishes to help develop “a firmer scientific basis for education policy and practice”, drawing on methods such as laboratory experiments and classroom observation.

Now regular readers of this blog will know that I have often expressed concern over how educational researchers misuse terms associated with evidence-based practice.  So, given this new initiative in improvement science, it seems sensible to look at a definition of improvement science/research; to do this, I’ll use the work of LeMahieu et al. (2017).

Improvement Research: a definition (LeMahieu et al., 2017)

Improvement research is … about making social systems work better. Improvement research closely inspects what is already in place in social organizations – how people, roles, materials, norms and processes interact. It looks for places where performance is less than desired and brings tools of empirical inquiry to bear to produce new knowledge about how to remediate the undesirable performance. Put simply, improvement research is not principally about developing more “new parts” such as add-on programs, innovative instructional artifacts or technology; rather, it is about making the many different parts that comprise an educational organization mesh better to produce quality outcomes more reliably, day in and day out, for every child and across the diverse contexts in which they are educated.

Examples of Improvement Research/Science

  1. Networked Improvement Communities;
  2. Design-Based Implementation Research;
  3. Deliverology;
  4. Implementation Science;
  5. Lean for Education;
  6. Six Sigma;
  7. Positive Deviance.

As such, LeMahieu et al. (2017) state that all seven of the approaches share a strong “common core”. All are in a fundamental sense “scientific” in their orientation. All involve explicating hypotheses about change and testing these improvement hypotheses against empirical evidence. Each subsumes a specific set of inquiry methods and each aspires to transparency through the application of carefully articulated and commonly understood methods – allowing others to examine, critique and even replicate these inquiry processes and improvement learning. In the best of cases, these improvement approaches are genuinely scientific undertakings.

In other words, improvement research is a form of ‘disciplined inquiry’ (Cronbach and Suppes, 1969).

What Improvement Science Is Not

However, as LeMahieu et al. (2017) note, a major distinguishing feature of improvement research is what it does not attempt to do.  Improvement research is not about creating new theories or research and development.  Nor is it about seeking to evaluate existing teaching strategies or interventions via field-based trials.  Rather, improvement science is about doing more of what works, stopping what doesn’t, and making sure everything is joined up in ways which bring about improvements in a particular setting.

Given this stance, statements about the Centre for Education Improvement Science (CEIS) being about ‘laboratory experiments and classroom observations’ seem a little incongruent with the existing work in the field.

My confusion about the work of the CEIS is further compounded by a Schools Week article describing Improvement Science London, which is also based at UCL.  There, improvement science involves the recognition of “the gap between what we know and what we put into practice” and using the “practical application of scientific knowledge” to identify what needs to be done differently.  However, that could probably more accurately be described as ‘implementation science’ (admittedly a subset of improvement science).  So, let’s delve into a little more detail about what is meant by ‘implementation science’.

What is implementation science?

Barwick (2017) defines implementation science as the scientific study of methods that support the adoption of evidence-based interventions into a particular setting (e.g., health, mental health, community, education, global development).  Implementation methods take the form of strategies and processes that are designed to facilitate the uptake, use, and ultimately the sustainability – or what I like to call the ‘evolvability’ – of empirically-supported interventions, services, and policies into a practice setting (Palinkas & Soydan, 2012; Proctor et al., 2009); referred to herein as evidence-based practices (EBPs).

Barwick goes on to state that implementation focuses on taking interventions that have been found to be effective using methodologically rigorous designs (e.g., randomized controlled trials, quasi-experimental designs, hybrid designs) under real-world conditions, and integrating them into practice settings (not only in the health sector) using deliberate strategies and processes (Powell et al., 2012; Proctor et al., 2009; Cabassa, 2016).  Hybrid designs have emerged relatively recently to help us explore implementation effectiveness alongside intervention effectiveness to different degrees (Curran et al., 2012).

As a consequence, implementation science sits on the right-hand side of the following figure (taken from Barwick, 2017).




So where does this leave us?

Well, on the one hand, I am really excited that educational researchers are beginning to pay attention to work being done in fields such as improvement and implementation science.  On the other hand, I’m a bit disappointed that we are likely to make the same mistakes as we have with evidence-based practice, and not fully understand the terms we have borrowed.

Finally – this post may be completely wrong as I have relied on press releases and press reports to capture the views of the major protagonists – as such I may be relying on ‘fake news.’

References

BARWICK, M. 2017. Fundamental Considerations for the Implementation of Evidence in Practice. MelanieBarwickJourneysInImplementation [Online]. Available from: https://melaniebarwick.wordpress.com/ [Accessed 15 November 2017].

LEMAHIEU, P., BRYK, A., GRUNOW, A. & GOMEZ, L. 2017. Working to improve: seven approaches to improvement science in education. Quality Assurance in Education, 25, 2-4.

The effectiveness of lesson study has been called into question, after a £543,000 EEF study involving 181 schools and 12,200 pupils found it made no difference to Y6 pupils’ attainment in reading and mathematics.

Lesson Study is a collaborative CPD approach originating in Japan that has become increasingly popular in England in recent years.  Simply put, lesson study is a joint practice development approach to teacher learning, in which teachers collaboratively plan a lesson, observe it being taught and then discuss what they have learnt about teaching and learning.

The project found no evidence that this particular version of Lesson Study improves maths and reading attainment at KS2.  However, there is evidence that some control schools implemented approaches similar to Lesson Study, such as teacher observation.  The trial might, therefore, underestimate the impact of Lesson Study when introduced in schools with no similar activity.

So does this EEF report sound the ‘death-knell’ for Lesson Study in England?  David Weston, Chief Executive of the Teacher Development Trust, states in the TDT blog:

There are some possible options.

1. If we decided to ignore the above and assume that the pedagogical content was effective, then either:
a. Lesson Study is an ineffective mechanism in all cases, or
b. it was an ineffective mechanism in this particular case
2. If we were determined to conclude that Lesson Study is always effective (which is also not plausible), then we would conclude:
a. This implementation is flawed, or
b. This pedagogical content is definitely bad.

My suggestion would be that none of the above conclusions are supported, in my view, by any reasonable reading of this study and the wider evidence base. We also need to question the extent to which we can draw any strong conclusions from a study where so many in the control group appeared to be engaging in similar practice.

However, a report on peer lesson observation published by the EEF at the same time indicated that peer observation led to no overall improvement in combined maths and English GCSE scores for pupils of the teachers involved.  This would suggest that the concerns that the control group in the Lesson Study evaluation was enjoying improvements in pupil outcomes, offsetting the impact of Lesson Study, are possibly not warranted.

So what are school leaders and research leads to do?  First, if you are thinking about implementing Lesson Study it is worth remembering there is more than one variety of Lesson Study.  In particular, I would recommend that you have a look at the work of Sarah Seleznyov of the UCL IOE, who identifies seven components of Japanese Lesson Study, as this will allow you to make comparisons between, for want of a better phrase, ‘the original and cheap imports’.

Second, and this is more generic advice, it’s worth turning to the work of Miller et al. (2004), who suggest that when critically examining whether to implement changes which appear to be fashionable, school leaders and school research leads could usefully ask themselves the following questions.

  • What evidence is there that the new approach can produce results? Are arguments based on solid evidence from lots of schools followed over time?
  • Has the approach worked in schools similar to our own that face similar challenges?
  • Is the approach relevant to the priorities and strategies of our school?
  • Is the advice specific enough to be implemented? Do we have enough information about implementation challenges and how to meet them?
  • Is the advice practical for our school given our capabilities and resources?
  • Can we reasonably assess the costs and prospective rewards? (Amended from Miller et al. (2004), pp. 14-15)

If the answers to these questions suggest positive outcomes, it may well be that the school has identified a change which has ‘legs’.
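To make the point concrete, the questions above can be treated as a simple screening checklist. The sketch below is purely illustrative: the function name, the “all answers must be yes” decision rule and the returned structure are my own assumptions, not part of Miller et al. (2004).

```python
# A minimal sketch of the Miller et al. (2004)-style checklist as a
# screening tool. The question wording follows the list above; the
# "proceed only if every answer is yes" rule is an illustrative
# assumption, not part of the original paper.

FAD_SCREEN_QUESTIONS = [
    "Is there solid evidence of productive results from lots of schools followed over time?",
    "Has the approach worked in schools similar to our own, facing similar challenges?",
    "Is the approach relevant to our school's priorities and strategies?",
    "Is the advice specific enough to be implemented, with known implementation challenges?",
    "Is the advice practical given our capabilities and resources?",
    "Can we reasonably assess the costs and prospective rewards?",
]

def screen_initiative(answers):
    """Given one True/False answer per question, flag the questions
    that remain unresolved before committing to the change."""
    if len(answers) != len(FAD_SCREEN_QUESTIONS):
        raise ValueError("one answer per question expected")
    unresolved = [q for q, ok in zip(FAD_SCREEN_QUESTIONS, answers) if not ok]
    return {"proceed": not unresolved, "unresolved": unresolved}

# Example: strong evidence base, but the implementation advice is vague.
result = screen_initiative([True, True, True, False, True, True])
print(result["proceed"])          # False - one criterion unresolved
print(len(result["unresolved"]))  # 1
```

The value of writing the checklist down in this way is less the code itself than the discipline: every question gets an explicit answer, and any unresolved question is surfaced rather than quietly skipped.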

And finally, if there is one lesson to come out of this discussion, it is that school leaders need to actively engage in evidence-based school leadership.  Failure to do so will lead to resources being misused, time being wasted, workloads increasing and pupils not making the progress they deserve.

Reference
MILLER, D., HARTWICK, J. & LE BRETON-MILLER, I. 2004. How to detect a management fad—and distinguish it from a classic. Business Horizons, 47, 7-16

School Research Lead - From evidence to implementation

During this Thursday's #UKEdReschat there was a lively discussion hosted by @StuartKime which focussed on the implementation of research and the ingredients associated with successful implementation.

However, for me this cycle of implementation misses out a fundamental step - how to bring together all the evidence and then make a decision about how to proceed. For as Alonso-Coello et al. (2016) note:

Often the process that decision-makers used, the criteria that they consider and the evidence that they used to reach their judgments are unclear.  They may omit important criteria, give undue weight to some criteria, or not use the best available evidence.  Systematic and transparent systems for decision-making can help to ensure that all important criteria are considered and that the best available research evidence informs decisions. (p. 1)

Adopting some form of 'evidence' framework can have a number of benefits for school leaders. Alonso-Coello et al. (2016) identify a number of such benefits, which I have adapted for use in a school setting.
  • You and your fellow decision-makers will have an improved understanding of the advantages and disadvantages of the various actions being proposed
  • All important criteria will be included in the decision-making process
  • You will have a concise summary of all the best available evidence - be it research evidence, school data, stakeholder views or practitioner expertise
  • Colleagues will be in a better position to understand the decisions made by senior leadership teams and the evidence supporting those decisions

As mentioned in a previous post, evidence-based school leaders make explicit the criteria they use to make a decision.  In the context of your school, these criteria may well change depending on what domain and sub-domain of school leadership and management you are concerned with (Neeleman, 2017).  The criteria for making decisions about teaching and learning may well be different to the criteria you apply to making financial decisions.  In addition, you may want to take into account whether criteria are adjusted for different parts of the organisation: the criteria being applied at, say, the level of a MAT board may well be different to how the criteria are applied at Key Stage 1 in a primary school.  Using Alonso-Coello et al. (2016) as a starting point, let’s look at some of the criteria that could be applied to decision-making (see Figure 1).

Figure 1: Evidence to Decision Template

  • Priority of the problem: Is the issue an important problem for which a remedy is sought and that can be locally implemented?
  • Benefits: How substantial are the desirable anticipated effects?
  • Costs: How substantial are the undesirable anticipated effects?
  • Certainty of the evidence: How robust and secure are the different sources of evidence - research, practitioner expertise, stakeholder views and school data?
  • Balance: Does the balance of the desirable and undesirable effects favour the intervention or the comparator?
  • Resource use: How large are the resource requirements - attention, time, money, professional learning? Does the balance of costs and benefits favour the intervention or the comparator?
  • Equity: What impact does the decision have on educational equity? Will it help close gaps in attainment? Are there important ethical issues which need to be taken into account?
  • Acceptability: Are there key stakeholders - teachers, parents, trustees - who would not accept the distribution of the benefits, harms and costs? Would the intervention adversely affect the autonomy of a teacher, department, school or MAT?
  • Feasibility: Are there important barriers that are likely to limit the feasibility of implementing the intervention (option) or require consideration when implementing it? Is the intervention or strategy sustainable?
  • Additional comments and recommendation

Furthermore, application of the above framework needs to be seen in the context of the strength or otherwise of the evidence - be it research, practitioner expertise, school data or stakeholder views.  However, that is another discussion which I will explore in a future post.
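One practical way to make such a framework systematic and transparent is simply to record each judgement, and the evidence behind it, in a structured form. The sketch below does this for the template in Figure 1; the field names, example entries and summary logic are my own assumptions for illustration, not part of the GRADE EtD framework itself.

```python
# An illustrative sketch of the Evidence to Decision template as a
# structured record a leadership team might fill in. Field names and
# the summary logic are assumptions for illustration only.

from dataclasses import dataclass, field

@dataclass
class EtDJudgement:
    element: str        # e.g. "Priority of the problem"
    criteria: str       # the guiding question(s) from the template
    judgement: str      # the team's judgement against that criterion
    evidence: list = field(default_factory=list)  # research, school data, etc.

@dataclass
class EvidenceToDecision:
    intervention: str
    judgements: list = field(default_factory=list)
    recommendation: str = ""

    def add(self, element, criteria, judgement, evidence=()):
        self.judgements.append(
            EtDJudgement(element, criteria, judgement, list(evidence)))

    def summary(self):
        """One line per element, so the reasoning behind a decision is
        explicit and open to challenge by colleagues."""
        lines = [f"Decision record: {self.intervention}"]
        lines += [f"- {j.element}: {j.judgement}" for j in self.judgements]
        if self.recommendation:
            lines.append(f"Recommendation: {self.recommendation}")
        return "\n".join(lines)

# Hypothetical worked example.
etd = EvidenceToDecision("Whole-school retrieval practice programme")
etd.add("Priority of the problem",
        "Is the issue an important problem that can be locally remedied?",
        "Yes - identified in school data",
        evidence=["KS2 outcomes", "department review"])
etd.add("Certainty of the evidence",
        "How robust are the different sources of evidence?",
        "Moderate - strong research base, limited local data")
etd.recommendation = "Pilot in Year 5 before whole-school rollout"
print(etd.summary())
```

The point of such a record is not the code but the habit it enforces: every criterion gets an explicit judgement linked to named evidence, so the decision can later be examined, critiqued and revisited.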

And finally

Having reflected on the EEF school improvement cycle, it seems to me that insufficient attention is being paid to how you turn evidence into a decision, and to the processes necessary to support evidence-based decision-making - and this represents a fundamental flaw in the five-stage process put forward.  Evidence-based decision-making is so much more sophisticated than a simple model of priorities, external research, implementation, evaluation and mobilisation: it involves critical appraisal of multiple sources of evidence, aggregation of that evidence, and subsequent integration of the evidence into the decision-making process.


References


ALONSO-COELLO, P., OXMAN, A. D., MOBERG, J., BRIGNARDELLO-PETERSEN, R., AKL, E. A., DAVOLI, M., TREWEEK, S., MUSTAFA, R. A., VANDVIK, P. O., MEERPOHL, J., GUYATT, G. H. & SCHÜNEMANN, H. J. 2016. GRADE Evidence to Decision (EtD) frameworks: a systematic and transparent approach to making well informed healthcare choices. 2: Clinical practice guidelines. BMJ, 353.


NEELEMAN, A.-M. 2017. Grasping the scope of school autonomy: a classification scheme for school policy practice. BELMAS Conference, Stratford-upon-Avon, England.


The school research lead and the evidence-based pep talk

As a school research lead you may often be called upon to give colleagues a ‘pep talk’ about the importance of research and evidence to your school. As such, it seems sensible to look at the research and evidence about motivating colleagues through the use of pep talks.  So this post will look at the recent work of McGinn (2017), who draws upon Mayfield, Mayfield and Kopf (1995, 1998) and their research into Motivating Language Theory.

Motivating Language Theory (MLT)

MLT suggests there are three elements of motivating language, which once fully understood can be used to give more ‘motivating’ pep talks.  These elements are:

Uncertainty-reducing language – where ‘leaders provide information about precisely how to do the task at hand, giving easily understandable instructions, good definitions of tasks, and detail on how performance will be evaluated.’

Empathetic language – showing concern for the audience as human beings by including ‘praise, encouragements, gratitude, and acknowledgement of a task’s difficulty.’

Meaning-making language – ‘this explains why a task is important.  This involves linking the organisation’s purposes or mission to listeners’ goals.  Often meaning-making language includes the use of stories …. of how the work has made a real difference in the lives of customers or the community.’ (McGinn, 2017, p. 134)

As McGinn (2017) notes, a good pep talk, given to either a group or an individual, will contain aspects of all three elements.  However, getting the right mix will depend on the context, who is in your audience, and how well you know them.

What are the implications for the school research lead?

First, it is really important that you are in command of both terminology and processes – that you can explain the difference between research, practitioner inquiry and evidence-based practice.  In other words, what is it that you are asking colleagues to do?  This means that when you talk about research engagement or research involvement, you can provide practical examples of the differences between the two.  If colleagues want to be ‘research engaged’, they are given very clear guidance about how to go about it – which probably involves some very small but clearly achievable task directly relevant to their teaching.

Second, understand that for many colleagues ‘research’ is scary stuff.  They may not have read any educational research in years – they might not know what is meant by effect size and may be far more concerned about teaching Y11 Group C on a Friday afternoon.  Acknowledge that becoming research engaged will take time and effort, and that to create the time and space for research, specific actions are being taken to reduce workload.  My own view is that for every new initiative you start with colleagues, you should be looking to get rid of at least two if not three other tasks which colleagues are expected to complete.  In other words, subtract tasks before you add tasks.  Furthermore, if colleagues have done a great job in, say, developing a journal club – thank them.

Third, connect to why research and evidence are important and central to the work of the school.  Research and evidence are essential if we are going to provide the best possible education for our pupils, so that they can have the best possible life-chances.  Research and evidence are vital if we are going to make use of our most scarce resource – colleagues’ time.  Research and evidence are vital if we are going to make decisions to protect those activities that really matter, especially at a time of financial pressure.  Research and evidence need to be part and parcel of our own professional development if we are to learn and progress throughout our careers in education.  Research and evidence are a prerequisite if we are going to keep up with the latest developments both in our subjects and in teaching, especially given how quickly knowledge can depreciate and become out of date.  And research and evidence help us challenge our own biases and prejudices – and make us just stop, think and reflect that, you know what, I might just be wrong.

References

Mayfield, J., Mayfield, M., & Kopf, J. (1995). Motivating Language: Exploring Theory with Scale Development. The Journal of Business Communication (1973), 32(4), 329-344. doi:10.1177/002194369503200402
Mayfield, J., Mayfield, M., & Kopf, J. (1998). The effects of leader motivating language on subordinate performance and satisfaction. Human resource management, 37(3), 235-248.
McGinn, D. (2017). The science of pep talks. Harvard business review, 95(4), 133-137.


The School Research Lead and the 'backfire effect' - should you be worried?


One of the challenges faced by school research leads is the need to engage with colleagues who have different views about the role of evidence in bringing about improvement.  Indeed, these differences are unlikely to be restricted to the role of evidence; they are also likely to include differing views about the research evidence itself.  What’s more, in a widely cited article, Nyhan and Reifler (2010) show how attempts to correct misconceptions through the use of evidence frequently fail to reduce the misconceptions held by a target group.  Indeed, these attempts at correcting misconceptions may inadvertently increase misconceptions in the target group, i.e. the so-called ‘backfire effect’.

Now if there is a ‘backfire effect’, this could have profound implications for both evidence-based school leaders and school research leads as they attempt to engage in dialogue to correct the misconceptions which may be held by colleagues about research.  This matters because school research leads need to know whether it is possible to engage in constructive dialogue in which misconceptions can be corrected.  If it is not, then school research leads will need to give careful consideration to how they go about disseminating scholarly research, as it may leave major opinion-formers within a school with an even less favourable view of research as a means of bringing about improvement.

However, there may be an even bigger issue: the ‘backfire effect’ may not exist at all, and even if it does, it may well be the exception rather than the norm.  In a peer-reviewed paper, Wood and Porter (2016) present results from four experiments involving over 8,000 subjects, and found that, on the whole, individuals tended to take on board factual information even when it challenged their partisan and ideological commitments.

So what are the implications for you, as you attempt to develop a school climate and culture based on evidence use?

First, as Wood and Porter noted, the backfire effect appeared to be a product of question wording, which suggests it is important to really think through how information is presented to colleagues and how subsequent questions are phrased.

Second, Wood and Porter note that in general respondents tend to shy away from cognitive effort and will deploy strategies to avoid it, whereas the backfire effect relies on substantial cognitive effort – developing new considerations to offset the cognitive dissonance generated by new information.  However, the research which identified the backfire effect often took place in university settings, where respondents, be they students or teaching staff, often take great delight in cognitive effort.  Indeed, the school staff room may have a number of similarities with those university settings.  As such, schools may be particularly prone to seeing a disproportionate number of incidents of the ‘backfire effect’.

Third, Wood and Porter note that their findings are not without limitations; for example, just because individuals have been presented with information to address their misconceptions does not mean that this information has been retained.


And finally, it’s important to note that even when relatively new ideas and concepts break out from the academy and reach the public domain, that does not mean they should be taken as ‘gospel’; rather, they should be seen as something which has more than surface plausibility.  That said, even when something is plausible, that does not mean it is the only explanation for what is taking place.

References

Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303-330.

Wood, T., & Porter, E. (2016). The elusive backfire effect: Mass attitudes' steadfast factual adherence.