What to do when faced with a ‘tsunami’ of expert opinion

As we approach the end of the academic year, the educational conference season appears to be in full swing, with attendees and delegates facing a ‘tsunami’ of expert opinion; last weekend, for example, saw ResearchED Rugby.  Indeed, over the next few weeks I will be making my own contribution to that ‘tsunami’ by speaking at the Festival of Education at Wellington College, ResearchSEND at the University of Wolverhampton and the Hampshire Collegiate School’s teaching and learning conference.  This got me thinking about ‘expert opinion’ and the circumstances under which the opinions expressed by so-called expert speakers at conferences should be accepted.  By speaking at a conference I am asking colleagues – if not to accept my so-called ‘expert’ opinion – to spend some of their precious time thinking about what I have to say.  On the other hand, I will also be listening to speakers at these conferences, so the question I have to ask myself – particularly if I don’t know much about the speaker or the speaker’s subject – is: under what circumstances should I accept their expert opinion?

Accepting expert opinion

Over recent weeks I have been exploring the role of research evidence in practical reasoning, and in doing so I have come across the work of Hitchcock (2005), who cites Ennis (1962, pp. 196-197).  Ennis identified seven tests of expert opinion:

1) The opinion in question must belong to some subject matter in which there is expertise. An opinion can belong to an area of expertise even if the expertise is not based on formal education; there are experts on baseball and on stamps, for example.

2) The author of the opinion must have the relevant expertise. It is important to be on guard against the fallacy of ‘expert fixation’, accepting someone’s opinion because that person is an expert, when the expertise is irrelevant to the opinion expressed.

3) The author must use the expertise in arriving at the opinion. The relevant data must have been collected, interpreted, and processed using professional knowledge and skills.

4) The author must exercise care in applying the expertise and in formulating the expert opinion.

5) The author ideally should not have a conflict of interest that could influence, consciously or unconsciously, the formulated opinion. For example, the acceptance of gifts from the sales representative of a pharmaceutical company can make a physician’s prescription of that company’s drug more suspect.

6) The opinion should not conflict with the opinion of other qualified experts. If experts disagree, further probing is required.

7) The opinion should not conflict with other justified information. If an expert opinion does not fit with what the reasoner otherwise knows, one should scrutinize its credentials carefully and perhaps get a second opinion.

Accepting expert opinion at conferences

So what are the implications of Ennis’s seven tests of expert opinion for both giving and receiving expert opinion?  Well, if you are an attendee at a conference listening to so-called experts, it seems to me that you should be asking the following questions.
  • Is the speaker talking about a subject they have expertise in, or are they speaking because they are deemed to be an ‘expert’ of some kind?  You may also want to make a distinction between experience and expertise, as the two should not be conflated. In other words, just because someone has experience of doing something does not automatically make them an expert in that subject.
  • Does the speaker make it clear that there are limitations to what they are proposing or putting forward?  If they don’t, then that’s a real red flag, as very little in education is without limitations or weaknesses.
  • In all likelihood there will be alternative perspectives on the speaker’s topic, so what they say will not be the last word on the matter.  You’ll certainly have to do some further reading or investigation before bringing it back to your school as a solid proposal.  Does the presenter make suggestions about where to look?
  • What are the speaker’s ‘interests’ in putting forward this point of view, be they reputational, financial or professional?
  • Just because an expert’s view disagrees with your own experience, that does not invalidate your experience – it just means you need to get a second or third opinion.
  • Does the speaker present a clear argument – do they clearly lay out the components of an argument, be it the data, claim, warrant and supporting evidence?
  • Is the speaker more concerned with ‘entertaining’ you with flashy slides than with helping you think through the relevant issues for yourself?
What do Ennis’s seven tests mean for ‘experts’ making presentations?
  • Presenters need to be humble, acknowledge the limits of their expertise and be wary of projecting ‘false certainty’ in the strength of their arguments.
  • Make sure their slides, or whatever format they have used for their presentation, specifically mention the limitations of their argument.
  • Include a list of references covering the limitations and counter-arguments mentioned in the previous point.
  • Declare any ‘conflicts of interest’ they may have which might influence their presentation.
  • Think long and hard about getting the balance right between providing an ‘education’ and being ‘entertaining’.
And finally


Attendance at a conference is often a great day out away from the hassle of a normal working day.  Ironically, if a conference does not lead to a substantive additional workload in terms of further reading and inquiry, then it will have been an entertaining and pleasant day out – but that’s all it will have been.

PS 
If you see and hear me speak at a conference and I don't live up to these principles, please let me know.

Guest post - Meta-analysis: Magic or Reality, by Professor Adrian Simpson

Recently I had the good fortune to have an article published in the latest edition of the Chartered College of Teaching’s journal Impact, in which I briefly discussed the merits and demerits of meta-analyses (Jones, 2018).  In that article I leant heavily on the work of Adrian Simpson (2017), who raises a number of technical arguments against the use of meta-analysis.  However, since then a blog post by Kay, Higgins and Vaughan (2018) has been published on the Evidence for Learning website, which seeks to address the issues raised in Simpson’s original article about the inherent problems associated with meta-analyses. In this post Adrian Simpson responds to the counter-arguments raised on the Evidence for Learning website.

Magic or reality: your choice, by Professor Adrian Simpson, Durham University

There are many comic collective nouns whose humour contains a grain of truth. My favourites include "a greed of lawyers", "a tun of brewers" and, appropriately here, "a disputation of academics". Disagreement is the lifeblood of academia and an essential component of intellectual advancement, even if that is annoying for those looking to academics for advice. 

Kay, Higgins and Vaughan (2018, hereafter KHV) recently published a blog post attempting to defend using effect size to compare the effectiveness of educational interventions, responding to critiques (Simpson, 2017; Lovell, 2018a). Some of KHV is easily dismissed as factually incorrect. For example, Gene Glass did not create effect size: Jacob Cohen wrote about it in the early 1960s. Nor is the toolkit methodology applied consistently: at least one strand [setting and streaming] is based only on results for low attainers while other strands are not similarly restricted (quite apart from the fact that most studies in the strand are about within-class grouping!).

However, this response to KHV is not about extending the chain of point and counter-point, but about asking teachers and policy makers to check the arguments for themselves: decisions about using precious educational resources need to lie with you, not with feuding faculty. The faculty need to state their arguments as clearly as possible, but readers need to check them. If I appeal to a simulation to illustrate the impact of range restriction on effect size (which I do in Simpson, 2017), can you repeat it – does it support the argument? If KHV claim the EEF Teaching and Learning Toolkit uses ‘padlock ratings’ to address the concern about comparing and combining effect sizes from studies with different control treatments, read the padlock rating criteria – do they discuss equal control treatments anywhere? Dig down and choose a few studies that underpin the Toolkit ratings – do the control groups in different studies have the same treatment?
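To make that kind of checking concrete, here is a minimal Python sketch of the range-restriction point (an illustration written for this post with made-up numbers, not the simulation from Simpson, 2017): an identical raw gain is reported as a much larger effect size when the sample covers a narrower range of prior attainment.

```python
# A rough, self-contained illustration (not the original simulation from
# Simpson, 2017; all numbers are made up) of how range restriction alone
# changes an effect size.
import numpy as np

rng = np.random.default_rng(42)


def cohens_d(treated, control):
    """Standardised mean difference using the pooled standard deviation."""
    n_t, n_c = len(treated), len(control)
    pooled_var = ((n_t - 1) * treated.var(ddof=1) +
                  (n_c - 1) * control.var(ddof=1)) / (n_t + n_c - 2)
    return (treated.mean() - control.mean()) / np.sqrt(pooled_var)


# Prior attainment for a full cohort, and a restricted sample containing only
# pupils below the median (as in a study reporting results for low attainers only).
full_cohort = rng.normal(loc=100, scale=15, size=10_000)
restricted = full_cohort[full_cohort < np.median(full_cohort)]

gain = 5.0  # the intervention adds the same five raw points in both cases
print("Effect size, full range:      ", round(cohens_d(full_cohort + gain, full_cohort), 2))
print("Effect size, restricted range:", round(cohens_d(restricted + gain, restricted), 2))
# The restricted sample has a smaller spread, so the identical five-point gain
# is reported as a substantially larger effect size.
```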

So, in the remainder of this post, I invite you to test our arguments: are my analogies deceptive or helpful? Re-reading KHV’s post, do their points address the issues or are they spurious?

KHV’s definition of effect size shows it is a composite measure. The effectiveness of the intervention is one component, but so is the effectiveness of the control treatment, the spread of the sample of participants, the choice of measure etc. It is possible to use a composite measure as a proxy for one component factor, but only provided the ‘all other things equal’ assumption holds.
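To see that composite nature directly: Cohen’s d divides the difference between the intervention and control means by the pooled standard deviation, so the control treatment, the spread of the sample and the choice of measure are all baked into the number. The short sketch below (illustrative figures only, not data from any real study) shows the same intervention scores yielding very different effect sizes depending solely on what the control group received.

```python
# Illustrative figures only: the same intervention results give different
# effect sizes depending on what the control group received, because
# d = (intervention mean - control mean) / pooled standard deviation.
import numpy as np

rng = np.random.default_rng(1)


def cohens_d(treated, control):
    n_t, n_c = len(treated), len(control)
    pooled_var = ((n_t - 1) * treated.var(ddof=1) +
                  (n_c - 1) * control.var(ddof=1)) / (n_t + n_c - 2)
    return (treated.mean() - control.mean()) / np.sqrt(pooled_var)


intervention = rng.normal(72, 10, 500)        # scores under the intervention
business_as_usual = rng.normal(65, 10, 500)   # control doing nothing extra
active_control = rng.normal(70, 10, 500)      # control given a rival programme

print("d versus business as usual:", round(cohens_d(intervention, business_as_usual), 2))
print("d versus active control:   ", round(cohens_d(intervention, active_control), 2))
# Same intervention, same measure, same spread of pupils: the effect size
# changes simply because the control treatment changed.
```

The same shift would appear if the spread of the sample or the outcome measure changed instead of the control treatment.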

In the podcast I illustrated the ‘all other things equal’ assumption by analogy: when is the weight of a cat a proxy for its age? KHV didn’t like this, so I’ll use another: clearly the thickness of a plank of wood is a component of its cost, but when can the cost of a plank be a proxy for its thickness? I can reasonably conclude that one plank of wood is thicker than another plank on the basis of their relative cost only if all other components impinging on cost are equal (e.g. length, width, type of wood, timberyard’s pricing policy) and I can reasonably conclude that one timberyard on average produces thicker planks than another on the basis of relative average cost only if those other components are distributed equally at both timberyards. Without this strong assumption holding, drawing a conclusion about relative thickness on the basis of relative cost is a misleading category error.

In the same way, we can draw conclusions about relative effectiveness of interventions on the basis of relative effect size only with ‘all other things equal’; and we can compare average effect sizes as a proxy for comparing the average effectiveness of types of interventions only with ‘all other things equal’ in distribution.

So, when you are asked to conclude that one intervention is more effective than another because one study resulted in a larger effect size, check if ‘all other things equal’ holds (equal control treatment, equal spread of sample, equal measure and so on). If not, you should not draw the conclusion.

When the Teaching and Learning Toolkit invites you to draw the conclusion that the average intervention in one area is more effective than the average intervention in another because its average effect size is larger, check if ‘all other things equal’ holds for distributions of controls, samples and measures. If not, you should not draw the conclusion.
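As a purely hypothetical worked example of what ‘equal in distribution’ means here: suppose every intervention in two Toolkit-style strands produces exactly the same five-point raw gain, but the studies in one strand happen to use broad standardised tests while the studies in the other use narrow proximal measures. The strand averages then diverge for reasons that have nothing to do with relative effectiveness.

```python
# Purely hypothetical arithmetic: two strands whose interventions all produce
# an identical 5-point raw gain, but whose studies use outcome measures with
# different spreads (broad standardised tests vs narrow proximal tests).
raw_gain = 5.0

strand_a_sds = [15.0, 15.0, 14.0, 16.0]   # broad, distal measures
strand_b_sds = [8.0, 7.0, 9.0, 8.0]       # narrow, proximal measures

strand_a_avg = sum(raw_gain / sd for sd in strand_a_sds) / len(strand_a_sds)
strand_b_avg = sum(raw_gain / sd for sd in strand_b_sds) / len(strand_b_sds)

print("Strand A average effect size:", round(strand_a_avg, 2))
print("Strand B average effect size:", round(strand_b_avg, 2))
# Identical underlying gains, yet Strand B's average effect size is nearly
# twice Strand A's, purely because its measures have smaller spreads.
```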

Don’t rely on disputatious dons: dig in to the detail of the studies and the meta-analyses. Does ‘feedback’ use proximal measures in the same proportion as ‘behavioural interventions’? Does ‘phonics’ use restricted ranges in the same proportion as ‘digital technologies’? Does ‘metacognition’ use the same measures as ‘parental engagement’? Is it true that the toolkit relies on ‘robust and appropriate comparison groups’, and would that anyway be enough to confirm the ‘all other things equal’ assumption?

KHV describe my work as ‘bad news’ because it destroys the magic of effect size. ‘Bad news’ may be a badge of honour to wear with the same ironic pride as decent journalists wear autocrats’ ‘fake news’ labels. However, I agree it can feel a little cruel to wipe away the enchantment of a magic show; one may think to oneself ‘isn’t it kinder to let them go on believing this is real, just for a little longer?’ However, educational policy making may be one of those times when we have to choose between rhetoric and reason, or between magic and reality. Check the arguments for yourself and make your own choice: are effect sizes a magical beginning of an evidence adventure, or a category error misdirecting teachers’ effort and resources?
  
References

Kay, J., Higgins, S. and Vaughan, T. (2018). The magic of meta-analysis. http://evidenceforlearning.org.au/news/the-magic-of-meta-analysis/ (accessed 28/5/2018).

Lovell, O. (2018a). ERRR #017. Adrian Simpson critiquing the meta-analysis. Education Research Reading Room Podcast. http://www.ollielovell.com/errr/adriansimpson/ (accessed 25/5/2018).

Simpson, A. (2017). The misdirection of public policy: Comparing and combining standardised effect sizes. Journal of Education Policy, 32(4), 450-466.

Evidence-informed practice and the dentist's waiting room

Sometimes the inspiration for a blogpost comes from an unexpected place – in this instance, my dentist’s waiting room.  Now I happen to be a regular visitor to my dentist because back in 2005 I had a ‘myocardial infarction’, better known as a heart attack.  Given that at the time I appeared to be fit and active and had completed many triathlons, my heart attack was ‘perplexing’ both for me and for the medical professionals providing my treatment.  However, to cut a very long story short, a contributory factor to my heart attack appeared to be a bad case of gum disease, which research evidence suggests is related to an increased risk of heart disease (Dhadse, Gattani and Mishra, 2010).  And that is why I was in my dentist's waiting room, about to have my teeth cleaned and gums ‘gouged’.

Now you may be asking: what on earth does an ‘evidence-based’ trip to the dentist have to do with evidence-based or, if you prefer, evidence-informed practice within schools?  Well, it just so happened that while in the dentist’s waiting room I was reading Hans Rosling’s recently published book, Factfulness: Ten Reasons We’re Wrong About the World – and Why Things Are Better Than You Think, when I came across this paragraph about mistrust, fear and the inability to ‘hear’ data-driven arguments:

In a devastating example of critical thinking gone bad, highly educated, deeply caring parents avoid the vaccinations that would protect their children from killer diseases.  I love critical thinking and I admire scepticism, but only in a framework that respects evidence.  So if you are sceptical about the measles vaccinations, I ask you to do two things.  First, make sure you know what it looks like when a child dies of measles.  Most children who catch measles recover, but there is still no cure and even with the best modern medicine, one or two in every thousand will die from it.  Second, ask yourself, “What kind of evidence would convince me to change my mind about vaccination?”  If the answer is “no evidence could ever change my mind about vaccination,” then you are putting yourself outside evidence-based rationality, outside the very critical thinking that first brought you to this point.  In that case, to be consistent in your scepticism about science, next time you have an operation please ask your surgeon not to bother washing her hands.  (p. 117)

So what are the implications of Rosling et al.’s critique of critical thinking gone wrong for your role as a school leader wishing to promote the use of evidence within your school?  At first glance, it seems to me that there are three implications.

First, ask yourself, about an issue on which you have pretty strong views – be it mixed-ability teaching, grammar schools and the 11 plus, or progressive vs traditional education – “What evidence would it take to change my mind?”  This is important, as a critical element of being a conscientious evidence-informed practitioner is actively seeking alternative perspectives.  And if you are not at least willing to be persuaded by those perspectives, there is little point seeking them out in the first place.

Second, when working with colleagues who may ‘reject’ evidence-informed practice, ask them the same question: “What evidence would it take to change your mind?”  If they respond “there is no evidence that would get me to change my mind”, ask them the following: “OK, is there a teaching approach you particularly favour, and if so, why?” – and then follow up with: “Tell me more.”

Third, there may be occasions, when working with colleagues who are resistant to evidence-informed practice, when you have to resort to a variant of the ‘surgeon with dirty hands’ argument. Ask the following: “Would you like your own children, or the children of family members, to be taught by a teacher or teachers who:
  • Do not have a deep knowledge and understanding of the subjects they teach
  • Have little or no understanding about how pupils’ think about the subject they are teaching
  • Are not very good at asking questions
  • Do not review previous learning
  • Fail to provide model answers
  • Do not give adequate time for pupils to practise and embed their skills
  • Introduce topics in a random manner
  • Have poor relationships with their pupils
  • Have low expectations of their pupils
  • Do not value effort and resilience
  • Cannot manage pupil behaviour
  • Do not have clear rules and expectations
  • Make inefficient and ineffective use of time in lessons
  • Are not very clear in what they are trying to achieve with pupils
  • Haven’t really thought about how learning happens and develops or how teaching can contribute to it.
  • Give little or no time to reflecting on their professional practice
  • Provide little or no support for colleagues
  • Are not interested in liaising with pupils’ parents
  • Do not engage in professional development?” (amended from Coe, Aloisi, et al., 2014)

And if they answer “No – we would not want our children or family members taught by such teachers”, then you might respond by saying: “You might not believe in evidence-informed practice, though you would appear to agree with the evidence on ineffective teaching.”

And finally

Working with colleagues who have different views from you on the role of evidence-informed practice is inevitable.  What matters is not that you have different views, but rather how you go about finding the areas you can agree on, which then gives you something to work on in future conversations.

References

Coe, R., Aloisi, C., Higgins, S. and Major, L. E. (2014). What Makes Great Teaching? Review of the Underpinning Research. London.
Dhadse, P., Gattani, D. and Mishra, R. (2010). The Link between Periodontal Disease and Cardiovascular Disease: How Far We Have Come in Last Two Decades? Journal of Indian Society of Periodontology. 14(3). 148-154. (accessed 29/5/2018)

Rosling, H., Rosling, O. and Rosling Rönnlund, A. (2018). Factfulness: Ten Reasons We’re Wrong About the World – and Why Things Are Better Than You Think. London: Sceptre.

Guest Post: Unleashing Great Teaching by David Weston and Bridget Clay


This week's post is a contribution from David Weston and Bridget Clay who are the authors of Unleashing Great Teaching: the secrets to the most effective teacher development, published May 2018 by Routledge. David (@informed_edu) is CEO of the Teacher Development Trust and former Chair of the Department for Education (England) CPD Expert Group. Bridget (@bridge89ec) is Head of Programme for Leading Together at Teach First and formerly Director of School Programmes at the Teacher Development Trust.


Unleashing Great Teaching 

What if we were to put as much effort into developing teachers as we did into developing students? How do we find a way to put the collective expertise of our profession at every teacher’s fingertips? Why can’t we make every school a place where teachers thrive and students succeed? Well, we can, and we wrote Unleashing Great Teaching: the secrets to the most effective teacher development to try and share what we’ve discovered in five years of working with schools to make it happen.  Quality professional learning needs quality ideas underpinning it. But, by default, we are anything but logical in the way that we select the ideas that we use. A number of psychological biases and challenges cause us to reject the unfamiliar.

We all have existing mental models which we use to explain and predict. To challenge one of these models implies that much of what we have thought and done will have been wrong. We all need to guard against this in case it leads us to reject new ideas and approaches. This is nothing new.
In 1846 a young doctor, Ignaz Semmelweis, suspected that the cause of the 16% maternal mortality rate in one clinic might be the failure of doctors to wash their hands. When he ran an experiment and insisted that doctors wash their hands between each patient, deaths from fever plummeted. However, his findings ran so counter to established practice and norms that they were not only rejected but widely mocked, despite being obviously valid. This reactionary short-sightedness gave rise to the term the Semmelweis Reflex: ‘the reflex-like tendency to reject new evidence or new knowledge because it contradicts established norms, beliefs or paradigms.’

An idea that contradicts what we already think, which comes from a source that we don’t feel aligned to, or which makes us feel uneasy, is highly likely to be rejected for a whole range of reasons, even if there is a huge amount of evidence that it is far better than our current approach.

Reasons for rejection

Confirmation bias is really a description of how our brains work. When we encounter new ideas, we can only make sense of them based on what’s already in our heads, adding or amending existing thinking. This means that anything we encounter that is totally unfamiliar is less likely to stick than something partially familiar. Similarly, an idea that is mostly aligned with our existing thinking is more likely to stick than something completely at odds – the latter is a bit like a weirdly-shaped puzzle-piece: it’s very hard to find a place for it to go.  The effect of all of this is that when we hear an explanation, we remember the ideas that confirm or support our existing thinking and tend to reject or forget the ideas that don’t.

But it’s not just the nature of the ideas that affect our ability to learn. If an existing idea is associated with the memory of lots of effort and hard work, it becomes harder to change. This sunk cost bias means that we excessively value things we’ve worked hard on, no matter whether they’re actually very good or not. This bias is also known as the Ikea Effect – everyone is rather more proud of their flatpack furniture than this cheap and ubiquitous item perhaps deserves, owing to the effort (and anger!) that went into its construction.

We also see a number of social effects that mean that we don’t just listen to other people’s ideas in a neutral way. The Halo Effect is the way we tend to want to believe ideas from people we like and discount ideas from people we don’t. Of course, none of that bears any relation to whether the ideas are good. Public speakers smile a lot and make us laugh in order to make the audience feel good and thus become more likely to believe them. Two politicians of different parties can suggest the exact same idea, but supporters of the red party are much more likely to hate the idea if they hear it from the blue politician, and vice versa. A teacher from a very different type of school is much less likely to be believed than someone you can relate to more – though of course none of this necessarily affects whether their ideas are good.

If someone does present an idea that conflicts with our current thinking and beliefs, they run the risk of Fundamental Attribution Error. When we come into conflict with others, we rush to assume that the other person is of bad character. Any driver who cuts you up is assumed to be a terrible driver and a selfish person, but if you cut someone else up and they hoot then you generally get annoyed with them for not letting you in. A speaker or teacher who tells you something you don’t like is easily dismissed as ignorant, annoying or patronising.

Using evidence to support professional learning 

So how do we ensure that we’re using quality ideas to underpin professional learning? In our book we lay out some tools to help you overcome your inevitable biases.

Firstly, it’s very useful to look out for systematic reviews. These are research papers where academics have carefully scoured all of the world’s literature for anything relevant to a topic, then categorised it by type, size and quality of study, putting more weight on findings from bigger, higher quality studies and less on smaller, poorly-executed research. They bring all of the ideas together, summarising what we appear to know with confidence, what is more tentative, and where there are areas where the evidence is conflicting or simply lacking.

If you are interested in a topic, such as ‘behaviour management’ or ‘reading instruction’, then it’s a really good idea to tap it into a search engine and add the words ‘systematic review’. Look for any reviews conducted in this area to get a much more balanced view of what is known.

Secondly, raise a big red flag when you can feel yourself getting excited and enthusiastic about an idea. That’s your cue to be extra careful about confirmation bias and to actively seek out opposing views. It’s a very helpful idea to take any new idea and tap it into a search engine with the word ‘criticism’ after – e.g. ‘reading recovery criticism’ or ‘knowledge curriculum criticism’.

Thirdly, be a little more cautious when people cite lists of single studies to prove their point. You don’t know what studies they’ve left out or why they’ve only chosen these. Perhaps there are lots of other studies with a different conclusion – only a good systematic review can find this out.

Finally, be cautious of single-study enthusiasm, where newspapers or bloggers get over-excited about one new study which they claim changes everything. It may well be confirmation bias – or indeed, if they are criticising it, that too could be confirmation bias.

To conclude

Of course, good quality ideas are only one ingredient. In our book we also explore the design of the professional learning process, offer a new framework to think about the outcomes you need in order to assist in evaluation, and discuss the leadership, culture and processes needed to bring the whole thing together. There are many moving parts, but if schools can pay the same attention to teachers’ learning as they do to students’ learning, we can truly transform our schools and unleash the best in teachers.



The school research champion and the evidence-rich school

Teachers, middle leaders and senior leaders interested in bringing about greater use of evidence within their schools are exposed to a wide range of terminology.  As such, they have to be able to distinguish between, or at least be aware of the possible differences between, research-based practice, research-informed practice, evidence-based practice and evidence-informed practice.  And now there is a ‘new kid on the block’ – evidence-rich/enriched practice.  So in this post I am going to look at: what evidence-rich/enriched practice could mean; what research into evidence-enriched practice looks like in a health-care setting; and the implications of the preceding discussion for those interested in the use of evidence within schools.

Evidence-enriched practice

Stoll (2017) describes evidence-enriched practice as involving teachers and school leaders using external research evidence, collecting and analysing data, and engaging in collaborative enquiry/research and development, with teachers and school leaders being very much in the driving seat in the use of evidence.

Reflecting on this definition a number of issues need to be considered.

First, existing definitions of evidence-based practice, such as Barends, Rousseau, et al. (2014), already make great play of drawing on different sources of evidence – research evidence, organisational data, stakeholder views and practitioner expertise – and, if done properly, evidence-based practice will already be evidence-rich.

Second, definitions of evidence-based medicine, such as Sackett, Rosenberg, et al. (1996), emphasise the role of patients in making decisions.  Indeed, evidence-based medicine is about patients and clinicians making decisions about patient care that are informed by the patient’s values and preferences.  Stoll’s definition is largely silent on the role of pupils and stakeholders in the decision-making process.

Third, the use of the ‘driving seat’ metaphor is quite interesting – in the driving seat of what: an evidence-informed pedal-powered go-kart or an evidence-based F1 racing car?

Fourth, evidence-based practice is about making decisions on the basis of the best available evidence, which for me is not the same as engaging in collaborative research and development.  R&D may subsequently feed into future evidence-based decisions, but it is a separate process.

Fifth, despite the above criticisms of Stoll’s notion of evidence-enriched practice, I welcome the emphasis on the collaborative nature of evidence-based practice, which has particular implications for school leadership: see Jones (2018 Forthcoming).

Evidence-enriched practice: lessons from health and social care sector

Regular readers of this blog will be aware that I often argue that there is much to learn from medicine and health care about evidence-based practice.  Accordingly, it seems sensible to see what research has been published in the medicine and health-care sectors on evidence-enriched practice.  A search on Google Scholar using the term ‘evidence-enriched practice’ brought me to this paper: Developing Evidence Enriched Practice in Health and Social Care with Older People (Andrews, Gabbay, et al., 2015).  This is a fascinating paper, which I will explore in more detail in future posts; however, for the purposes of this post I’m just going to highlight the various elements and sub-elements of evidence-enriched practice which were woven and interwoven into the project.

Element 1: Valuing and using a range of evidence

  • Research evidence
  • Practitioner knowledge and experiences
  • The voice of older people and carers
  • Organisational knowledge (policy imperatives, embedded systems and resources)

Element 2: Securing senior management buy-in and valuing and empowering participants


  • Appreciation and respect: valuing people and focusing on their strengths and the things that matter to them
  • Honesty: supporting people to ‘say it as it is’
  • Permission: encouraging people to be creatively humane, not just procedurally compliant
  • Mutual trust: developed through respectful conversations
  • Celebration: recognising and building on success, including the importance of ‘ordinary’, often little, things

Element 3: Capturing and presenting relevant evidence in accessible and engaging formats

  • Stories, quotes, pictures, music and poetry
  • Good practice from elsewhere
  • Normative frameworks
  • Provocative statements

Element 4: Facilitating the exploration and purposeful use of evidence

  • A simple approach to support dialogic learning using evidence as the stimulus
  • Working as a community of practice
  • Facilitating serendipity and weaving in evidence as the project developed

Element 5: Recognising and addressing national and local organisational circumstances and obstacles

  • National social policy and financial investment in social care services
  • National regulatory requirements and local policies and procedures
  • Managing relational risk
  • Managing risks to physical safety
  • Developing and using recording that enhances the provision of good care and support and quality assurance
  • Local organisational management culture
  • The problem of feeling ‘left out’

What should be immediately obvious is that in comparison to Stoll (2017) this is a far more comprehensive framework with which to describe an evidence-enriched environment.  In particular, it emphasises the role of senior leadership in creating the environment in which an evidence-enriched practice can flourish.  It also recognises the need to address national and local circumstances, and not to see them as a hindrance but as something which is an integral part of the ‘evidence environment’.  Finally, the role of older people and carers is fully acknowledged.  


What are the implications for those interested in the creation of evidence-enriched practice within schools?

First, education does not need to reinvent the ‘evidence-enriched wheel’, as there is much to learn from other sectors.  That does not mean what we learn will not have to be adapted, but it does mean we can ‘stand on the shoulders of others’.

Second, school leaders who think they will automatically build an evidence-enriched school culture by appointing a school research lead/champion need to think again.  School leaders need to give real consideration to whether the leadership and management culture and style of the school is consistent with the conditions necessary to create an evidence-enriched environment.  If it isn’t, but you want to do something about it, the starting point is your own conduct as a school leader. If you are not interested in deeply reflecting upon your own leadership practice, then you may be better off not trying to become evidence-enriched.

Third, ‘evidence-enriched’ teachers are part of a community of practice.  It’s not about individual teachers conducting teacher-led randomised controlled trials – it’s about deep and profound conversations with colleagues, pupils, parents and other stakeholders, based upon a culture of mutual respect.

Fourth, currently much of the research into evidence-informed practice focuses on how teachers and school leaders use research evidence.  This is far too narrow a focus, and greater emphasis should be placed on investigating how teachers and school leaders go about aggregating multiple sources of evidence and incorporating that evidence into the decision-making process.

Fifth, knowledge brokers – be they research schools or individual school research champions – need to consider different ways in which knowledge can be shared.  Newsletters are a very basic and safe way of sharing information – though probably not that effective – and we need to find far more ways of communicating ideas in accessible and interesting formats.

And finally

If you are interested in finding out more about what evidence-rich and evidence-enriched practice may look like in practice, the RSA will later this year be publishing a report, Learning About Culture, which looks at what works in cultural learning and at how to support schools and cultural organisations to use evidence from their own work and elsewhere to continuously improve their practice. Indeed, one of the intended key outcomes of the work is something the RSA describes as evidence-rich practice.


References

Andrews, N., Gabbay, J., Le May, A., Miller, E., O'Neill, M. and Petch, A. (2015). Developing Evidence Enriched Practice in Health and Social Care with Older People.
Barends, E., Rousseau, D. and Briner, R. (2014). Evidence-Based Management : The Basic Principles. Amsterdam. Center for Evidence-Based Management
Bath, N. (2018). Exploring What It Means to Be ‘Evidence-Rich’ in Practice. IOE London Blog. https://ioelondonblog.wordpress.com/2018/04/12/exploring-what-it-means-to-be-evidence-rich-in-practice/.
Jones, G. (2018 Forthcoming). Evidence-Based School Leadership: A Practical Guide. London. SAGE Publishing.
Sackett, D., Rosenberg, W., Gray, J., Haynes, R. and Richardson, W. (1996). Evidence Based Medicine: What It Is and What It Isn't. Bmj. 312. 7023. 71-72.
Stoll, L. (2017). Five Challenges in Moving Towards Evidence-Informed Practice. Impact. Interim issue.
Straus, S., Glasziou, P., Richardson, S. and Haynes, B. (2011). Evidence-Based Medicine: How to Practice and Teach It. (Fourth Edition). Edinburgh. Churchill Livingstone: Elsevier.