Supporting teachers to be evidence-based practitioners - what do we know?

If you have any kind of interest in the development of evidence-based/informed practice (EBP) within schools, then this blogpost is for you.

Even with the worldwide interest in evidence-based practice (EBP) as a core concept within medicine and healthcare, the evidence on how best to teach evidence-based practice is weak. A recently published systematic review, Albarqouni, Hoffmann, et al. (2018), found that most EBP educational interventions evaluated in controlled studies tended to focus on the critical appraisal of research evidence and did not use high-quality instruments to measure outcomes.

With this in mind, the rest of this post will examine the implications of the findings of the review for schools as they attempt to provide support and training to teachers in becoming better evidence-based practitioners.

What are the implications for evidence-based practice educational interventions within schools and other educational settings?

First, whereas in medicine there is a general understanding as to what is meant by evidence-based practice (Sackett, Rosenberg, et al., 1996), this is not the case in education. As Nelson and Campbell (2017) argue, there is little agreement over the precise meaning of the term, in large part because of a lack of consensus on a number of questions: are ‘research’ and ‘evidence’ one and the same (Nelson, 2014)? Are ‘evidence-based’ and ‘evidence-informed’ practices fundamentally different (McFarlane, 2015)? And, perhaps the most intensely debated, whose evidence counts? That said, as Professor Rob Coe stated at the February 2017 launch event of the Chartered College of Teaching, agreeing a definition of evidence-based/evidence-informed practice should be possible.

Second, in medicine there appears to be agreement about the five steps associated with being an evidence-based practitioner (Dawes, Summerskill, et al., 2005). The five steps are: translation of uncertainty into an answerable question; systematic retrieval of the best available evidence; critical appraisal of evidence for validity, clinical relevance and applicability; application of results in practice; and evaluation of performance. In education, by contrast, there is at most agreement that evidence-informed practice involves multiple sources of evidence and the deployment of professional judgement (Nelson and Campbell, 2017).

Third, given the nature of education, there are going to be real challenges for advocates of evidence-based practice in demonstrating an impact on pupil outcomes. As such, it makes sense to try to develop validated instruments which can be used to measure teachers’ knowledge, skills and attitudes towards EBP. The CREATE framework (Tilson, Kaplan, et al., 2011) provides guidance on both the assessment domains and the types of assessment. This framework could easily be amended for use in an educational context, as illustrated in Table 1 (based on Tilson, Kaplan, et al., 2011).

[Table 1: CREATE framework assessment domains and types of assessment, adapted for an educational context (after Tilson, Kaplan, et al., 2011)]

Fourth, given the time, effort and money being put into EBP educational interventions, not just in IEE/EEF Research Schools but in an increasing number of schools within England and across the world, perhaps attention should be given to developing guidelines on the reporting of EBP educational interventions, just as has been done in medicine with the GREET guidelines (Phillips, Lewis, et al., 2016). This is especially important as we know relatively little about the effective implementation of EBP educational interventions. If studies under-report the details of the intervention, it will be extremely difficult to bring together what has been learnt, how to make the most of successes, and how to avoid unnecessary failures.

Fifth, my own experience of EBP educational interventions would suggest that a great deal of emphasis is placed on accessing and interpreting research evidence, with insufficient attention given to the challenging process of assessing and aggregating differing sources of evidence, be that practitioner expertise, stakeholder views or school data.

And finally

I’ve always believed that success is a case of doing simple things well, or as Woody Allen says, ‘eighty percent of success is showing up’. Maybe in education we are not doing the simple things well, one of which is making the most of what has been learnt in other disciplines.

Abstract - Albarqouni, L., Hoffmann, T. and Glasziou, P. (2018). Evidence-Based Practice Educational Intervention Studies: A Systematic Review of What Is Taught and How It Is Measured. BMC medical education. 18. 1. 177.

Background: Despite the established interest in evidence-based practice (EBP) as a core competence for clinicians, evidence for how best to teach and evaluate EBP remains weak. We sought to systematically assess coverage of the five EBP steps, review the outcome domains measured, and assess the properties of the instruments used in studies evaluating EBP educational interventions.

Methods: We conducted a systematic review of controlled studies (i.e. studies with a separate control group) which had investigated the effect of EBP educational interventions. We used citation analysis technique and tracked the forward and backward citations of the index articles (i.e. the systematic reviews and primary studies included in an overview of the effect of EBP teaching) using Web of Science until May 2017. We extracted information on intervention content (grouped into the five EBP steps), and the outcome domains assessed. We also searched the literature for published reliability and validity data of the EBP instruments used.

Results: Of 1831 records identified, 302 full-text articles were screened, and 85 included. Of these, 46 (54%) studies were randomised trials, 51 (60%) included postgraduate level participants, and 63 (75%) taught medical professionals. EBP Step 3 (critical appraisal) was the most frequently taught step (63 studies; 74%). Only 10 (12%) of the studies taught content which addressed all five EBP steps. Of the 85 studies, 52 (61%) evaluated EBP skills, 39 (46%) knowledge, 35 (41%) attitudes, 19 (22%) behaviours, 15 (18%) self-efficacy, and 7 (8%) measured reactions to EBP teaching delivery. Of the 24 instruments used in the included studies, 6 were high-quality (achieved ≥3 types of established validity evidence) and these were used in 14 (29%) of the 52 studies that measured EBP skills; 14 (41%) of the 39 studies that measured EBP knowledge; and 8 (26%) of the 35 studies that measured EBP attitude. 

Conclusions: Most EBP educational interventions which have been evaluated in controlled studies focus on teaching only some of the EBP steps (predominantly critical appraisal of evidence) and did not use high-quality instruments to measure outcomes. Educational packages and instruments which address all EBP steps are needed to improve EBP teaching.

References

Albarqouni, L., Hoffmann, T. and Glasziou, P. (2018). Evidence-Based Practice Educational Intervention Studies: A Systematic Review of What Is Taught and How It Is Measured. BMC medical education. 18. 1. 177.

Dawes, M., Summerskill, W., Glasziou, P., Cartabellotta, A., Martin, J., Hopayian, K., Porzsolt, F., Burls, A. and Osborne, J. (2005). Sicily Statement on Evidence-Based Practice. BMC medical education. 5. 1. 1.

Nelson, J. and Campbell, C. (2017). Evidence-Informed Practice in Education: Meanings and Applications. Educational Research. 59. 2. 127-135.

Phillips, A. C., Lewis, L. K., McEvoy, M. P., Galipeau, J., Glasziou, P., Moher, D., Tilson, J. K. and Williams, M. T. (2016). Development and Validation of the Guideline for Reporting Evidence-Based Practice Educational Interventions and Teaching (GREET). BMC medical education. 16. 1. 237.

Sackett, D., Rosenberg, W., Gray, J., Haynes, R. and Richardson, W. (1996). Evidence Based Medicine: What It Is and What It Isn't. BMJ. 312. 7023. 71-72.

Tilson, J. K., Kaplan, S. L., Harris, J. L., Hutchinson, A., Ilic, D., Niederman, R., Potomkova, J. and Zwolsman, S. E. (2011). Sicily Statement on Classification and Development of Evidence-Based Practice Learning Assessment Tools. BMC medical education. 11. 1. 78.

 

Why instructional coaching may not be the answer for everyone.


Thursday 15 November sees the Teacher Development Trust hold a one-day conference on coaching and how it can be used to drive school performance. Unfortunately, as I am not able to attend the conference, I thought I’d use some time to do a bit of reading around the subject of coaching and instructional coaching. This seemed particularly sensible as @DrSamSims has recently described instructional coaching as the best-evidenced form of CPD. In doing so, I stumbled across the work of Jacobs, Boardman, et al. (2018), who undertook a research investigation to understand teacher resistance to instructional coaching. As such, the rest of this post will:

·      Offer a definition of instructional coaching

·      Provide the abstract of Jacobs et al.’s research.

·      Undertake a review of the research using the 6 A’s framework (see https://www.garyrjones.com/blog/2018/10/10/how-can-we-trust-research or http://www.cem.org/blog/how-can-we-trust-research/ )

·      Consider the implications for schools wishing to support colleagues’ professional learning and development.

Definition – Instructional Coaching

Put simply, instructional coaching involves a trained expert, be it an external coach, lead teacher or peer, working with teachers individually to help them learn and adopt new teaching practices and to provide feedback on performance. This is done with the intent of both supporting accurate and continued implementation of new teaching approaches and reducing the sense of isolation teachers can feel when implementing new ideas and practices.

Abstract

Research provides strong support for the promise of coaching, or job embedded professional development, particularly on improving teachers’ classroom instruction. As part of a comprehensive professional development model, 71 middle school (grades 6–8) science, social studies, and language arts teachers were assigned to an instructional coach to support their required use of a multicomponent reading comprehension approach, Collaborative Strategic Reading. In this study, we sought to better understand the factors that influence responsiveness to coaching, focusing in particular on teachers who appeared the least receptive to collaborating with a coach to support the implementation of a new practice. Results highlight the patterns and complexities of the coaching process for 20% of the teachers in our sample who were categorized as resistant to coaching, suggesting that the one-on-one model of coaching offered in this study may not be the best fit for all teachers.

 

Where's the evidence for evidence-based practice improving pupil outcomes?

A few weeks ago, in an online discussion with Dr David James, Deputy Head Academic at Bryanston School, David posed the following question: where is the evidence that evidence-based practice has a measurable impact on learning and outcomes? In other words, which schools can point to exam results and say they have improved because of evidence-informed practice? Put another way, where is the backing for the claim that schools and teachers should use evidence, and particularly research evidence, to inform practice? Otherwise, all we have is the assertion that the use of evidence is a good thing.

Unfortunately, at the moment there is relatively little, if any, evidence that the use of research evidence by teachers will improve pupil outcomes (Rose, Thomas, et al., 2017). However, this may change with the forthcoming EEF evaluation of the RISE project, which was run out of Huntington School and is due to be published in early 2019. Indeed, where evidence is available about the outcomes of teachers’ use of research evidence, it relates to the positive impact it has on teachers themselves (Cordingley, 2015; Supovitz, 2015). It is within this context that I read with interest a recently published systematic review (Simons, Zurynski, et al., 2018) on whether evidence-based medicine training improves doctors’ knowledge, practice and patient outcomes, which concludes: EBM training can improve short-term knowledge and skills among medical practitioners, although the evidence supporting this claim comes from relatively poor-quality evidence with a significant risk of bias. There is even less evidence supporting the claim that EBM training results in changes in clinicians’ attitudes and behavior, and no evidence to suggest that EBM training results in improved patient outcomes or experiences. There is a need to improve the quality of studies investigating the effectiveness of EBM training programs. (p5)

Now, if you are an advocate of evidence-based education this may appear quite depressing. If medicine, where evidence-based practice originated, has not been able to provide evidence-based medicine training to doctors which improves patient outcomes, then what chance do we have in education of being able to train teachers and leaders to use evidence-based practice to improve pupil outcomes? Well, my own view is that we may be able to learn lessons from evidence-based medicine which will then help us create the conditions for success within education. That is not a given: we need to learn the right lessons, adapt them in the right way for education, and then implement them in a way which allows significant adaptation within a local context. So, to start this process, I am going to take the practice points identified by Simons et al. (2018) and comment on their potential applicability within education and the implications they may have for the different ‘players’ within the education eco-system.

The Practice Points from the systematic review

The EBM practice landscape is changing with more emphasis on patient participation, including shared decision-making.

Most doctors benefit from EBM training that is integrated into their clinical practice, and where institutional support is evident.

Whilst EBM courses for doctors demonstrate short-term improvements in knowledge, there is no strong evidence linking EBM training to changes in clinical practice or patient outcomes.

It is important to investigate whether EBM training leads to improvements in doctors’ practice behaviors that may also facilitate changes in patient outcomes and experiences.

It may be possible to use reliable measures of clinical practice and patient experiences to evaluate EBM training, such as structured practice portfolios, patient experience surveys and multi source feedback. (p1)

Implications and discussion

First, given the challenges that medicine appears to be having in getting training for evidence-based medicine to work with doctors, maybe we should not be too surprised if our first efforts to provide training for teachers in evidence-based practice do not lead to improvements in pupil outcomes. This in turn may require us, in the short term, to reduce our expectations about what training teachers and leaders to use evidence-based practice can achieve.

Second, given the changes in the evidence-based medicine landscape and the increased focus on patient participation and informed decision-making, all those involved in evidence-based practice within schools may need to give consideration to the role of pupils, teachers, parents and other stakeholders in evidence-based decision-making.

Third, training designed to support the use of evidence-based practice within schools will need to be sustained. It’s highly unlikely that training provided in ITT or initial professional learning alone is going to ‘deliver’ evidence-based practice within schools. Rather, it is going to require an ongoing and sustained effort and cannot be just a short-term fad or this year’s priority. This is particularly important when considering both the impact of the EEF/IEE Research Schools programme and its future development, as it may be that the underpinning model needs radical re-modelling.

Fourth, if you are a school leader and want to encourage evidence-based practice within your school, then you need to make sure sufficient support is in place to build the capacity, motivation and opportunities necessary for evidence-based practice (Langer, Tripney, et al., 2016).

Fifth, given that EEF evaluations of interventions include both process and impact evaluations, it may be that medicine has much to learn from education about evaluations using multiple sources of evidence. On the other hand, Connolly, Keenan, et al. (2018) report that over 60% of randomised controlled trials within education tended to ignore both the context within which the intervention took place and the experience of participants.

And finally

It’s important to remember, when trying to evaluate the impact of any intervention on examination results, that around 97% of the variation in performance between year groups can be explained by changes in the cohort and how well individuals do ‘on the day’ (Crawford and Benton, 2017). So the impact on examination results of teachers being trained in evidence-based practice is likely to be relatively small. Indeed, it’s not enough to look at one year’s examination results; results will need to be reviewed and evaluated over a number of years.

References

Connolly, P., Keenan, C. and Urbanska, K. (2018). The Trials of Evidence-Based Practice in Education: A Systematic Review of Randomised Controlled Trials in Education Research 1980–2016. Educational Research.

Cordingley, P. (2015). The Contribution of Research to Teachers’ Professional Learning and Development. Oxford Review of Education. 41. 2. 234-252.

Crawford, C. and Benton, T. (2017). Volatility Happens: Understanding Variation in Schools’ GCSE Results: Cambridge Assessment Research Report. Cambridge, UK.

Langer, L., Tripney, J. and Gough, D. (2016). The Science of Using Science: Researching the Use of Research Evidence in Decision-Making. London. EPPI-Centre, Social Science Research Unit, UCL Institute of Education, University College London.

Rose, J., Thomas, S., Zhang, L., Edwards, A., Augero, A. and Roney, P. (2017). Research Learning Communities: Evaluation Report and Executive Summary December 2017. London.

Simons, M. R., Zurynski, Y., Cullis, J., Morgan, M. K. and Davidson, A. S. (2018). Does Evidence-Based Medicine Training Improve Doctors’ Knowledge, Practice and Patient Outcomes? A Systematic Review of the Evidence. Medical teacher. 1-7.

Supovitz, J. (2015). Teacher Data Use for Improving Teaching and Learning. In Brown, C.  Leading the Use of Research & Evidence in Schools.  London. Bloomsbury Press.

When calls for silence lead to shouting

Over the last few days my Twitter timeline has been inundated with Tweets about the rights and wrongs of pupils being silent in corridors. Now, one of the problems with Twitter and Tweets is that they often do not provide subtlety and nuance, and the Tweets about ‘silence’ have ironically led to many of what can only be described as ‘shouty’ Tweets. So this post, which will not take sides on the issue of ‘silence’, will look at the method developed by the philosopher Stephen Toulmin for analysing arguments. Toulmin’s method potentially works extremely well where there are no clear truths or absolute solutions to a problem, so it is likely to work well as a structure for analysing the arguments for and against ‘silence’. The rest of this post will seek to provide:

·      An outline of Toulmin’s structure of an argument;

·      An application of Toulmin’s structure to ‘silence’ between classrooms;

·      A discussion around the use of Toulmin’s structure within schools.

Toulmin’s structure of an argument

  • The claim (C) or conclusion, i.e. the proposition or statement of opinion that the author is asking to be accepted

  • The facts or grounds (G) we appeal to as the basis for C, also called data; in other words, the specific facts relied on to support a claim

  • The warrant (W), which links the grounds to the claim: the general rule that allows us to infer a claim and gives us permission to go from G to C

  • Behind our warrant will be backing (B) – which is the body of experience and evidence that supports the warrant

  • The qualifier (Q), a word or phrase which indicates the strength conferred on the inference from the grounds to the claim; in other words, the strength of the support for the claim

  • Rebuttals (R) – these are extraordinary or exceptional circumstances that would undermine the supporting grounds

Example: Teachers should make greater use of research evidence

·      Claim: Teachers should make greater use of research evidence of ‘what works’ when planning teaching and learning

·      Grounds: Teachers make little use of research evidence of ‘what works’ when planning teaching and learning; recent research states that only 23% of teachers use the EEF’s Teaching and Learning Toolkit https://www.suttontrust.com/research-paper/best-in-class-2018-research/

·      Warrant: Some teaching strategies and techniques bring about greater increases in learning than other teaching strategies

·      Backing: The best available evidence from systematic reviews, meta-analyses and meta-meta-analyses.

·      Qualifier: Presumably teachers will have the skills and knowledge to use the research-backed strategies.

·      Rebuttal: However, not all students are alike; some students may not benefit from the approach. In addition, the resources needed for successful implementation are not always available.
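
If it helps to see the structure laid out more formally, here is a minimal sketch in Python which captures the six components as a simple template and fills it in with the worked example above. The class, field and variable names are simply my own labels for Toulmin’s components rather than anything from the literature, so treat it as an illustration only.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ToulminArgument:
    """A single argument broken down into Toulmin's six components."""
    claim: str        # C - the proposition the author wants accepted
    grounds: str      # G - the facts/data appealed to as the basis for C
    warrant: str      # W - the general rule licensing the move from G to C
    backing: str      # B - the experience and evidence that supports W
    qualifier: str    # Q - the strength conferred on the inference from G to C
    rebuttals: List[str] = field(default_factory=list)  # R - circumstances that would undermine the argument

    def summary(self) -> str:
        """Return the argument laid out one component per line."""
        lines = [
            f"Claim:     {self.claim}",
            f"Grounds:   {self.grounds}",
            f"Warrant:   {self.warrant}",
            f"Backing:   {self.backing}",
            f"Qualifier: {self.qualifier}",
        ]
        lines += [f"Rebuttal:  {r}" for r in self.rebuttals]
        return "\n".join(lines)


# The worked example above, expressed in the same structure
research_use = ToulminArgument(
    claim="Teachers should make greater use of research evidence of 'what works'.",
    grounds="Only 23% of teachers report using the EEF Teaching and Learning Toolkit.",
    warrant="Some teaching strategies bring about greater increases in learning than others.",
    backing="Systematic reviews, meta-analyses and meta-meta-analyses.",
    qualifier="Presumably (teachers have the skills and knowledge to use the strategies).",
    rebuttals=[
        "Not all students are alike; some may not benefit from the approach.",
        "The resources needed for successful implementation are not always available.",
    ],
)

if __name__ == "__main__":
    print(research_use.summary())
```

The value of the exercise is less in the code than in the discipline of having to fill in every field, and noticing which ones you cannot.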

Silence between classrooms

So let’s try and use this structure to help us construct and understand the arguments for and against ‘silence’ between classrooms.

Silence

  • Claim - Pupils should move silently between classrooms

  • Grounds - Many pupils are bullied when moving between classrooms

  • Warrant - Pupils have a right to move between classrooms without being bullied

  • Backing - Personal experience

  • Qualifier - Presumably

  • Rebuttal - There may be occasions where it is appropriate for pupils to be talking when moving between classrooms

Non-silence

  • Claim - Pupils should have the opportunity to speak to one another when moving between classrooms

  • Grounds - The vast majority of pupils behave appropriately when moving between classrooms

  • Warrant - We need to demonstrate to  pupils that we trust them to behave in an appropriate manner

  • Backing - Personal experience

  • Qualifier - Presumably

  • Rebuttal - There may be occasions where it is appropriate for pupils not to talk when moving between classrooms  

Now I need to stress two things. First, these are not the only arguments for and against ‘silent’ movement between classrooms; rather, they should be seen as attempts to show how Toulmin’s structure could work for both sides of an argument. Second, the examples create the impression that there is a binary divide between those for and against ‘silent movement’; that is not the intent.

Implications of using Toulmin’s structure for analysing arguments

  • It’s worth spending some time understanding the Toulminian structure of arguments as it will help you articulate your own arguments more clearly.

  • Using Toulmin’s structure will make it easier for you to display the first of Rapoport’s rules for disagreeing, i.e. attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.” Dennett (2013)

  • This is not the only way to think about the structure of arguments - see Cartwright and Hardie (2012).

So where does this leave us when it comes to ‘silence between classrooms’?

I must admit that when I began writing this post, the very process of going through the Toulminian structure made me ‘think’ and then ‘think’ again. In particular, it made me realise how many discussions on Twitter don’t involve arguments but rather competing claims, which form only a small part of an argument. At the very least an argument requires grounds, a claim and evidence, whereas most Tweets or series of Tweets that I see make little or no reference to the ‘grounds/evidence/data’ on which the claim is based. That said, since I started writing this post I came across a blog from @ClareSealy https://primarytimery.com/2018/10/23/corridors/amp/?__twitter_impression=true which clearly articulates the grounds/evidence/data for ‘silent movement’ between classrooms. On the other hand, you may wish to have a look at https://www.telegraph.co.uk/news/2018/08/23/school-banned-talking-corridors-sees-10-per-cent-increase-results/ or https://suecowley.wordpress.com for the alternative view.

And finally

If you are interested in using the Toulminian structure, I suggest that you have a look at either Kvernbekk (2013) or Kvernbekk (2016). Alternatively, you may wish to have a read of Jenicek and Hitchcock (2005). In addition, there are plenty of ‘Toulmin’ resources available on the Internet, although, as always, not all of the material is of the same quality.

References

Cartwright, N. and Hardie, J. (2012). Evidence-Based Policy: A Practical Guide to Doing It Better. Oxford. Oxford University Press.

Dennett, D. (2013). Intuition Pumps and Other Tools for Thinking. London. Allen Lane.

Jenicek, M. and Hitchcock, D. (2005). Evidence-Based Practice: Logic and Critical Thinking in Medicine. United States of America. American Medical Association Press

Kvernbekk, T. (2013). Evidence-Based Practice: On the Function of Evidence in Practical Reasoning. Studier i Pædagogisk Filosofi. 2. 2. 19-33.

Kvernbekk, T. (2016). Evidence-Based Practice in Education: Functions of Evidence and Causal Presuppositions. London. Routledge.

PS This post was amended on Thursday 25 October when I deleted the name of the school whose approach to ‘silent corridors’ set off the Twitterstorm.  

 

 

 

Performance Management - what does the evidence say?

This week has seen the TES publish two articles on performance management. One article was by Joe Baron, a pseudonym for a teacher of history, who felt he had been a victim of performance management and was being held accountable for results beyond his control. The second article was by Rebecca Foster, who identified five ways in which performance management could be improved: be clear about the goal; make sure it’s a process ‘done with’; think carefully about how targets can be met; be clear about when things should happen; and don’t set meaningless targets. As such, there is little doubt that performance management can be both distressing for appraisees and difficult for appraisers to get right. Indeed, given some of these difficulties, there are an increasing number of reports that in the world of business the annual performance management cycle is being abandoned by many organisations, including many deemed to be ‘world class’ (Cappelli and Tavis, 2016).

So with this in mind, it seems sensible for the evidence-based school leader to look at the research evidence on effective performance management. To help do this, I will turn to Gifford (2016), who, based on rapid evidence assessments produced by the Center for Evidence-Based Management, has written a report on what works in performance management. In doing so, I will focus on five issues: first, what evidence base was used to inform the rapid evidence assessments; second, what we mean by the term ‘performance management’; third, what works in goal setting (and what doesn’t); fourth, what works in performance appraisals (and what doesn’t); and fifth, the implications for colleagues who have the ability to influence the design and implementation of performance management systems within their schools.

The evidence-base

·      Two rapid evidence assessments carried out by the Center for Evidence-Based Management, which included:

o   On goal setting - 34 meta-analyses and 19 single studies

o   On performance management - 23 meta-analyses and 37 single studies.

A definition of performance management

One of the problems with discussing what works in respect of performance management is that there is no agreed or definitive definition of performance management. Gifford notes that performance management is viewed as an activity which:

·      Establishes objectives

·      Improves performance

·      Holds people to account

What works in goal setting (and what doesn’t)?

·      Challenging, clear and specific goals work well for relatively straightforward tasks, i.e. those which are familiar and predictable

·      Challenging, clear and specific goals tend to work less well on complex tasks and have a negative impact on performance.

·      For complex tasks, what tends to work are more general ‘do your best’ outcome goals; the research suggests this is because ‘do-your-best’ goals encourage people to think about task-relevant ways to achieve their goals, whereas specific, challenging goals lead people to focus on the potential negative consequences of failure.

·      It is necessary to distinguish between outcome goals and behavioural and learning goals. Behavioural and learning goals are the most effective way of driving performance for as long as it takes people to master the relevant set of skills

·      Short-term goals tend to help when employees are learning new skills or at an early developmental stage of their careers

·      Internal or self-set goals tend to work no better than external or assigned goals. The power of externally set goals comes from external expectations, which are more motivating

·      Individuals who have a learning orientation (process) respond better to goals than people who have a performance orientation (outcomes)

·      People who view themselves in terms of their own personal ability, preferences or values gravitate towards individual goals. People who view themselves primarily in terms of their relationships with others gravitate towards team or group goals.

·      Providing people with feedback on how they are doing against their goals increases the chances of those goals being reached.  

What works in performance appraisals (and what doesn’t)?

·      It’s people’s reactions to the feedback, not the feedback itself, that matter (see https://evidencebasededucationalleadership.blogspot.com/2014/05/how-to-get-better-at-receiving-feedback.html)

·      It makes sense to check in with staff following an appraisal to find out how the ‘feedback’ has landed and whether there are any issues which need to be addressed

·      People want fairness and procedural justice (see https://www.garyrjones.com/blog/2018/04/teacher-retention-does-answer-lie-with.html)

·      People should ideally not self-assess their performance but instead get evidence about their performance from other sources, and it does not matter how this is done.

·      Appraisal conversations which are genuinely two-way lead to individuals responding more favourably

·      The quality of the relationship between the appraiser and the appraisee influences whether appraisals lead to better performance or not.

·      There is some evidence that it is better to focus on building on strengths rather than fixing weaknesses, i.e. focus on the positive rather than the negative

·      Personality variables moderate employees’ reactions to feedback, especially negative feedback

Implications for the design and implementation of performance management systems within schools?

·      It might be worth auditing elements of your current performance management system against the evidence-based findings – particularly the use of SMART objectives for complex tasks

·      Where there is a misalignment between your current system and the summary of ‘evidence’ above, have a look at the research evidence to see whether there are any ‘nuances’ in the research which you need to be aware of.

·      Remember – just because something worked ‘somewhere’ or appears to work ‘widely’ does not mean it will automatically work in your setting.

·      How much time are you spending with colleagues to help them improve on how they receive and act on feedback?

·      Are you trying to combine accountability and development within the same system? If so, there’s a good chance that you will ‘fall between two stools’, with your system failing to meet the needs of the various interested parties

·      Evidence-based practice is not limited to teaching and learning but extends to all aspects of the work of the school or trust.

·      To what extent was research evidence used when designing the current performance management system?

·      Are there other school systems/processes which would benefit from an ‘evidence-informed’ review/audit?

And finally

If you are interested in finding out more about how to become an evidence-based manager, I’d recommend that you have a look at two recently published books: Latham (2018) and Barends and Rousseau (2018). Alternatively, you may wish to have a look at my own recently published book on evidence-based school leadership and management (Jones, 2018).

References

Barends, E. and Rousseau, D. (2018). Evidence-Based Management: How to Use Evidence to Make Better Organizational Decisions. London. Kogan Page.

Cappelli, P. and Tavis, A. (2016). The Performance Management Revolution. Harvard business review. 94. 10. 58-67.

Gifford, J. (2016). Could Do Better? Assessing What Works in Performance Management. Research Report. London. Chartered Institute of Personnel and Development.

Jones, G. (2018). Evidence-Based School Leadership and Management: A Practical Guide. London. SAGE Publishing.

Latham, G. (2018). Becoming the Evidence-Based Manager (Second Edition). London. Nicholas Brealey Publishing.