When calls for silence lead to shouting

Over the last few days my Twitter timeline has been inundated with Tweets about the rights and wrongs of pupils being silent in corridors. Now one of the problems with Twitter is that Tweets often do not provide subtlety and nuance – and the Tweets about ‘silence’ have ironically led to many of what can only be described as ‘shouty’ Tweets.  So this post – which will not take sides on the issue of ‘silence’ – will look at the method developed by philosopher Stephen Toulmin for analysing arguments.  Indeed, Toulmin’s method works extremely well where there are no clear truths or absolute solutions to a problem, so it is likely to work well as a structure for analysing the arguments for and against ‘silence’.  The rest of this post will seek to provide:

·      An outline of Toulmin’s structure of an argument;

·      An application of Toulmin’s structure to ‘silence’ between classrooms;

·      A discussion around the use of Toulmin’s structure within schools.

Toulmin’s structure of an argument

  • The claim (C) or conclusion, i.e. the proposition or statement of opinion that the author is asking to be accepted

  • The facts or grounds (G) we appeal to as the basis for C, also called data – in other words, the specific facts relied on to support a claim

  • The warrant (W) – what links the grounds to the claim - which is the general rule that allows us to infer a claim and gives us permission to go from G to C

  • Behind our warrant will be backing (B) – which is the body of experience and evidence that supports the warrant

  • The qualifier (Q) – a word or phrase which indicates the strength conferred on the inference from the grounds to the claim – in other words, the strength of the support for the claim.

  • Rebuttals (R) – these are extraordinary or exceptional circumstances that would undermine the supporting grounds
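For readers who find it helpful to see a framework laid out as a data structure, the six components above can be sketched in code. This is purely an illustrative sketch – the class name, field names and `summary` method are my own, not part of Toulmin's work:

```python
from dataclasses import dataclass, field

@dataclass
class ToulminArgument:
    """A minimal representation of Toulmin's six argument components."""
    claim: str       # C: the proposition the author asks us to accept
    grounds: str     # G: the specific facts relied on to support the claim
    warrant: str     # W: the general rule licensing the step from G to C
    backing: str     # B: the experience/evidence that supports the warrant
    qualifier: str   # Q: the strength of the inference (e.g. "presumably")
    rebuttals: list = field(default_factory=list)  # R: exceptional circumstances

    def summary(self) -> str:
        # One-line restatement: qualifier, claim, and the grounds behind it
        return f"{self.qualifier}, {self.claim} (because {self.grounds})"

# Example: the 'silence' argument analysed later in this post
silence = ToulminArgument(
    claim="pupils should move silently between classrooms",
    grounds="many pupils are bullied when moving between classrooms",
    warrant="pupils have a right to move between classrooms without being bullied",
    backing="personal experience",
    qualifier="Presumably",
    rebuttals=["occasions where it is appropriate for pupils to talk"],
)
print(silence.summary())
```

Writing an argument out this way makes it immediately obvious when a component – most often the warrant or the backing – is missing.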

Example: Teachers should make greater use of research evidence

·      Claim: Teachers should make greater use of research evidence of ‘what works’ when planning teaching and learning

·      Grounds: Teachers make little use of research evidence of ‘what works’ when planning teaching and learning – recent research states that only 23% of teachers use the EEF’s Teaching and Learning Toolkit https://www.suttontrust.com/research-paper/best-in-class-2018-research/

·      Warrant: Some teaching strategies and techniques bring about greater increases in learning than others

·      Backing: The best available evidence from systematic reviews, meta-analyses and meta-meta-analyses

·      Qualifier: Presumably – teachers will have the skills and knowledge to use the research-backed strategies

·      Rebuttal: However, not all students are alike, and some students may not benefit from the approach.  In addition, the resources needed for successful implementation are not always available.

Silence between classrooms

So let’s try to use this structure to help us construct and understand the arguments for and against ‘silence’ between classrooms.

Silence

  • Claim - Pupils should move silently between classrooms

  • Grounds - Many pupils are bullied when moving between classrooms

  • Warrant - Pupils have a right to move between classrooms without being bullied

  • Backing - Personal experience

  • Qualifier - Presumably

  • Rebuttal - There may be occasions where it is appropriate for pupils to be talking when moving between classrooms

Non-silence

  • Claim - Pupils should have the opportunity to speak to one another when moving between classrooms

  • Grounds - The vast majority of pupils behave appropriately when moving between classrooms

  • Warrant - We need to demonstrate to pupils that we trust them to behave in an appropriate manner

  • Backing - Personal experience

  • Qualifier - Presumably

  • Rebuttal - There may be occasions where it is appropriate for pupils not to talk when moving between classrooms  

Now I need to stress two things.  First, these are not the only arguments for and against ‘silent’ movement between classrooms – rather, they should be seen as attempts to show how Toulmin’s structure could work for both sides of an argument.  Second, the examples create the impression that there is a binary divide between for and against ‘silent movement’ – that is not the intent.

Implications of using Toulmin’s structure for analysing arguments

  • It’s worth spending some time understanding the Toulminian structure of arguments as it will help you articulate your own arguments more clearly.

·      Using Toulmin’s structure will make it easier for you to display the first of Rapoport’s rules for disagreeing, i.e. attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.” Dennett (2013)

  • This is not the only way to think about the structure of arguments - see Cartwright and Hardie (2012).

So where does this leave us when it comes to ‘silence between classrooms’?

I must admit that when I began writing this post, the very process of going through the Toulminian structure made me ‘think’ and then ‘think’ again.  In particular, it made me realise how many discussions on Twitter don’t involve arguments but rather competing claims, which form only a small part of an argument.  At the very least an argument requires grounds, a claim and evidence – whereas most Tweets or series of Tweets make little or no reference to the ‘grounds/evidence/data’ on which the claim is based.   That said, since I started writing this post I came across a blog from @ClareSealy https://primarytimery.com/2018/10/23/corridors/amp/?__twitter_impression=true which clearly articulates the grounds/evidence/data for ‘silent movement’ between classrooms.  On the other hand, you may wish to have a look at https://www.telegraph.co.uk/news/2018/08/23/school-banned-talking-corridors-sees-10-per-cent-increase-results/ or https://suecowley.wordpress.com for the alternative view.

And finally

If you are interested in using the Toulminian structure, I suggest that you have a look at either Kvernbekk (2013) or Kvernbekk (2016).  Alternatively, you may wish to have a read of Jenicek and Hitchcock (2005).  In addition, there are plenty of ‘Toulmin’ resources available on the Internet – although, as always, not all of the material is of the same quality.

References

Cartwright, N. and Hardie, J. (2012). Evidence-Based Policy: A Practical Guide to Doing It Better. Oxford. Oxford University Press.

Dennett, D. (2013). Intuition Pumps and Other Tools for Thinking. London. Allen Lane.

Jenicek, M. and Hitchcock, D. (2005). Evidence-Based Practice: Logic and Critical Thinking in Medicine. United States of America. American Medical Association Press.

Kvernbekk, T. (2013). Evidence-Based Practice: On the Function of Evidence in Practical Reasoning. Studier i Pædagogisk Filosofi. 2. 2. 19-33.

Kvernbekk, T. (2016). Evidence-Based Practice in Education: Functions of Evidence and Causal Presuppositions. London. Routledge.

PS This post was amended on Thursday 25 October when I deleted the name of the school whose approach to ‘silent corridors’ set off the Twitterstorm.  

 

 

 

Performance Management - what does the evidence say?

This week has seen the TES publish two articles on performance management.  One article was by Joe Baron – a pseudonym for a teacher of history – who felt he had been a victim of performance management and was being held accountable for results beyond his control.  The second article was by Rebecca Foster, identifying five ways in which performance management could be improved: be clear about the goal; make sure it’s a process ‘done with’; think carefully about how targets can be met; be clear about when things should happen; and don’t set meaningless targets.  As such, there is little doubt that performance management can be both distressing for appraisees and difficult for appraisers to get right.  Indeed, given some of these difficulties, there are an increasing number of reports that in the world of business the annual performance management cycle is being abandoned by many organisations, including many deemed to be ‘world class’ – Cappelli and Tavis (2016).

So with this in mind, it seems sensible for the evidence-based school leader to look at the research evidence on effective performance management.  To help do this, I will turn to Gifford (2016), who – based on rapid evidence assessments produced by the Center for Evidence-Based Management – has written a report on what works in performance management.   In doing so, I will focus on five issues: first, what evidence-base was used to inform the rapid evidence assessments; second, what we mean by the term ‘performance management’; third, what works in goal setting (and what doesn’t); fourth, what works in performance appraisals (and what doesn’t); and fifth, the implications for colleagues who have the ability to influence the design and implementation of performance management systems within their schools.

The evidence-base

·      Two rapid evidence assessments carried out by the Center for Evidence-Based Management, which included:

o   On goal setting - 34 meta-analyses and 19 single studies

o   On performance management - 23 meta-analyses and 37 single studies.

A definition of performance management

One of the problems with discussing what works in respect of performance management is that there is no agreed or definitive definition of performance management.   Gifford notes that performance management is viewed as an activity that:

·      Establishes objectives

·      Improves performance

·      Holds people to account

What works in goal setting (and what doesn’t)?

·      Challenging, clear and specific goals work well for relatively straightforward tasks, i.e. those which are familiar and predictable

·      Challenging, clear and specific goals tend to work less well on complex tasks and can have a negative impact on performance.

·      In complex tasks what tends to work are more general ‘do your best’ outcome goals – the research suggests this is because ‘do-your-best’ goals encourage people to think about task-relevant ways to achieve their goals, whereas specific challenging goals lead people to focus on the potential negative consequences of failure.

·      It is necessary to distinguish between outcome goals and behavioural and learning goals. Behavioural and learning goals are the most effective way of driving performance for as long as it takes people to master that set of skills

·      Short-term goals tend to help when employees are learning new skills or at an early developmental stage of their careers

·      Internal or self-set goals tend to work no better than external or assigned goals.  The power of externally set goals comes from the external expectations, which are more motivating

·      Individuals who have a learning orientation (process) respond better to goals than people who have a performance orientation (outcomes)

·      People who view themselves in terms of their own personal ability, preferences or values gravitate towards individual goals; people who view themselves primarily in terms of their relationships with others gravitate towards team or group goals.

·      Providing people with feedback on how they are doing against their goals increases the chances of those goals being reached.  

What works in performance appraisals (and what doesn’t)?

·      It’s people’s reactions to the feedback not the feedback itself that matters (see https://evidencebasededucationalleadership.blogspot.com/2014/05/how-to-get-better-at-receiving-feedback.html)

·      It makes sense to check in with staff following an appraisal to find out how the feedback has landed and whether there are any issues which need to be addressed

·      People want fairness and procedural justice (see https://www.garyrjones.com/blog/2018/04/teacher-retention-does-answer-lie-with.html)

·      People should ideally not self-assess their performance but instead get evidence about their performance from other sources – and it does not matter how this is done.

·      Appraisal conversations which are genuinely two-way lead to individuals responding more favourably

·      The quality of the relationship between the appraiser and the appraisee influences whether appraisals lead to better performance or not.

·      There is some evidence that it is better to focus on building on strengths rather than fixing weaknesses, i.e. focus on the positive rather than the negative

·      Personality variables moderate employees’ reactions to feedback, especially negative feedback

Implications for the design and implementation of performance management systems within schools?

·      It might be worth auditing elements of your current performance management system against the evidence-based findings – particularly the use of SMART objectives for complex tasks

·      Where there is a misalignment between your current system and the summary of ‘evidence’, have a look at the research evidence to see whether there are any ‘nuances’ in the research which you need to be aware of.

·      Remember – just because something worked ‘somewhere’ or appears to work ‘widely’ does not mean it will automatically work in your setting.

·      How much time are you spending with colleagues to help them improve on how they receive and act on feedback?

·      Are you trying to combine accountability and development within the same system? If so, there’s a good chance that you will ‘fall between two stools’, with your system failing to meet the needs of the various interested parties

·      Evidence-based practice is not limited to teaching and learning but extends to all aspects of the work of the school or trust.

·      To what extent was research evidence used when designing the current performance management system?

·      Are there other school systems/processes which would benefit from an ‘evidence-informed’ review/audit?

And finally

If you are interested in finding out more about how to become an evidence-based manager, I’d recommend that you have a look at two recently published books – Latham (2018) and Barends and Rousseau (2018).  Alternatively, you may wish to have a look at my own recently published book on evidence-based school leadership and management – Jones (2018).

References

Barends, E. and Rousseau, D. (2018). Evidence-Based Management: How to Use Evidence to Make Better Organizational Decisions. London. Kogan Page.

Cappelli, P. and Tavis, A. (2016). The Performance Management Revolution. Harvard Business Review. 94. 10. 58-67.

Gifford, J. (2016). Could Do Better? Assessing What Works in Performance Management. Research Report. London. Chartered Institute of Personnel and Development.

Jones, G. (2018). Evidence-Based School Leadership and Management: A Practical Guide. London. SAGE Publishing.

Latham, G. (2018). Becoming the Evidence-Based Manager (Second Edition). London. Nicholas Brealey Publishing.

Supporting evidence-based practice: Who should be doing what?

Being a teacher is hard enough, without spending unnecessary amounts of time on practices that just don’t make any real kind of an impact.  It’s not enough for an intervention or practice to have merit, it needs to be ‘worth it’.

The focus on the use of research evidence within schools is on the increase. So how can we make sure that our approach to using evidence has the impact we want it to, and that the use of evidence is supported at all levels within the education eco-system?

In this post, drawing upon my recently published book, Evidence-Based School Leadership and Management: A Practical Guide, I examine how different roles can contribute to evidence-based practice within schools.

This post was originally published on http://www.cem.org/blog/evidence-based-education-who-should-be-doing-what/ on Wednesday, 17 October, 2018

How can we trust research?

A major challenge for anyone interested in the use of research within both schools and the wider education system is to try and make some kind of judgment about the trustworthiness and quality of research.

Indeed, this is a real problem, as it is highly unlikely that teachers, heads of department, or school leaders will have the required level of expertise to critically examine the claims that are being made.

So, to help get around this problem of a lack of expertise, and to increase the chances of being able to judge the usefulness of research, in my new book, Evidence-Based School Leadership and Management, I have looked at a framework which provides a useful scaffold to help you make efficient and effective use of research evidence in bringing about improvements within your classroom, school or educational setting:

The 6 A’s of the usefulness of research evidence 

Professor Steve Higgins, of Durham University, developed the '6 A's of the usefulness of research' framework by asking a number of questions: 

Is the research you are attempting to use: 

  • Accurate

  • Accessible

  • Applicable

  • Acceptable

  • Appropriate

  • Actionable

I have subsequently worked with Professor Higgins to develop a more fleshed-out version of the 6 A’s, which draws upon both the TAPUPAS framework developed by Pawson, Boaz, et al. (2003) and Critical Reading and Writing for Postgraduates (Third Edition) by Wallace and Wray (2016) and which is illustrated in the following table.

This post was first published on Centre for Evaluation and Monitoring blog on 9 October, 2018

Is Twitter your best source of CPD?

In my former life as a teacher, I all too often experienced the following: I would do what I hoped was a really interesting lesson which explored deeply profound truths, mixed in with the odd ‘throw away’ comment to lighten the mood.  Unfortunately, my pupils would completely forget the profound truths but remember the glib, throw-away comment. Despite my best efforts to learn from my experience so as not to make the same mistake in my new role as occasional speaker and blogger, similar experiences seem to be following me around. 

Recently I was delivering a keynote on the principles of evidence-based practice and I made an off-the-cuff remark along the lines of ‘if Twitter is your best CPD, get help.’ This comment was soon ‘out there’ on social media and was being retweeted, with some commentators taking offence at its tone. Now, I normally go out of my way to try and avoid making inflammatory comments on Twitter and, more importantly, I like to support whatever comments I make with some form of evidence – so I felt I had let myself down.  I therefore welcomed with open arms the recent publication of a systematic review on formally organised and informally developed professional learning groups by Lantz-Andersson, Lundin, et al. (2018), which I hope will now allow me to make some informed comment on the use of Twitter and teacher CPD. 

………………..

This blog was first published on the TDT website on Tuesday 25 September, 2018