Boss competence and teacher well-being

In this post I will be looking at the work of Artz, Goodall and Oswald (2017) and the relationship between 'boss competence and worker well-being'.  The relationship between boss competence and teacher well-being is particularly relevant to schools, given concerns about teacher workload, stress levels and well-being, and the number of teachers leaving the profession.  I'll then go on to explore some of the possible implications of the research for schools and school leaders.  Finally, I'll undertake a structured critique of Artz et al.'s research.

Boss competence and worker well-being: a brief summary of the research.

Artz, Goodall and Oswald (2017) state that:
  • Nearly all workers have a supervisor or “boss.” 
  • Little is known about how bosses influence the quality of employees’ lives.
  • A boss’s technical competence is the single strongest predictor of a worker’s job satisfaction.
  • If a worker stays in the same job and workplace, a rise in the competence of a supervisor is associated with an improvement in the worker’s well-being.
  • In a cross-section of 6000 young U.S. workers, the job satisfaction of employees is positively associated with whether the supervisor worked his or her way up within the company (or started the company).
  • In a cross-section of 1600 British workers, satisfaction levels are higher among individuals whose supervisor could if necessary step in competently to do that job.
  • In pooled cross-sections totalling 27,000 individuals, workers’ job satisfaction is highly correlated with the competence of supervisors.
  • These results support the claim that both competence – linked to expert knowledge – and industry experience improve workers’ job satisfaction.
Some possible headline implications for schools

There are a number of 'first-blush' implications for the leadership and management of schools which members of a school community may choose to draw from these research findings.  For example:
  • If you want to increase the well-being of staff, increase the competence of ‘bosses’ and line managers.
  • If you want to increase job satisfaction, then internal appointments – with individuals working their way up through the school – may increase the job satisfaction of those they supervise.
  • Senior staff within a school should keep their ‘hand in’ as teachers to ensure they can competently cover for absent teaching colleagues.
  • If you want to increase teacher well-being, appoint leaders who have a background in education and schools rather than someone with generic leadership and management experience.
However, before jumping to these conclusions it is necessary to look at Artz et al.'s research in more detail to see whether it is useful for schools and school leaders.  To help me do this, I’m going to use Professor Steve Higgins' 6 A's model for effective research use: accessibility, accuracy, appropriate, acceptable, applicable, and actionable.

The 6 A’s

Accessibility – given the very high-level maths involved in the paper, the research is not easily intellectually accessible to school leaders and others interested in teacher well-being.

Accuracy – again, this is extremely difficult for the lay reader to judge.  However, the authors do identify some significant limitations in the report: for example, what is meant by boss competence is not clear; there is no reliable and valid instrument to measure boss competence; many of the measures used for boss competence were highly subjective; and insufficient attention was paid to external factors that may be influencing both perceptions of boss competence and ‘well-being.’

Appropriate – although multiple sources of evidence were used, none of the evidence used appeared to be generated from research into schools and other similar environments. 

Acceptable – the findings of the research would appear, at first sight, to be broadly consistent with teachers' values and beliefs – i.e. that senior leaders must still be competent in the classroom.

Applicable – the research is relevant to schools given concerns about both teaching staff retention and well-being.

Actionable – the research does not appear to meet Argyris's (2000) criteria for actionable advice, in that it does not ‘specify the detailed, concrete behaviours required to achieve the intended consequences; it must be crafted in the form of designs that contain causal statements; people must have, or be able to be taught, the concepts and the skills required to implement those causal statements; and the context in which it is to be implemented does not prevent its implementation’ (p. 8).

So what can we make of the research and the implications for schools?

Nevertheless, for me, the main value of Artz et al. (2017) is that it directed my attention to a topic known as ‘expert leadership’ and the work of Goodall and Bäker (2015). One of the key questions ‘expert leadership’ seeks to explore is whether experts and professionals – such as teachers and headteachers – need to be led by other experts and professionals: those who have a deep understanding of, and high ability in, the core business of their organization.  In future posts I will examine the notion of ‘expert leadership’ and its implications for schools in more detail.

References

Argyris, C. (2000). Flawed Advice and the Management Trap: How Managers Can Know When They're Getting Good Advice and When They're Not. Oxford: Oxford University Press.
Artz, B. M., Goodall, A. H. and Oswald, A. J. (2017). Boss Competence and Worker Well-Being. ILR Review, 70(2), 419-450.
Goodall, A. H. and Bäker, A. (2015). A Theory Exploring How Expert Leaders Influence Performance in Knowledge-Intensive Organizations. In Incentives and Performance. Springer.




Teaching Staff Turnover and Employee Engagement

During this week's #UKEDResChat discussion a number of Tweets mentioned concerns about levels of teaching staff turnover and how to go about creating reliable and valid measures of staff satisfaction and engagement.  So with that in mind, I thought I'd have a look at Bamford and Worth (2017) and their report for the NFER on the reasons why teachers leave the teaching profession.  I will then focus on one recommendation of the report – the need for schools to measure job satisfaction and engagement and intervene – to show how that might be easier said than done.

Why do teachers leave the teaching profession?

Drawing on data collected from 40,000 households as part of the Understanding Society longitudinal study, Bamford and Worth (2017) found the following.

  • More than half of non-retiring teachers who leave remain working in the education sector.
  • Teachers do not leave for higher-paid jobs: overall pay decreases, but hourly wages stay the same.
  • Leavers' working hours decrease and many secondary leavers take up part-time positions.
  • Leavers' job satisfaction and subjective well-being improve after leaving. 

Bamford and Worth then go on to make the following recommendations:

  • School leaders should regularly monitor the job satisfaction and engagement of their staff, and intervene 
  • Government and other secondary-sector stakeholders need to urgently look at ways of accommodating more part-time working in secondary schools 
  • School leaders, Government and Ofsted need to work together to review the impact their actions are having on teacher workload, to identify practical actions that can be taken to reduce this 

An evidence-based approach to monitoring job satisfaction and engagement 

Monitoring job satisfaction and engagement and subsequently intervening may seem a very sensible and obvious recommendation.  However, it may be a lot easier said than done.  To help understand why this might be the case, I'm going to look at the work of Briner (2014), who raises some very pertinent questions about employee engagement.  So here goes:

Defining engagement - unfortunately there is no one agreed definition of engagement. 

The consequence of this, as Briner states, is: From a practical (and academic) perspective the absence of agreement about what something means - and an absence of concern about that lack of agreement - is not funny or weird or cute or unfortunate or inconvenient. It's a confused, confusing and chaotic mess that is almost bound to lead to messy and undesired outcomes. It means that whenever we talk about or think about or try to measure 'engagement' we are almost certainly saying different things, understanding different things, measuring different things and doing different things but believing quite incorrectly they are all the same. 

Measuring engagement - if there is no agreement about the nature of employee engagement, the chance of developing valid, reliable and meaningful measures is slim. 

Again, as Briner states: As a consequence of confused definition and overlap with other existing ideas there is currently little evidence that engagement measures are particularly valid or reliable. There is one crucial form of validity - predictive validity - for which there seems to be almost no evidence at all. This form of validity is essential as it explores whether measures, in this case of engagement, actually predict anything important in the future. At the present time therefore we do not have enough good quality evidence to allow us to draw even tentative conclusions about whether or how engagement can be measured in a valid and reliable way.

Engagement is nothing new or different 

Briner considers two possibilities as to whether engagement is a new or different concept.

Engagement is not a new and different idea: If this is the case then the term and idea should be immediately discontinued because using a new term to describe existing concepts is confusing and unhelpful. 

Engagement is a new and different idea: If this is the case then there is a huge amount of work to be done first to define engagement in a way that shows precisely how it is new and different and second to gather good quality evidence to show that measures of engagement are measuring something new and different. 

There is a lack of good quality evidence about employee engagement

As Briner states, there is almost no good quality evidence with which to answer the most important questions about engagement:

Fundamental Question 1: 'Do increases in engagement cause increases in performance?' 
Fundamental Question 2: 'Do engagement interventions cause increased levels of engagement and subsequent increases in performance?' 

Over-claiming and misclaiming

Briner argues that these four problems raise serious questions about the usefulness of the idea of employee engagement.  Nevertheless, there is an additional challenge:

That the proponents, supporters and advocates of engagement both over-claim by exaggerating the quantity and quality of evidence and mis-claim by making statements about engagement that, on closer inspection, seem to be about something else.

What are the implications of this discussion for school leaders who wish to monitor job satisfaction and engagement?

  • It will be a waste of time and resources for the school to try and develop its own valid and reliable measures of employee engagement
  • Staff surveys are highly likely to tell you very little; indeed, as Argyris (1994) states, they may even get in the way of learning what needs to be done.
  • Multiple proxy measures of employee engagement are going to be required to help school leaders make a judgement about employee engagement.
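To make the third point concrete, here is a minimal sketch of what tracking multiple proxy measures might look like in practice.  The indicator names, figures and thresholds below are entirely hypothetical, invented for illustration; none of them is a validated measure of 'engagement' (which, per Briner's critique above, does not exist) – together they simply inform a leadership judgement.

```python
# Hypothetical proxy indicators a school might review each term.
# Values and cut-offs are illustrative only, not evidence-based.
proxies = {
    "staff_absence_days_per_fte": 4.2,
    "staff_turnover_rate": 0.18,
    "cpd_session_attendance_rate": 0.76,
    "exit_interviews_citing_workload": 3,
}

# Invented thresholds for flagging an indicator for discussion.
thresholds = {
    "staff_absence_days_per_fte": ("above", 5.0),
    "staff_turnover_rate": ("above", 0.15),
    "cpd_session_attendance_rate": ("below", 0.70),
    "exit_interviews_citing_workload": ("above", 2),
}

def flagged(proxies, thresholds):
    """Return the indicators that cross their discussion threshold."""
    flags = []
    for name, (direction, limit) in thresholds.items():
        value = proxies[name]
        if (direction == "above" and value > limit) or \
           (direction == "below" and value < limit):
            flags.append(name)
    return flags

print(flagged(proxies, thresholds))
# Here turnover (0.18 > 0.15) and workload exit interviews (3 > 2)
# would be flagged for discussion - a prompt for judgement, not a verdict.
```

The point of the sketch is the design choice: no single number claims to 'measure engagement'; each flag is only a prompt for a conversation.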

And finally

How school leaders tackle the challenge of employee engagement comes down to a choice as to the type of school leader they want to be.  Are they school leaders who carefully examine the evidence on a particular issue, are explicit about what they know or don't know, and then act accordingly?  Or do they want to be school leaders who are not overly bothered about the quality of the evidence, subsequently misclaim and misrepresent the evidence for their own purposes, and come up with superficial solutions to complex issues?  The choice is yours! (Amended from Briner)

References

Argyris, C. (1994). Good Communication That Blocks Learning. Harvard Business Review, 72(4), 77-85.
Bamford, S. and Worth, J. (2017). Teacher Retention and Turnover Research. Research Update 3: Is the Grass Greener Beyond Teaching? Slough: NFER.
Briner, R. (2014). What Is Employee Engagement and Does It Matter? An Evidence-Based Approach. The Future of Engagement Thought Piece Collection. 51.

The School Research Lead and Teacher Journal Clubs - Summarising the evidence


In this post I look at how a school research lead might wish to summarise the evidence about teacher journal clubs.  In doing so, I will try to ensure we have a format that allows the inclusion of four sources of evidence and also takes into account the context of the individual school.  However, given workload pressures, it is recognised that whatever report or document is produced should be capable of being produced relatively quickly and without being burdensome.  As such, whilst the example uses a Word-based tabular format, the same information could also be presented on 10-12 PowerPoint slides or through the use of some kind of mind map.

The template

The following example has been produced for a fictional school, which is considering introducing a teacher journal club into its professional learning programme.  As will be seen from the example, the school is relatively new to research and is just beginning to put its 'toe in the water'.

Title
Teacher Journal Clubs

Background question
How can teacher journal clubs contribute to teacher professional learning and the use of evidence-based practice?

Summary
Teacher journal clubs appear to have the potential to contribute to the increased use of evidence-informed practice.  Initial discussions with stakeholders suggest there is support for piloting a journal club within the school.  Although no one within the school – be it teaching assistants, teachers or senior leadership – has experience of running journal clubs, adequate resources are available on the internet to support their introduction. 
Description of the best available evidence

Research
Although there appear to be no systematic reviews in educational settings about the use of teacher journal clubs, a systematic review in a health setting (Deenadayalan et al., 2008) provides guidance on how to run a successful journal club.  This guidance suggests: regular and anticipated meetings; mandatory attendance; a clear long- and short-term purpose; appropriate meeting timing and incentives; a trained journal club leader to choose papers and lead discussion; circulating papers prior to the meeting; using the internet for wider dissemination and data storage; using established critical appraisal processes; and summarising journal club findings (from abstract).
Recent research in education (Sims et al., 2017), involving two 11-18 mixed secondary schools (Ofsted – outstanding), indicates that journal clubs are a viable, scalable model of teacher-led professional development, capable of creating sustained increases in evidence-informed practice.
School Data
The school is a mixed 11-18 school and is currently rated by Ofsted as good.  The school has an extensive programme of professional learning – though little or none is focused on research use.  The school has recently recruited a number of new staff who are at the beginning of their careers.  However, there are also a number of staff who have been at the school for over twenty years.  Although in recent years the professional learning budget has been squeezed, there is still sufficient time in the programme for half-termly journal clubs.


Stakeholders’ views (pupils, staff, parents, community)
A number of teachers within the school are active on Twitter and are aware that the school currently provides few opportunities for teachers to engage with research evidence.  Successful schools in the locality have introduced journal clubs and it is perceived that this has contributed to those schools’ reputation for innovation.  However, there are other teachers who do not see the value of educational research and are aware of schools which have introduced journal clubs – and then quietly dropped them after a year.  Nevertheless, there is a general consensus amongst the teaching staff that it may be worth undertaking a small pilot with volunteers.


Practitioner expertise – key leaders
None of the major decision-makers within the school – the HT, 2 DHTs and the newly appointed School Research Lead (SRL) – have experience of running or participating in a journal club.  However, the SRL has attended a number of researchED events and has seen presentations on how to successfully run a journal club.  The SRL is also aware of resources available on the Internet and produced by teachers, which give clear advice on how to ensure a journal club is successful.  In addition, the SRL is currently studying for a postgraduate degree in education.


Questions for consideration


  • Can we access suitable research journals?
  • How do we recruit volunteers for the pilot?
  • Do teachers have the capacity and capability to understand and apply research findings?
  • Do we have someone with sufficient knowledge and expertise to lead the journal club?
  • Can desired changes in teaching practice be identified?
  • Is sufficient time available for the implementation of the journal club?
  • How will the impact of the journal club be measured?

References and resources



  • Deenadayalan et al. (2008)
  • Sims et al. (2017)
  • http://www.edujournalclub.com


Appraiser/author
  • School research lead

Dissemination
  • To be shared by email and discussed at the next staff meeting
  • Prior discussion of the paper at departmental meetings


Update and review

  • When is it likely that new relevant evidence will be available?  During 2018, as reports on the efficacy of Research Learning Communities and School Research Leads are published by the EEF.
  • Review at the end of the academic year.

The School Research Lead and making the most of journal clubs - recommendations from a systematic review

In this week’s post, I will be taking a further look at the research on journal clubs and in particular a systematic review by Deenadayalan, Grimmer-Somers, Prior and Kumar (2008).

The systematic review

Deenadayalan et al. (2008) identified 101 articles, of which 21 comprised the body of evidence, with 12 describing journal club effectiveness within healthcare settings.  Over 80% of the papers noted that journal clubs were effective in improving participants’ knowledge and critical appraisal skills.  Nevertheless, none of the papers reported on how this then manifested itself in changes in practice.

Findings

Although the articles reviewed often differed in terms of participants, processes and evaluation, Deenadayalan et al. (2008) argue that there were a range of consistent findings vis-à-vis the effectiveness of journal clubs in developing participants’ knowledge and critical appraisal skills.  As such, Deenadayalan et al. have been able to identify a number of recommendations for the conduct of a journal club which, if adopted, increase the journal club’s chances of success.

Journal club attendance
  • Establish a journal club group of members of the same discipline, or similar interests within a clinical specialty. 
Journal club purpose
  • Have an established and agreed overarching goal for the long-term journal club intervention. The overarching journal club purpose should be reviewed regularly, and agreed by participants
  • Establish the purpose of each journal club meeting, and link this to the paper being read, or the skill acquisition being addressed.
Structure of an effective journal club
  • Regular attendance should be expected and recorded. Attendance may be mandatory, particularly if the journal club has a curriculum-based format
  • Conduct journal clubs at regular predictable intervals (suggest monthly)
  • Conduct journal club at appropriate times of the day for all participants
  • Provide incentives to attend such as food (which is shown to increase attendance as well as the conviviality of the occasion).
Leading journal club
  • Journal clubs appear to be more effective if they have a leader. The journal club leader should be responsible for identifying relevant articles for discussion; however, the final choice needs to be decided by the journal club members
  • Train the leader/facilitator of the journal club in relevant research design and/or statistical knowledge so as to appropriately direct group discussions and assist the group to work towards its goals
  • The leader can change from meeting to meeting; however, he/she needs to have the skills to present the paper under discussion and lead the group adequately. It is a fine balance between choosing a leader of high academic standing, whose expertise may stifle discussion, or choosing a leader from peers, who may not have the requisite understanding of the paper under discussion
  • Provide access to a statistician to assist the leader in preparing for journal club, and to answer questions that may arise from the journal club discussion.
Choosing articles for discussion
  • Choose relevant case-based or clinical articles for discussion. These papers should be of interest to all participants. Articles should be chosen in line with the overarching purpose of the journal club
  • Identify one journal club member (either the designated leader or a member) who has the responsibility for identifying the literature to be discussed for each meeting. This person should also lead the discussion on the article at the journal club. 
Circulating articles for discussion
  • Provide all participants for each journal club (in addition to the leader) with pre-reading at a suitable time period prior to the journal club (may be up to a week prior). Participants should agree to the time frame for pre-reading. In some curriculum-based situations, assessment of whether pre-reading has occurred may be appropriate
  • Use the internet as a means of distributing articles prior to the meeting, maintaining journal club resources and optimizing use of time and resources. 
Efficiently running the journal club
  • Use established critical appraisal approaches and structured worksheets during the journal club session, which leads to healthy and productive discussion
  • Formally conclude each journal club by putting the article in context of clinical practice.

Journal club effectiveness
  • Depending on the journal club purpose, it may be appropriate to evaluate knowledge uptake formally or informally
  • Evaluation should specifically relate to the article(s) for discussion, critical appraisal, understanding of biostatistics reported in the paper and translating evidence into practice. (Deenadayalan et al., 2008, pp. 905-906)

How relevant are these findings to schools?

It seems to me that these findings are broadly applicable to school-based journal clubs, and could be easily adapted to meet the needs of individual schools.  Nevertheless, there are a number of points worth further consideration.

First, depending on the nature of the journal article being reviewed, it makes a lot of sense to get an expert in statistics involved.  Anyone who has read Gorard, See and Siddiqui's (2017) recent book will be aware of some of the challenges in the correct use and interpretation of p values, statistical significance and confidence intervals.  As for effect sizes, Simpson (2017) provides an interesting survey of the problems associated with effect sizes and their subsequent use in ‘meta-analyses’.

Second, whoever leads the journal club will need to be viewed as credible by colleagues, and not just in being able to find, select, understand and apply research.  The journal club leader will also need ‘high-level’ interpersonal skills, so that they can skilfully navigate discussions where colleagues disagree, or where colleagues have had deeply held values and beliefs challenged by the literature.  Indeed, it could be argued that unless the research is ‘challenging’, if not controversial, it is unlikely to provoke deep reflection on practice.

Finally, given the workload pressures on teachers, and the scant, if not non-existent, evidence of journal clubs influencing day-to-day decision-making, very real consideration needs to be given to the reasons why a journal club is being established.  As a mechanism to get ‘research’ into the classroom, it is unlikely to have any impact whatsoever on teachers’ teaching and pupils’ learning.  If, on the other hand, it is seen as an integral part of a wider process of building social capital (Hargreaves & Fullan, 2012) and developing a collaborative culture amongst teachers and other colleagues, it will be of some value.


Next week I’ll be looking at some school-based research by Sims, Moss and Marshall (2017) into whether journal clubs work and whether they can increase evidence-based practice.

References

Deenadayalan, Y., Grimmer-Somers, K., Prior, M., & Kumar, S. (2008). How to run an effective journal club: a systematic review. Journal of Evaluation in Clinical Practice, 14(5), 898-911. doi:10.1111/j.1365-2753.2008.01050.x
Gorard, S., See, B., & Siddiqui, N. (2017). The trials of evidence-based education. London: Routledge.
Hargreaves, A., & Fullan, M. (2012). Professional capital: Transforming teaching in every school. New York: Teachers College Press.
Simpson, A. (2017). The misdirection of public policy: Comparing and combining standardised effect sizes. Journal of Education Policy.
Sims, S., Moss, G., & Marshall, E. (2017). Teacher journal clubs: How do they work and can they increase evidence-based practice? Impact, 1(1).


The School Research Lead and Journal Clubs - Do we need a logic model?

There is currently much interest in the use of journal clubs within schools.  For example, both the Blackpool (St Mary’s Catholic Academy) and Durrington Research Schools are currently promoting the use of journal clubs in their schools.  In addition, the Chartered College of Teaching operates a ‘virtual journal club’ through its monthly book club.  However, it should be noted that this is nothing new, as Beth Giddins has been promoting the use of journal clubs since 2015 (https://www.edujournalclub.com/journal-clubs/).  Nevertheless, there is very little research available on the effective use of journal clubs within schools, with Sims, Moss and Marshall (2017) being a notable exception.  With that in mind, this post will be the first in a series of blogposts which looks at the use of journal clubs.  In doing so, I will be drawing upon a number of systematic reviews in medicine and health settings which look at how effective journal clubs are in supporting both continuous professional development and evidence-based decision-making (Harris et al., 2011; Deenadayalan, Grimmer-Somers, Prior, & Kumar, 2008).  However, this first post in the series will briefly examine a possible logic model/framework for use with teacher journal clubs.

Logic models

Put simply, a logic model graphically illustrates the components of an intervention in terms of inputs, outputs and outcomes.  Figure 1 illustrates how the various elements of a very simple logic model come together.  The inputs represent the resources that are put into the programme or intervention: money, time and skills.  The outputs represent what is done: the activities associated with the programme and who the programme reaches.  Finally, the outcomes are the changes and benefits (and possibly costs) which accrue in the short, medium and long term – for example, changes in teacher knowledge and skills, application in the classroom and improvements in pupil learning.

For a detailed explanation of logic models, have a look at Better Evaluation.

Teacher Journal Clubs - A logic model

Adapting the work of Harris et al. (2011), it is possible to come up with a detailed logic model for how a teacher journal club might work. 
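For illustration only, the inputs → outputs → outcomes structure described above can be sketched as a simple data structure.  The entries below are hypothetical examples for a teacher journal club, drawn from the discussion in earlier posts; they are not Harris et al.'s actual model.

```python
from dataclasses import dataclass

@dataclass
class LogicModel:
    """Bare-bones representation of a simple logic model."""
    inputs: list    # resources put in: money, time, skills
    outputs: list   # what is done and who is reached
    outcomes: dict  # changes and benefits by time horizon

# Hypothetical entries for a teacher journal club.
journal_club = LogicModel(
    inputs=["staff time", "access to journals", "trained facilitator"],
    outputs=["half-termly meetings", "papers circulated in advance",
             "structured critical appraisal"],
    outcomes={
        "short": "changes in teacher knowledge and skills",
        "medium": "application in the classroom",
        "long": "improvements in pupil learning",
    },
)

print(journal_club.outcomes["long"])
```

Even this toy version shows the evaluative value of a logic model: every claimed outcome must sit alongside the activities and resources supposed to produce it, so gaps become visible.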

The benefits of using a logic model

The benefits of developing a logic model are described in more detail at Community Toolbox, but for the purposes of our discussion they include the following.  
  • Logic models integrate planning, implementation, and evaluation. In other words, developing a logic model will give you a greater understanding of what needs to be done to make the innovation work, and at the same time give you a framework for evaluation.
  • Logic models help you make good matches between activities and effects.  By developing a logic model for a journal club, you can spot intended effects with no supporting activities and resources, and then make suitable adjustments.
  • Logic models can help in the collaborative planning process.  The development of a logic model is an iterative process, and working together can help build a shared understanding of what needs to be done to make an intervention work.  It is also helpful when you are looking to disseminate an intervention within or between schools.
  • Logic models can help keep a focus on accountability and outcomes.  In schools where resources are increasingly scarce, a logic model can keep a focus on the outcomes of an intervention and whether the planned-for outcomes are actually happening.  Hopefully, this will allow further resources to be allocated when the journal club proves a success. 

However, as noted at Community Toolbox - logic models can have a number of weaknesses.
  • A logic model needs to be 'logical'.  If it is not, this will no doubt cause problems for colleagues seeking to implement the innovation
  • A logic model cannot capture all the variables and elements at work when trying to make an intervention work - so the logic model may move from being 'simple' to being 'simplistic'
  • A logic model can be both difficult and time consuming to create.  So there needs to be a clear trade-off between the time and effort put into creating the logic model and the subsequent benefits 
And finally

Next week I'll look at some of the evidence about what needs to be done to make sure your journal club is a success.