Can we survey our way to better employee engagement?

Unhelpful reports

From a Gallup report issued in 2022, there was this conclusion: “Employee engagement levels in the UK are one of the worst in Europe with fewer than one in 10 UK employees feeling enthusiastic about their job.”

In the old days there were attitude surveys, and they almost always resulted in some sort of report.  Today they are called employee engagement surveys, and sometimes employee experience surveys, but little has changed except the name.  Otherwise, the issues remain – including one overriding thought: if employee engagement surveys are such a good idea, how come there is still a problem?

Time for an explanation, and some thoughts on how to do better.

Just get some data and make some changes

The logic is fine: survey our people, get some data, use the data to change some things, and employee engagement improves.  Except, in practice, according to Gallup at least, it doesn’t.  “Employee engagement levels in the UK ranked 33 out of 38 European countries, and suffered a two-point drop in score compared with last year’s survey.”  If anything, it looks like it might be getting worse.  The good news, and goodness we need some, is that more and more organisations are taking this issue very seriously; the fact that we haven’t “shifted the needle” has in no way deterred us from trying.  Also encouraging is the recognition of the need for good data to guide decision-making.  However, employee engagement is a complex output, formed in an emergent way from many complex inputs, and will never be resolved through simple, one-dimensional actions.

But what data and what changes to make?

What are reports for?  If this is not clear, known and accepted by all, from the outset, and nailed down through some sort of charter, then it is highly likely that the purpose will either get distorted or forgotten completely.  In those circumstances, it sounds like another heavyweight report consuming loads of storage space and producing little in return.  Defining a clear purpose might seem like an indulgent use of time, but it is essential to staying on course.

Once purpose is clear, it will probably be possible to answer some more key questions: Who is in the target audience?  What are their goals?  How is the report supposed to help them achieve their goals?  Clarity here will help with making important design decisions, not only about the survey itself and the report generated but even more about the process by which the data is reviewed, analysed and used to design and implement improvement action.  Without that action there is little or no chance of anything actually getting better.

If the response rate to your current survey is in decline, take this as a strong indication that a fundamentally different approach is required.

And who should make the changes?

Another key question to help with those design decisions: is the report something that is ‘done to people’ or is it something that helps the employees themselves to identify and address the issues that need attention?  If the target audience is only managers (or even worse only select managers) don’t expect a lot of improvement action.  The ‘iceberg of ignorance’[1] will strike here.  As a generalisation, managers do not know what the key, operational issues are in their organisations – and they will know even less about the causes of those issues.  More of which later.

Then there is the risk of the report’s existence becoming an end in itself – another box ticked every year or two.  And the longer and more glossy the report is, the less likely that action will result.  Managers are busy people – however pretty the report may be, if it is long, there is little chance that it will be read and studied – and even less that it will stimulate improvement action.  And if the report does not result in some sort of improvement, then what is the point of the report?

[1]     The “iceberg of ignorance” is a concept popularized by a 1989 study by Sidney Yoshida. It posited that front line workers were aware of 100% of the floor problems faced by an organisation, supervisors were aware of only 74%, middle managers were aware of only 9%, and senior executives were aware of only 4% of the problems.

The presenting problem is rarely the one that needs to be fixed…

Symptoms of problems may be useful, in that they may help identify a cause or causes, but in themselves they have little value.  And one thing is for sure: tackling symptoms alone is never going to solve the problem.  Try giving aspirin to someone suffering with a headache.  The symptom may be controlled, but the person may die of a much more serious ailment whose symptoms the aspirin was covering up.

Employee engagement surveys will seldom produce any really valuable insights into causes, which are generally hidden from view.  In the absence of any diagnostic capability, they produce a description of the current state as it is experienced by the employees – but these are just symptoms.  Part of the issue here is the nature of the questions used in the surveys.  Generally, they are questions calling for some sort of judgement.  And direct, judgemental questions may identify symptoms but will never identify causes.  So valid improvement action becomes an impossibility.  They also invite conscious and unconscious bias…

The opportunities to improve reside in your organisation, not anyone else’s

Finally, how does anyone know what constitutes a good result?  Comparisons with other organisations, or aggregated statistics showing norms, are not going to show whether the organisation is a good performer against a researched model of what good looks like – except in merely relative terms.  “We are better than the average.”  Is the average itself good?  Or rubbish?  “OK, so the average is getting better and so are we.”  Does that mean approaching close to perfection?  Or climbing out of the pit of ‘deplorable’?  (Whatever that might mean.)

What research data are being used to identify an external standard for ‘good’?  If there is one, to what degree does the research identify the individual management practices that can make the difference between success and failure?  Back to symptoms and causes!

What if we enabled our people to lead the change?

What if there were an alternative approach?  A very different means of generating data, one capable of providing insights into both symptoms and causes.  What if that diagnostic process also identified the top-priority, key causes, with no need for long reports?  What if those insights were specific to each and every team in the organisation, thus bypassing the ‘iceberg’ problem?  What if those teams were supported by external facilitators to define their own safe-to-try improvements, to improve the climate in their own team?

Moreover, high levels of diversity and psychological safety in decision-making groups produce high levels of inclusion.  Such groups consistently generate higher quality decisions, and improvement where it is needed most – that is, where the problems are.  And guess what happens to levels of engagement?

Enabling each and every team in the organisation to work on those areas of practice that will have greatest benefit to their Employee Engagement is the essence of The Vitality Index. You can read more about it here.

If you are curious about shifting the approach to improving Employee Engagement or if you can’t bear the thought of commissioning another report or if you’d just like a thinking partner to help you nail that purpose statement, then we’d love to help.

This post was written by Denis Bourne, co-founder of Organisational Vitality.  Denis ran the research that the Vitality Index diagnostic is based upon.  He has over 40 years’ experience helping organisations become more autonomous, innovative, and change-enabled – and, as a happy consequence, enjoy a fantastic employee experience.

Let's talk

The conversation has the potential to change the future of your organisation.