Governing your digital workplace with some Big Questions
“It is wrong to suppose that if you can’t measure it, you can’t manage it – a costly myth.”
W. Edwards Deming
Deming is often misquoted as having said the opposite, as if measurement were paramount. But his point was that something doesn’t become more important just because you can measure it.
Most analytics packages will generate a host of beautiful-looking charts and multiple drill-downs, but how many of those colourful donuts actually change how your digital workplace is managed? Often people seem overwhelmed by the data, or produce monthly PowerPoint decks with a commentary on why the lines go up or down, but with no clear action plan behind them. Rather than leading to good management, all this reporting can actually inhibit it.
Part of the problem is that each analytics report in isolation is rarely a complete answer. Even the questions it can answer, such as “What was our most popular news story last month?” or “What were the most common search terms this week?”, only give very tactical insights.
There’s a different starting point that yields much better results: begin by asking “What are the big questions that we need to answer?” If you are clear about why you need an answer to a question, then the action to take when you get it will also be clear.
Defining the Big Questions
Instead of starting with the reports you can generate, start with your strategic goals, and then frame them as ‘Big Questions’. Normally the business case for a digital workplace includes things like enhancing employee engagement, improving collaboration or increasing efficiency. So the top-level questions may be:
- Are our employees becoming more engaged with our strategy?
- Have we improved knowledge-sharing between business units?
- Have we made our HR services more efficient for employees?
You’ll note that these aren’t just arm-waving aspirations such as ‘improve collaboration’ – I’d hope that your strategic goals are more concrete than that.
Also, the Big Questions should be meaningful and actionable – if the answer wouldn’t change what you do when you get it, don’t bother asking it (as my colleague Wedge Black once wisely put it, “only measure what you mean to act upon”).
Many of the standard reports in Microsoft 365 or Google Analytics fall into this trap: that pretty global map showing where users come from probably isn’t going to change what you do, and you can’t really influence it unless you start a recruitment drive in a low-performing country.
Defining the slightly less big questions
Each Big Question in turn will have sub-questions that fill in the details – it is rare that a single data point will tell the whole story. For example:
Have we made our HR services more efficient for employees?
- How long does it take to complete a requested service online?
- What proportion of requests need a help-desk call to resolve?
- What proportion of requests are automated?
Often these second-tier questions will be a combination of tracking inputs and outcomes. In the examples above, we’re asking about things going into the system (how many are automated) as well as the desired effect (time taken to complete).
Where do we get the answer?
To build up a complete answer, we often need to combine data from an analytics package with other sources, such as pulse surveys and more qualitative input.
For example, if we ask about employee engagement with our strategy, there may be simple analytics such as how many people looked at news stories about corporate strategy. But you then also need to do some evaluation: as a percentage of the target audience, how does that rate? And of course just opening a story doesn’t mean anything was absorbed, so periodically you may also need to run a pulse survey to confirm retention and sentiment.
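To make the “percentage of the target audience” evaluation concrete, here is a minimal sketch; the story names, viewer counts and audience size are all illustrative assumptions, and the export from your analytics package is assumed to have already happened – this is not a real Microsoft 365 or Google Analytics API call.

```python
# Hypothetical example: unique viewer counts per strategy news story,
# already exported from your analytics package into a plain dict.
strategy_story_viewers = {
    "CEO strategy update": 1350,
    "2024 priorities explained": 900,
}
target_audience = 5000  # e.g. everyone the strategy communications are aimed at

# Ideally you would de-duplicate people across stories; lacking that, the
# best-performing story gives a conservative floor on reach.
reach = max(strategy_story_viewers.values()) / target_audience
print(f"Estimated reach of strategy news: {reach:.0%} of target audience")
```

Even a rough figure like this turns “lots of views” into something you can judge against the audience you were actually trying to reach.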
Agree on KPIs and actions
OK, so by now you have a story to tell – you can report on the performance of the ‘Big Questions’ and drill down on the ones where the answer is intriguing. But ultimately, you also need to agree in advance what good looks like and what action you will take.
For this, we need to set some Key Performance Indicators (KPIs) and corresponding actions. Some examples of KPIs might be:
- Helpdesk calls reduced by 20%
- Employee satisfaction with search results increased by 10%
- 75% of employees successfully completed digital literacy training
I’m often asked for benchmark figures such as “What’s a good adoption level for Yammer?” or “How often would you expect frontline workers to like or comment on news stories?”. However, every organisation is different and, much like trying to benchmark your marriage, it can be a rocky path to follow. A better route is to take your own baseline, and then decide on what level of improvement is needed to be beneficial.
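As a small illustration of the baseline-and-target idea (the figures below are invented for the example, not real data), the check itself is simple arithmetic against your own starting point rather than someone else’s benchmark:

```python
# Hypothetical figures only - compare against your own baseline and the
# KPI you agreed in advance, not against an external benchmark.
baseline_helpdesk_calls = 4200   # calls per quarter before the HR service changes
current_helpdesk_calls = 3500    # calls per quarter now
target_reduction = 0.20          # the agreed KPI: helpdesk calls reduced by 20%

actual_reduction = 1 - current_helpdesk_calls / baseline_helpdesk_calls
print(f"Helpdesk calls down {actual_reduction:.0%} against a {target_reduction:.0%} target")
if actual_reduction < target_reduction:
    print("KPI not yet met - time for the agreed follow-up action")
```

The point is less the calculation than the agreement: the target and the follow-up action are decided before the numbers come in.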
Actions in response might be about generating new areas of content where there are regular searches for topics not covered. Or you may detect that one business unit is collaborating less with people in other parts of the organisation and engage in some specific discovery work to understand the barriers and see if some coaching is needed.
Don’t dash to a dashboard
For all the grand vision that a ‘Big Questions’ approach implies, there’s a trade-off between the cost of running a dashboard like this and the value it generates. I’ve seen some companies that run so many surveys and produce such detailed analysis that they never seem to have resources left to change anything. It’s OK to start with the easy-to-measure elements and then gradually refine.
Don’t be afraid, either, to ditch measures that never influence your approach. Whoever said “there’s no such thing as a stupid question” clearly wasn’t paying for the work involved in answering it!
This article was originally published over at CMSWire.