So what, now what?
A simple test for actionable metrics
Actionability speaks louder than words
Not so long ago, I attended a talk where the presenter was extolling some new workflows he’d introduced at his organisation. He took the audience through the reports that he’d created, using the dreaded phrase “if you can’t measure it, you can’t manage it.”
As he presented his data, it was clear that he prized surfacing metrics, but I couldn’t tell how he was using these to identify issues or track improvements. For example, there was data on the number of times the process was used, but little on the completion rate, or where frictions could be seen in the user experience.
Now, to be fair, the content of external presentations like this is often constrained by company policy. Some companies are more open than others about sharing insights into their internal workings. So rather than critiquing this individual presenter, I want to focus on why actionability in metrics matters.
All is not vanity
In Lean Analytics, Alistair Croll and Benjamin Yoskovitz write that “if you have a piece of data on which you cannot act, it is a vanity metric.”
Vanity metrics make people feel better about themselves, but don’t generate any insight. For example, a marketing team can celebrate a campaign by saying “We sent 50,000 e-mails last month!” That’s great, but it doesn’t tell you anything about how many people engaged with the content, took a desired action, or unsubscribed from future communications, all of which are data you could actually learn from.
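The contrast is easy to see in numbers. Here is a minimal sketch, using hypothetical campaign figures (the counts and names are illustrative, not from any real campaign), of how the raw send count hides the rates you could actually learn from:

```python
# Hypothetical campaign numbers -- illustrative only.
emails_sent = 50_000
emails_opened = 11_000
clicks = 1_800
unsubscribes = 450

open_rate = emails_opened / emails_sent        # did people engage with the content?
click_rate = clicks / emails_sent              # did they take the desired action?
unsubscribe_rate = unsubscribes / emails_sent  # did we drive them away?

print(f"Open rate: {open_rate:.1%}")          # 22.0%
print(f"Click rate: {click_rate:.1%}")        # 3.6%
print(f"Unsubscribe rate: {unsubscribe_rate:.1%}")  # 0.9%
```

“50,000 e-mails sent” stays the same whether the campaign delighted people or drove them to unsubscribe; the three rates above are what would change your behaviour.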
I don’t necessarily agree that every unactionable metric is a vanity metric. It can also be a symptom of the over-emphasis on measurement that prevails in many areas today; the “if you can’t measure it, you can’t manage it” trap that I wrote about in my last article.
I think it’s important to call out bad data, and I’m not sure that disparaging all bad data as vanity metrics helps. It’s better to focus on actionability than labels. Data is for decision-making. If you have a piece of data and it’s not clear how it would change your behaviour, set vanity aside (if it exists), and ignore it.
How can we tell?
“So what, now what?” is a helpful mantra for identifying whether you’re dealing with good, actionable data, or less useful observations. “So what?” asks why you should care about the metric, and “Now what?” asks what action you’re going to take based on that insight. Let’s dive into how the mantra helps identify usable and valuable metrics.
So what?
The first question to ask yourself when presented with any data is “So what?” What does this mean? What can I do with this information? Does it tell me anything useful?
The first time an Engineering Manager told me the number of story points his team had completed, this was my reaction. So what? I have no insight into whether 27 story points is good or bad performance. Now, I’m not convinced that story points are useful in any way, though I recognise some teams seem to find them valuable in sprint planning. Reporting on completed story points seems unactionable to me.
I worked with that manager to introduce Sprint Churn and Sprint Delivery Ratio as metrics for his team. Now, they could review how much of the proposed work was delivered during the sprint, and whether work was added or removed. The “so what?” here became obvious very quickly - the team could see how poor refinement led to unplanned work being added mid-sprint. The team felt like they were constantly interrupted, with little ability to think strategically.
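As a sketch of how these two metrics might be computed (the formulas below are my assumed definitions: delivery ratio as delivered-over-planned, churn as scope change over plan; your team may define them differently):

```python
def sprint_delivery_ratio(planned: int, delivered: int) -> float:
    """Share of the originally planned items actually delivered in the sprint."""
    return delivered / planned

def sprint_churn(planned: int, added: int, removed: int) -> float:
    """Scope change during the sprint, relative to the original plan."""
    return (added + removed) / planned

# Hypothetical sprint: 20 items planned, 6 added mid-sprint, 4 dropped, 14 delivered.
print(f"Delivery ratio: {sprint_delivery_ratio(20, 14):.0%}")  # 70%
print(f"Churn: {sprint_churn(20, 6, 4):.0%}")                  # 50%
```

A delivery ratio of 70% with churn of 50% tells a story that “27 story points completed” never could: half the sprint’s scope moved under the team’s feet.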
What good looks like
Good answers to “So what?” reveal clear insights about performance or problems.
Now what?
The second part of the mantra: “Now what?” follows quickly. In the sprint churn/sprint delivery ratio example, the team started to improve their refinement processes and hold themselves accountable for ensuring that work was ready for development before the sprint started. They became more resistant to work being added mid-sprint. As a result, the team became more resilient, and we saw substantial improvement in their stability and throughput. Over time, they were able to start thinking in outcomes, and to drive toward more strategic goals.
This pattern repeats across domains. A sales team tracking calls made per day is unlikely to generate any insights into the effectiveness of their processes. By switching to something actionable, such as conversion rate, they can start to understand the quality issues that prevent them from closing more deals.
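A quick sketch of why the ratio beats the raw count (the reps and figures here are invented for illustration):

```python
# Hypothetical week of sales activity for two reps.
reps = {
    "Avery": {"calls": 120, "deals": 6},
    "Blake": {"calls": 45, "deals": 9},
}

for name, d in reps.items():
    conversion = d["deals"] / d["calls"]
    print(f"{name}: {d['calls']} calls, conversion {conversion:.1%}")
```

Ranked by calls made, Avery looks like the star; ranked by conversion rate (5.0% vs 20.0%), Blake clearly has something worth learning from. Only the second comparison suggests an action.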
“Now what?” is how you identify the action you can take based on the metric you’re using. If you can’t tell how the insight leads to a change in behaviour that might move the metric, it isn’t actionable.
What good looks like
Good answers to “Now what?” point to specific, feasible changes you can make that should move the metric. Time-box the change, so it’s clear whether it’s having the desired effect.
So, now…
There is lots of information online about what makes a good metric.
It should be SMART (Specific, Measurable, Actionable, Relevant, Time-boxed).
It should be a ratio rather than a raw number; ratios enable benchmarking and contextualise the relationships between different data points.
But I think you can distill all of this guidance with two simple questions. So what? Now what?
The pattern is always the same: good metrics surface specific insights, and point to concrete actions you can take immediately.
Next time someone presents you with data, ask these two questions. If you can't answer both clearly, you're looking at noise, not signal.