
I was working with a young company some years ago. They had some fairly typical growing pains: a lack of product direction, uncertainty over which opportunities to prioritise, and so on. Over a few weeks, we started to sort out the key opportunities, got the teams moving in the right direction, and experimented regularly to learn more. We were measuring progress, maintaining active burndown charts, and gathering regular feedback from customers. Things were moving in the right direction as far as I could tell.
But one of the execs was not happy.
I was called into a meeting to be told that he wasn’t happy with the ‘productivity’ of the team. I asked him what his concerns were.
‘I’ve been keeping track of when the engineers arrive at the office and none of them are ever here by 9:00 a.m.’
Clock-watching is not conducive to creating high-trust environments, I explained. I showed him the ways in which we were measuring progress, and how we could see that we were starting to solve the right problems for our customers. I’m not sure about how happy he was, but we never spoke about the time people arrived again.
‘Measure what matters’ is advice you hear again and again in businesses of any size. Unfortunately, when it comes to software development, that often transmutes into ‘measure what’s easy’. It’s easy to look at the time people arrive and leave. It’s easy to measure diffs or lines of code. Terms like ‘velocity’ have been abused to the point of meaninglessness, or have come to feel like sticks to beat engineers with.
If you’re not thinking about ways to remove the friction that stops engineers from succeeding sooner, you’re probably going to end up measuring things that really don’t matter.
What matters to your business?
Outcomes matter. Customer experience matters.
Outputs are easier to measure.
Guess which one organisations usually focus on?
This isn’t to say that you shouldn’t measure outputs. Output measures are useful, but they need to measure progress toward outcomes. For example, if you’re trying to increase profit margins in an e-commerce store, you might measure the number of experiments run (output) against profit margin per transaction (outcome).
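As a rough sketch of the pairing above, here is what tracking an output metric alongside its outcome might look like. All of the figures are invented for illustration:

```python
# Hypothetical e-commerce example: track an output metric alongside
# the outcome it is meant to drive. All numbers are illustrative.

def profit_margin_per_transaction(revenue, cost, transactions):
    """Outcome metric: average profit per transaction."""
    return (revenue - cost) / transactions

# Output metric: how many pricing/checkout experiments we ran this month.
experiments_run = 6

# Outcome metric: did margins actually move?
margin = profit_margin_per_transaction(revenue=120_000, cost=96_000, transactions=3_000)

print(f"Experiments run (output): {experiments_run}")
print(f"Profit margin per transaction (outcome): £{margin:.2f}")
```

The point of reviewing both together is that the output only matters insofar as the outcome moves: if experiments keep running but margins stay flat, the output metric is telling you to change approach, not to run more experiments.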
How do you decide what matters?
There are two key questions we should always ask, no matter what we are working on:
What outcome are we trying to achieve?
How can we tell whether we’ve succeeded?
If we can’t use actionable results or tangible customer benefits to answer these questions, then we are thinking in outputs, not outcomes. We can overcome this bias by framing hypotheses and expected results when considering potential areas for investment.
Don’t stop making sense
In their book Sense & Respond, Jeff Gothelf and Josh Seiden tell the story of how they worked with Taproot to deliver a new system for matching volunteers to organisations. Taproot wanted a list of features, but they eventually agreed to a contract where delivery was framed around outcomes. As a result, the team was able to build a prototype quickly and learn what features were actually required.
Some of the assumptions they had proved to be incorrect when tested. They saved significant time and effort not building unnecessary features. This enabled them to deliver a scalable solution that “far exceeded the performance goals written into the contract.”
The authors note that it’s unusual for a company to reach this sort of arrangement with an external service provider. It’s equally unusual within organisations that build their own software. Stakeholders often want tangible results in the form of features delivered so that they can demonstrate progress to the business.
Unfortunately, this doesn’t always translate into successfully solving customer problems.
OKRs to the rescue?
Objectives and Key Results (OKRs) enable organisations to frame their desired outcomes (objectives) and to state the measures or metrics they will use to assess the success of any initiative (the key results). Used well, OKRs can enhance the work of development teams, enabling them to take a data-informed approach. A key step in being successful in using OKRs or any other way of assessing organisational performance, is understanding what makes a good metric.
So, what makes a good metric?
In their book Lean Analytics, Alistair Croll and Benjamin Yoskovitz describe the key features of a good metric.
Comparability: a metric must demonstrate progress, e.g. ‘a 5% increase in conversions since last month’ is more useful than ‘2% conversion rate’
Understandability: a metric must be easily understood by everyone in the organisation
Ratios: the most useful metrics are ratios or rates rather than absolute numbers
Behavioural: a good metric tracks customer behaviour, but should also change the organisation’s behaviour
Organisational metrics must be grounded in customer outcomes and should be stated as ratios. Watch out for vanity metrics (e.g. number of clicks), which may make you feel good about yourself but offer no insight into customer behaviour, and no action you can take to affect it.
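To make the ratio point concrete, here is a small sketch with invented numbers showing why a rate is more comparable than a raw count:

```python
# Illustrative sketch: why a ratio beats a raw count. Numbers are invented.

def conversion_rate(conversions, visitors):
    """Ratio metric: conversions per visitor."""
    return conversions / visitors

last_month = conversion_rate(conversions=180, visitors=9_000)   # 2.0%
this_month = conversion_rate(conversions=230, visitors=9_200)   # 2.5%

# Comparability: the change over time is what demonstrates progress.
change = (this_month - last_month) / last_month
print(f"Conversion rate: {last_month:.1%} -> {this_month:.1%} ({change:+.0%})")

# A vanity metric like total clicks can climb simply because traffic grew,
# while the ratio (and the underlying customer behaviour) stays flat.
```

A raw count of 230 conversions looks like progress on its own; the ratio tells you whether customers are actually behaving differently, and the month-on-month comparison tells you whether you are moving in the right direction.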
Don’t be vain
When vanity metrics are elevated to the highest-order problems, they detract from the mission of the company. Losing perspective on who your customer is and what they need can lead an organisation to lose its way and ultimately fail, as happened to the crowdsourcing innovation startup Quirky in 2015.
“If you measure something and it’s not attached to a goal, in turn changing your behaviour, you’re wasting your time.” Alistair Croll and Benjamin Yoskovitz, Lean Analytics
One thing at a time
In the same book, Croll and Yoskovitz also introduce the concept of the One Metric That Matters (OMTM). The OMTM is the north star, the guiding light of the team. There are many things you can track. Recording and reviewing different measures is easier than ever with developer tooling and analytics software. But what really moves the needle?
What’s the one thing that you can measure that will tell you that you are on track to achieve your desired outcome?
For Spotify, the OMTM might be the number of minutes spent listening to content.
I once worked on a product which had consistent traffic, but only a 9% conversion rate. In this case, conversion was defined as a customer creating an account AND taking a further action. By focusing on removing friction in the account-creation flow, the team increased the conversion rate to over 25%, and it stayed there, even though the number of unique visitors didn’t change a great deal over time. If we had focused on increasing site visits, we would never have solved for adoption.
There can’t be only one?
There may not be only one metric that matters in all situations. For example, in a two-sided marketplace, you’re likely to need a metric for each side of the market. Equally, some businesses may need to consider more than one outcome per product area. Frameworks such as DORA and SPACE offer metrics and approaches that help you take a balanced approach to measuring developer productivity.
There may not be only one metric that matters, but the number of important measures should be very low.
What about internal measures?
Nobody would argue that you shouldn’t also measure what happens inside product development. There are key metrics that will give you insight into the health of your product development process. You absolutely should care about developer productivity, but checking what time engineers arrive at work, rather than measuring the value they deliver, is misguided. The metrics that matter are those that reflect customer experience and the actions we can take to improve it.
“Capture everything, but focus on what's important.” Alistair Croll and Benjamin Yoskovitz, Lean Analytics
Made to measure
If you want to be more considered about how you measure success in your organisation:
Step back and ask yourself what’s the most important customer behaviour you are trying to drive?
How can you measure progress toward that outcome?
What work would need to be done to enable that outcome or change in customer behaviour?
How could you measure that work (output)?
Start measuring the steps you are taking (output) and the effect they are having (outcome)
Don’t jump straight to instrumentation or automation. Learn what’s important to measure before worrying about how easy it is to measure. In many cases you can measure manually to begin with and optimise data collection later.
In summary
Your ability to focus on measuring the right things will help you lead your organisation in the right direction.