Don't get faster at building the wrong thing
Why time-to-qualification beats time-to-market
The creation of the DORA metrics, with their emphasis on system stability and throughput, gave software teams a scientifically grounded way to measure system health. When your system ships more deployments, with shorter lead times and fewer bugs, your engineers are following good practice.
But what if the needle doesn’t move on any of your customer metrics? Revenue stays flat, satisfaction trends down, retention falls. You’re delivering more features faster than ever before, but you’re shedding customers anyway.
Every successful startup survives because it finds product-market fit before it runs out of cash. Most learn through experimentation and failure before they find their niche.
As organisations grow, the need to experiment is often forgotten. As the cost of scale shows up in heavier process and co-ordination requirements, time-to-market (TTM) becomes one of leadership’s key concerns. This manifests in phrases like ‘we need to go faster’ or ‘behave more like a startup.’ But startups don’t behave the way those leaders describe. Startups understand the race they’re in before they floor the accelerator.
Instead of measuring time-to-market, it’s better to look at time-to-qualification (TTQ). We shouldn’t assume that every idea is going to be a success. We should listen to, and be guided by, user and customer feedback. Looking exclusively at TTM can mean the organisation focuses on internal goals (what matters to it) rather than user value (what matters to its customers). A once-thriving experimental organisation becomes a feature factory. Product bloat and enshittification follow.
WTF is TTQ?
Time-to-qualification is the time it takes to qualify an idea. You don’t need to build the complete solution. You only need to build something big enough to learn if there’s an appetite for the solution. If there is, that’s when you invest in a scalable, reliable solution for the broader market.
Examples of qualification strategies include:
A/B testing: testing two versions of a solution and seeing which gathers more positive feedback.
Fake front door: gathering signatures or emails that express interest in a solution you haven’t launched yet.
Wizard of Oz: building a manual process that looks to the user like it is fully automated.
Each of these requires little up-front technical investment, and each should focus on a narrow question drawn from your product strategy. For example:
Will a customer pay (more) for this product capability?
Does this open opportunities in a new market segment?
Does this drive traffic to a value-creating part of my site?
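For the A/B testing strategy above, ‘which gathers more positive feedback’ is ultimately a statistics question: is the difference between the two variants real, or just noise? A minimal sketch, using a standard two-proportion z-test and entirely hypothetical traffic numbers, might look like this:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that A and B convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Hypothetical numbers: 2,400 visitors per variant,
# variant A converts 120 times, variant B converts 165 times.
z, p = two_proportion_z(120, 2400, 165, 2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05: the lift is unlikely to be noise
```

The point isn’t the statistics machinery; it’s that a morning’s work like this can tell you whether a variant deserves further investment, long before you build the production version.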
These are the kinds of activities that take place in the Potential phase of the PandA framework. It’s important to recognise that not all ideas are winners. A focus on time-to-qualification makes sure that the losers fail fast.
Let’s explore a fictional example of how this works in practice.
A worked example
With the market seemingly demanding AI features everywhere, the team at P&A Utilities considered building an AI-powered energy forecasting dashboard that predicts customer usage patterns 90 days out. Their hypothesis was that customers would pay for this service and that it would incentivise more detailed forecasting. Sales forecast a €2 million increase in subscriptions over the following 12 months.
The team didn’t have much machine learning experience, and were concerned that they could get derailed by the volume of work and learning they would have to do to support this new capability. Instead of diving straight into six months of work, they decided to qualify the opportunity, facing down pressure from Sales and senior stakeholders to get on with delivering as soon as possible.
The team ran a Wizard of Oz experiment. They manually created 90-day forecasts for five clients using existing data and a spreadsheet, packaging it as a ‘beta preview’ of an upcoming premium feature.
After four weeks, usage data showed clients looked at the forecasts once, then ignored them. Customer interviews revealed that clients found 90-day forecasts interesting but not actionable. What they actually needed were 14-day forecasts with specific recommendations on when to shift high-demand operations to off-peak hours, or when the weather was likely to drive market prices higher within-day, so they could shift to onsite generation.
The team killed the original plan and pivoted to building short-term, actionable recommendations. Rather than wasting six months building the wrong thing, they qualified the idea in a month, then built the right solution in another eight weeks. The company got to market faster with a valuable feature and saved money and time in engineering costs.
Learning at speed
Time-to-market devalues learning. It assumes that whatever the organisation has decided to build will deliver value. There’s no need for product research or for experimentation if all you care about is getting to market quickly.
This emphasis on rapid delivery can result in taking shortcuts on quality. In a world of date-driven development, important investments in scalability and testing are postponed, becoming technical debt. Servicing this debt erodes the team’s ability to deliver quickly. It’s ironic that a focus on rapid development can turn into the very thing that prevents quick delivery over time.
Time-to-qualification flips this on its head. You can still look to optimise processes and reduce waste. But instead of emphasising faster delivery, you’re optimising for solving the right problems for your customers and users. Not pursuing unpromising capabilities is rewarded, both through a lack of technical debt, and in a keen focus on user value.
So why don’t more companies adopt this approach? There’s a reflexive assumption that anything other than a focus on TTM will slow them down.
Scream if you wanna go faster
I was once talking about this need for experimentation and learning with a VP of Engineering, and he said, ‘I can’t go to the C-suite and tell them we have to slow down to speed up. I’ll be laughed out of the room.’
This is a fundamental misreading of what emphasising TTQ does. You’re not slowing down to speed up. You’re optimising for learning. You are truly behaving like a startup, delivering only the capabilities your customers care about.
This is analogous to what good Product Managers do all the time. You don’t build what the customer asks for. You build the thing that will solve their problem.
Time-to-market should be a follow-on from time-to-qualification. If an experiment shows promising results, being able to capitalise on them quickly is a key determinant of company success.
You need to be fast. You need to outrun the competition. But that only works if you’re in the right race. The most efficient product is the one that doesn’t have unnecessary capabilities clogging up its feature set, obscuring its core value. If you join a race without qualifying first, how do you know if you’re in Formula One or the Demolition Derby?