A Feature is not ‘DONE’ until it’s had IMPACT

Ant Murphy
5 min read · Nov 30, 2023

[Originally posted as part of my regular Newsletter here]

We’ve all been there. We’ve defined an MVP and moved a bunch of functionality to V2, but once we launched, we immediately moved on to the next thing.

V2 never comes.

Often referred to as feature factories or the build trap, many organizations find themselves shipping one feature after the next. Seldom do they cycle back to anything they’ve shipped previously to measure the outcome.

And it’s not any individual’s fault.

We often like to point at stakeholders and contend that “they don’t get it” or “they just want to build their shiny solution,” but as W. Edwards Deming would say:

“A bad system will beat a good person every time.” — W. Edwards Deming

In these organizations it’s often the system that is working against you.

Incentives, performance reviews, management ideology, processes, etc. are geared towards more output, not outcomes.

So how do you beat a ‘bad system’?

You seek to change the system, not the people.

Here’s a hack I’ve used several times now to influence the system:

Add a MEASURE column to your workflow before DONE.

If we can all agree that the outcome is what’s important, not the output (i.e. a feature) then we shouldn’t consider any item of work as ‘done’ until it’s created the desired outcome.

Therefore, nothing moves to DONE until we’ve measured the impact of that work.

You may be thinking: but that’s so simple, it would never work!

It does.

I’ve done this at a handful of companies now, of varying sizes and across different industries, and it’s worked every time!

What you’re creating is a forcing function that acts as a reminder to cycle back and measure the outcome of previous work.

Now, this doesn’t need to be for literally everything. Of course, small tasks and changes might not need this rigor, but anything substantial definitely should.

At first, your MEASURE column will likely pile up. But that’s ok. At some point, the pile will begin to draw attention, even if that’s your attention, and it will act as a reminder to go and measure the impact.

It can also work well to visually show stakeholders. Again, their eyes will be drawn towards it, and they’ll remember, “oh yeah, that’s right. We made that change… I wonder how it’s going?”. Before you know it, that MEASURE column will be prompting them too.

Over time you’ll begin to show that this way of working can work. We can cycle back to things and measure their outcome. That managing outcomes, not only outputs, is possible.

This approach aligns strongly with one of my change principles: “Show, don’t tell.”

You can spend a lot of time and energy trying to convince those around you that we need to measure outcomes, or you can show them!

I’ve always had more success with the latter.


Once you’ve measured the outcome, now what?

Here’s a framework that I use to help determine my next course of action for any new release or experiment.


Depending on how conclusive the evidence is and whether or not you’re achieving your desired outcome, you essentially have four courses of action: Scale, Pivot, Kill, or Preserve.



Scale is the typical course of action.

You have strong evidence that the change is delivering the desired outcome (or surpassing it). Now it’s time to dial it up and scale the change out to more users. Hopefully you don’t release things to everyone at once, since that’s risky; where possible, we want to release to smaller groups and scale up only once we’re confident that the new feature is valuable and driving the impact we hoped it would.

If you’re not there yet, that’s ok; it’s something to work towards. Adding the MEASURE column and beginning to measure impact is the first step and will go a long way towards getting you there.


Pivot is when the evidence is telling you that the idea is not working but it’s not exactly conclusive.

You may have some indication or insight that a tweak or adjustment could resolve the mixed signals.

In these cases, we would ideally want to make some adjustments to get more definitive results and hopefully pivot towards achieving the desired outcome. Starting again from scratch is always more costly, so if there is a chance that some tweaks might yield vastly different results, it’s generally worth trying.


Kill is like Pivot but the evidence is conclusive. There’s a strong indication that it’s not working and therefore it’s not worth pivoting. Instead we might be better off abandoning the idea and trying something else.


Finally we have Preserve. This is something I don’t see Product Managers and companies do enough.

You’re going to have times when the evidence isn’t clear but there is some positive indication. Perhaps the desired outcome is being achieved, but it’s not conclusive that the change you introduced was the cause; or perhaps you’ve only seen a small positive change, but again, nothing conclusive.

In these situations, patience might be our best course of action. We wait it out, gather more data, and see how things develop rather than making a premature decision.
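The four actions boil down to two questions: is the outcome positive, and is the evidence conclusive? Here’s a minimal sketch of that decision matrix in Python; the function name and labels are my own shorthand, not a formal API from the framework:

```python
def next_action(outcome_positive: bool, evidence_conclusive: bool) -> str:
    """Pick the next course of action after measuring a release or experiment.

    outcome_positive:    are we seeing movement toward the desired outcome?
    evidence_conclusive: is the signal strong enough to act on?
    """
    if outcome_positive:
        # Strong positive signal: roll the change out to more users (Scale).
        # Weak positive signal: wait it out and gather more data (Preserve).
        return "scale" if evidence_conclusive else "preserve"
    # Strong negative signal: abandon the idea (Kill).
    # Weak negative signal: tweak, adjust, and re-measure (Pivot).
    return "kill" if evidence_conclusive else "pivot"


# Example: the desired outcome seems to be happening, but we can't yet
# attribute it to the change we shipped.
print(next_action(outcome_positive=True, evidence_conclusive=False))  # → preserve
```

The point of writing it this way is that “we measured the impact” only counts once you can answer both questions; until then you’re still in the MEASURE column.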

Need help with communication or other aspects of product management? I can help in 4 ways:

1) Level up your craft with self-paced deep dive courses on specific topics such as Prioritisation, Stakeholder Management and more.

2) Free templates and guides on Product Pathways.

3) 1:1 Coaching/Mentoring: I work with product people and founders through 1 hour virtual sessions where I help them overcome challenges.

4) Private Workshops and Training: I frequently run private workshops and tailored training courses for product teams globally. Get in touch to talk about your unique training needs.




Subscribe to ‘The PBL Newsletter’ for regular posts on Product, Business and Leadership 👉 https://www.antmurphy.me/newsletter