There’s hidden treasure in tracking and capturing business benefits from change, but it is not always easy and is often neglected
I’ve tried to resist the temptation to start this piece with a gruff and throaty “Aye, me hearties!”, the topic being about searching for hidden treasure, or rather more prosaically, benefits realization.
You would have thought that, after all the hard work of running a project and implementing change, people would be keen to make sure the promised benefits are actually banked. Sadly, no: this is often neglected and just does not happen. There are probably a number of reasons:
- it’s not in the “plan” / budget, and nobody is on the hook for it…
- tracking benefits can be hard and requires discipline to stick with it as things play out over a longer game…
- implementing actual change means changing some ingrained behaviours, and it is easier not to look…
- there was more promise than reality in the proposition and nobody wants to get called out…
- nobody wants their pet project to be starved of funding, so they don’t want anybody looking too closely…
- tidying up is boring and not glamorous…
- “not my job”…
- the project objectives were to deliver a load of features (the what), and the benefits (the why) weren’t clearly defined…
- can’t measure / see the effect of the changes made…
To set some context for the topic, we can define three levels of the business, like this…
The Strategy level guides the direction of the business; Business Change actually makes changes to the business; and the BAU operations layer gets on with actually doing the business, in between being guided and changed. To see where the delivery of benefits actually needs to get tracked, we need to project that out over the “Think-Build-Run” lifecycle of change and operation, thus…
You can see that “X” marks the spot where this should happen, in the top right-hand corner at the intersection where Strategy meets Run, bubbling up from the monitoring of impact at the business change level. Depending on how thin the oxygen is at that exalted level, Strategy types might think they do not have to get their hands dirty, but in business and battle planning terms it makes a lot of sense to check whether you are actually winning or just staggering along from one non-event to the next.
In an ideal world, the benefits tracking would have a gimlet-eyed, CSI-like forensic analysis of the results of the strategy as it plays out, like this…
…and, indeed, sometimes it can be like that, where the change is intimately linked to business performance. Things like company or product revenue, or operating costs, are usually good things to look at, although sometimes you need proxy measures and targets for the effect of the change itself (like conversion rates) which do then track into actual business performance changes.
A quick anecdote from times past about when that link is strong: I had a discussion with an EVP of a global financial services company during a technology investment portfolio prioritization programme. We were discussing him providing a proforma business case and NPV for a project to futz with the FX rates on some of their cards. He just said, "I'm not doing that, I know we'll make sh*tloads of money", and that was that...the project featured high on the priority list.
But, as often as not, tracking the impact is more like looking into a murky fog through the wrong end of a telescope…
…and seeing meaningless confusion (or a donkey).
This is a frequent problem for parts of an organization that are not well connected to customers and revenue, or where the effect is unclear, e.g., when developing enabling infrastructure, where direct business benefits cannot be linked to the change. (Hint: try to get the business front-end on the hook for some actual business benefit for changes like that; it makes the NPV positive, and the whole change becomes more meaningful from a business perspective.)
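To make that hint concrete, here is a minimal sketch of the NPV arithmetic, with entirely hypothetical figures: an enabling infrastructure change viewed as pure cost, versus the same change with the business front-end committed to a benefit on top of it.

```python
# Minimal sketch: how attributing a business benefit to an enabling
# infrastructure change can flip the NPV. All figures are hypothetical.

def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value of cash flows, one per period, starting at t=0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

DISCOUNT_RATE = 0.10  # assumed annual hurdle rate

# Infrastructure-only view: pure cost, no attributed benefit.
infra_only = [-500_000, -50_000, -50_000, -50_000]

# Same change with the business front-end on the hook for a committed
# benefit of ~300k/year, i.e. +250k/year net of the 50k running cost.
with_benefit = [-500_000, 250_000, 250_000, 250_000]

print(f"Infra only:   {npv(DISCOUNT_RATE, infra_only):>12,.0f}")    # negative
print(f"With benefit: {npv(DISCOUNT_RATE, with_benefit):>12,.0f}")  # positive
```

Crude, of course, but it shows why getting a benefit owner attached to an enabling change makes the investment case stand on its own feet.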
The challenge often derives from the measurability of the impact on performance brought about by the change (if that was actually defined, a different challenge, of course). To measure it you need to look at the difference in some metric that should be impacted. That is relatively easy when you have a clear baseline before and can measure performance after the change, or when you can test the effect in the present by parallel testing of systems with and without the feature, like A/B testing for conversion rates with different customer journeys / click-flows on a web-site or mobile app (there is a rough sketch of that measurement after the list below). Those are the first two cases, like this…
…however, the breakdown occurs in the other two cases, where either:
- there is no “before” baseline, which is obviously not good, but may be fixable, or you can perhaps reference external experience to determine the likely impact; or,
- worse, the comparison would be between two possible future paths. This final case is the most difficult, as you would (in theory) have to compare performance between:
- A – How we would have performed on the “Path Not Taken”; compared with
- B – How we actually perform on the path we have taken
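Before getting to that harder case, here is a minimal sketch of the measurable A/B case mentioned above: comparing conversion rates between two parallel customer journeys with a standard two-proportion z-test. The traffic and conversion numbers are invented for illustration.

```python
# Minimal sketch of the measurable A/B case: comparing conversion rates
# on two parallel customer journeys. Standard library only; the traffic
# numbers are hypothetical.
from math import sqrt, erfc

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test: is journey B's conversion rate really different?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided, normal approximation
    return p_a, p_b, z, p_value

# Hypothetical split: existing click-flow (A) vs. the changed one (B).
p_a, p_b, z, p = two_proportion_z(conv_a=420, n_a=10_000, conv_b=501, n_b=10_000)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z={z:.2f}  p={p:.4f}")
```

With these made-up numbers the uplift is statistically significant (p ≈ 0.006), so the proxy measure can be banked and tracked into the business performance numbers.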
The case of the “Path Not Taken” is quite typical of commercial changes to the technology development process itself, something of a meta-topic perhaps (changing the change process…). Technology development has a huge discretionary element and there are many ways to waste money with it, so an essential question like “Are we doing technology development more effectively and efficiently now?” requires discerning analysis and thought.
However, to start with the basics across the broad spectrum of change, you need to set things up for success. One of the foundations is to start thinking in terms of outcomes, and handing out the investment funds against outcomes rather than budgets by department/cost center or whatever. Delivering outcomes is typically multi-functional and cuts across organizational lines. There are a number of key elements to the recipe, which you can see below.
Having measurable outcomes is fundamental to success in having real benefits to track, and the level at which they are defined sets the scope and breadth of their impact across the business. The higher up the performance hierarchy the more likely they will directly impact the fundamental performance of the business…
Outcomes need to be specified properly at the start of any change journey, something like this:
- The aim is to define the achievement of a specific improvement brought about by a series of coordinated actions within a near-term scope, say, up to 3 months out;
- Outcomes should generally be SMART (Specific, Measurable, Actionable, Relevant, Timebound), or whatever your version of this acronym happens to be; “measurable” is not negotiable though. They are specific, reasonably sized/feasible incremental beneficial results we are looking to achieve which contribute to the higher-level business goals;
- They need to be focused on a specific improvement, so the wording needs to be stricter in definition, and should be in this syntax: “<Improvement verb> <some Attribute(s)> of <some Thing(s)> by <some Target Measure(s)>” (there is a toy example of checking this after the list), using
- <Improvement verbs> like: Improve, Streamline, Optimise, Tune, Reduce, Accelerate (but not Do, Create, Assess, Evaluate, Analyse, Synthesise, Perform, Enumerate, Distribute, Communicate &c – these are “doing” action words which are part of the “How”)
- <Attributes> like: Quality, Accuracy, Effectiveness, Awareness, Speed, Timeliness, “Fit”, Cost, Usability, Accessibility, Reliability…
- <Things> are whatever entities of which we need to improve the attributes
- Specific <Target measure(s)> of success (percentage, absolute value, etc.) of the metric (or metrics) meaningful for the improvement of the Attributes
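As a toy illustration of that syntax, here is a small sketch of a “linter” for outcome statements. The verb lists come straight from the bullets above; the pattern and the helper itself are illustrative assumptions, not a prescribed tool.

```python
# A toy linter for the outcome syntax above:
#   "<Improvement verb> <Attribute(s)> of <Thing(s)> by <Target measure(s)>"
# Verb lists are from the text; everything else is an illustrative assumption.
import re

IMPROVEMENT_VERBS = {"improve", "streamline", "optimise", "tune",
                     "reduce", "accelerate"}
DOING_VERBS = {"do", "create", "assess", "evaluate", "analyse", "synthesise",
               "perform", "enumerate", "distribute", "communicate"}

PATTERN = re.compile(r"^(?P<verb>\w+)\s+(?P<attribute>.+?)\s+of\s+"
                     r"(?P<thing>.+?)\s+by\s+(?P<target>.+)$", re.IGNORECASE)

def lint_outcome(statement: str) -> list[str]:
    """Return a list of complaints; an empty list means the statement passes."""
    m = PATTERN.match(statement.strip())
    if not m:
        return ["does not match '<verb> <attribute> of <thing> by <target>'"]
    problems = []
    verb = m.group("verb").lower()
    if verb in DOING_VERBS:
        problems.append(f"'{verb}' is a 'doing' word (part of the How)")
    elif verb not in IMPROVEMENT_VERBS:
        problems.append(f"'{verb}' is not a recognised improvement verb")
    if not re.search(r"\d", m.group("target")):
        problems.append("target measure has no number, so is not measurable")
    return problems

print(lint_outcome("Reduce cost of customer onboarding by 15%"))  # [] – passes
print(lint_outcome("Create a new onboarding portal"))             # complaints
```

The point is not the regex; it is that a well-formed outcome statement is mechanical enough to check, which is a good sign the “measurable” bar has been met.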
There are many ways to define improvements; here are a few examples…
When it comes to setting up the processes to track benefits, then we can slightly redraw the business level-lifecycle continuum from figure 2, and pick out the feedback loops…
The outermost Strategy & Investments loop (loop 1) is all about investing in the right things and typically runs on a quarterly cycle. The Business Change inner loops are all about prioritizing work to deliver the right features (loop 2) and delivering quality code (loop 3), which run at bi-weekly and greater-than-daily frequencies respectively (or multi-quotidian, if you want to look that up in the dictionary). The Operations continuous improvement loop (loop 4) feeds upwards into the higher levels.
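If it helps to pin the cadences down, the four loops can be captured as a small data structure, say as the seed of a review calendar. This is purely illustrative; the names and frequencies simply restate the description above.

```python
# Purely illustrative: the four feedback loops and their cadences as data,
# e.g. as the seed for a review calendar. Nothing here is a prescribed tool.
from dataclasses import dataclass

@dataclass
class FeedbackLoop:
    number: int
    name: str
    question: str
    cadence: str

LOOPS = [
    FeedbackLoop(1, "Strategy & Investments",
                 "Are we investing in the right things?", "quarterly"),
    FeedbackLoop(2, "Business Change: prioritization",
                 "Are we delivering the right features?", "bi-weekly"),
    FeedbackLoop(3, "Business Change: delivery",
                 "Are we delivering quality code?", "multi-quotidian"),
    FeedbackLoop(4, "Operations: continuous improvement",
                 "What is the run telling us?", "continuous, feeding upwards"),
]

for loop in LOOPS:
    print(f"Loop {loop.number}: {loop.name} ({loop.cadence})")
```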
You can translate that conceptual model into an actual time-scape which integrates the loops, so that you have a rational approach to developing the changes needed, with quality delivery and proper tracking of the benefits, to support a regular investment review that can accelerate or throttle funding…
The features of this particular time-scape are:
- Regular major releases of functionality, with content prioritized according to business need, perhaps assembled from the outputs of a number of sprints
- Periodic feedback from the “market” (however that is defined), both on features and benefits delivery, into investment review
- Opportunity to reprioritize deliveries to market to focus on higher value features/service elements
- Opportunity to stop spending on an individual stream at any point
Obviously you can roll your own version, and also rework it for more generic business applicability. So…
(gruff and throaty) Aye me hearties, there’s treasure to be had!