Continuous Investment Review

An important part of the Governance function is “Continuous Investment Review”: the process of checking work progress (across projects, programmes and other delivery methods) against forecast budgets and extrapolated costs to decide whether the business should continue to fund a piece of work.

This continue, change or stop decision is a chance to apply feedback to work based on actual output, spend so far and projected spend.

Benefit (ROI) = Business Value – Cost of Implementation

Waiting until the end of a piece of work to judge whether it was worth it or not places value at the end of the lifecycle. Business Value is the most important thing, so checking it regularly can prevent organizations from making significant mistakes.

One of the most valuable decisions a software business can make is to stop a failing project.
Failing projects are a learning experience for everyone involved and can happen even to the most perfect development team. Estimates are only guesses, and software development is hard, and frequently complex. Early failure is success. Late failure is to be prevented.
Although early failure should be encouraged, and even incentivized, it must not be more attractive than project success, otherwise the organization is tuned to perpetual failure. Incentives for delivering Business Value must outweigh early-failure incentives. The number of projects stopped early should be considered a positive measure. The amount of variance from forecasts that triggers exception reporting should be calibrated according to the organization's financial risk appetite. Note that project estimates are expected to vary significantly in the early stages of work, until the portfolio, programme or project has been de-risked – even then some variance is normal.
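Variance-triggered exception reporting can be sketched in a few lines. The function names and the 15% threshold here are illustrative assumptions, not part of HSD; the only point is that the threshold is a single calibrated number reflecting risk appetite.

```python
# Sketch: flag pieces of work whose spend variance from forecast exceeds
# a threshold calibrated to the organization's financial risk appetite.
# Function names and the 15% default threshold are illustrative assumptions.

def variance_from_forecast(forecast: float, actual: float) -> float:
    """Relative variance of actual spend vs. forecast (e.g. 0.3 = 30% over)."""
    return (actual - forecast) / forecast

def needs_exception_report(forecast: float, actual: float,
                           risk_appetite: float = 0.15) -> bool:
    """True when spend variance exceeds the calibrated risk-appetite threshold."""
    return abs(variance_from_forecast(forecast, actual)) > risk_appetite

# A project forecast at 100k that has spent 130k varies by 30% -> exception.
print(needs_exception_report(100_000, 130_000))  # True
print(needs_exception_report(100_000, 108_000))  # False
```

A more risk-tolerant organization would simply pass a larger `risk_appetite`; speculative, not-yet-de-risked work might use a wider threshold than commoditized work.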
Continuous Investment Review should be transparent and public. By using a common Business Value model (POFL), decision-making around stopping a piece of work, changing its funding, or moving it along the Commoditization Scale (and therefore within the Hybrid Dynamic Model for ways of working and organization) becomes relatively easy.

Return on Investment (ROI) is based on a simple formula:

Return on Investment = Business Value – Cost of Implementation

Where business value is qualified in terms of the POFL tests, and cost of implementation is the cost of the product (and/or maintenance agreements) as well as the internal Delivery cost of deployment and support (and configuration/customization costs). HSD helps organizations determine cost of implementation by illuminating all of the work related to a Portfolio Request.
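The ROI formula and its cost components can be made concrete as a small sketch. The class and field names below are illustrative assumptions; they simply mirror the cost elements just listed.

```python
# Sketch of the ROI formula: Business Value - Cost of Implementation.
# Names and figures are illustrative assumptions, not an HSD-defined API.
from dataclasses import dataclass

@dataclass
class CostOfImplementation:
    product: float        # product cost and/or maintenance agreements
    delivery: float       # internal Delivery cost of deployment and support
    configuration: float  # configuration/customization costs

    def total(self) -> float:
        return self.product + self.delivery + self.configuration

def return_on_investment(business_value: float,
                         cost: CostOfImplementation) -> float:
    """ROI = Business Value - Cost of Implementation."""
    return business_value - cost.total()

cost = CostOfImplementation(product=50_000, delivery=30_000,
                            configuration=20_000)
print(return_on_investment(150_000, cost))  # 50000
```

Where business value is not expressed financially, the same structure still works: the subtraction is replaced by the subjective question "is the benefit worth the cost?".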

Where business value is not defined financially (e.g. in non-commercial organizations) we end up with a simple “Business Value vs. Cost of Implementation” equation, which offers Business Leaders a simple decision: is the benefit worth the cost? We recommend that a wide stakeholder group is used to answer that question, rather than just the Business Leaders, so that decisions have wide support as they are made rather than needing to gain support afterwards – especially when those decisions relate to tools, environments or COTS selection.

Similar to relegation tables in sporting leagues, we recommend transparently publishing Benefit (ROI) data, cost, forecast and current funding decisions for each piece of work – literally publishing a league table of pieces of work and their value, cost and progress. The top performers are safe; the lowest performers can be considered at risk in the “relegation zone”. When “relegation” comes, in the form of stopping or reducing funding, no one will be surprised even if the news is disappointing. This process also has the added benefit of gamifying project performance, adding competitive pressure for teams to regularly demonstrate progress towards new or improved business value.
The following is an example Continuous Investment Review “Relegation table”. Note that this table shows the Business Value indicators and answers the Big 4 governance questions. In terms of financials, we track how much has been spent, how much more needs to be spent and so the total estimated cost. We’ve included the traditional initialisms for these in the table below but only recommend using them if you’re already familiar with them.
Using the example table below (based on real information from an anonymized client) we can make decisions regarding each investment. The narrative explaining the table follows:
“Proj 1” is a piece of work aimed at feeling good and looking good. We know it’s a marketing-led piece of work that will improve our reputation with partners and make the workforce happier. It’s overspending a little but not significantly (63% budget for 60% scope) and is pretty healthy. It’s not focused on problems or opportunities but it’s something we want to do – Continue.
“Big Programme” is doing some good work against solving the problem it’s aimed at, but isn’t hitting its “feeling good” value indicators. We may need to look into that. It’s overspending again, a bit more significantly this time, and it’s a pretty expensive programme. Talking to people, we know that this is because it’s a high-complexity programme. It’s healthy – Continue.
“Quick Hack 5” is about seizing a new opportunity. It’s cheap and cheerful, and the team are doing a great job – this experiment looks like it’s working. It’s even over-delivering in terms of scope for budget, but it’s so speculative we always knew scope was pretty variable. The team need to celebrate their success a bit more, and tighten up a little on their self-improvement. It’s a no-brainer – Continue.
“Experiment 2” is not doing too well against its problem, although it’s making some good progress against new opportunities. It’s overspending slightly, but not significantly. Because its working practices aren’t looking too healthy and the progress against the problem isn’t great, it’s at risk due to being relatively expensive. This might not work – At Risk.
“Complex Prog 2” – This was always going to be difficult; it’s very complex. This is a big, expensive programme, and it’s spent quite a lot already (20% budget), but it has not achieved much in terms of scope delivery (15%) and isn’t doing well at solving its problem, or its new opportunity. It’s not hitting the feel-good and look-good elements very well either. Recent investigation shows a lot of cross-team problems, probably due to the difficulty and lack of progress. This has been a really valuable piece of work that’s allowed us to fail fast and learn some important lessons. Thanks to everyone involved; the time has now come to stop this Programme. We’ll look to reinvent another solution to the problem – ideas are welcome at the Fast Fail party – Stop.
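The relegation-table ranking can be sketched in code. Only “Proj 1” and “Complex Prog 2” have budget/scope figures in the narrative above, so only those appear here; the ranking metric (scope delivered per budget spent) is an illustrative assumption rather than the HSD-defined calculation.

```python
# Sketch of a Continuous Investment Review "relegation table".
# Budget/scope figures come from the narrative above; the efficiency
# metric (scope % / budget %) is an illustrative assumption.

works = [
    {"name": "Proj 1", "budget_pct": 63, "scope_pct": 60},
    {"name": "Complex Prog 2", "budget_pct": 20, "scope_pct": 15},
]

for work in works:
    # > 1.0 means delivering more scope than budget consumed; < 1.0 is overspend.
    work["efficiency"] = work["scope_pct"] / work["budget_pct"]

# Sort best-first; the bottom of the table is the "relegation zone".
table = sorted(works, key=lambda w: w["efficiency"], reverse=True)

for rank, work in enumerate(table, start=1):
    zone = "relegation zone" if rank == len(table) else "safe"
    print(f"{rank}. {work['name']}: {work['efficiency']:.2f} ({zone})")
```

A real table would also carry the POFL value indicators and health data described above; a single efficiency number is deliberately too crude to make a stop decision on its own.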
Can you be so honest about your projects and programmes? If not, why not? Good governance is based on honesty.
During development there is sometimes a lag between early releases and meaningful business value – this situation requires business leaders to make subjective judgements, and should not be avoided. Unconfirmed business value is more likely in speculative, inventive work on the Commoditization Scale. Leaders are empowered to make investment decisions; doing so publicly means their personal credibility will be based on their decision making, incentivizing them to make good, well-communicated decisions. If your Leaders are too scared to make public investment decisions, then you have the wrong leaders.
The input to Continuous Investment Review is initially Portfolio Requests and/or Business Cases. Once pieces of work are in execution, they will need to interface with this process by providing this data on a continuous basis. We recommend a monthly report to the entire organization, as above, perhaps on internal social media platforms.
Continuous Investment Review is especially important in governing Software Engineering Contracts.

In a software context a contract is a written or spoken agreement that is intended to be enforceable by law between a customer and a supplier on how a product or part of the software development process will be delivered.