An Estimate is an approximate calculation or judgement of the value, number, quantity, or extent of something.
In business, we often wish to estimate software complexity, effort and timescales to help with business planning and to feed into start/stop decisions as part of Portfolio Selection and Continuous Investment Review. Early estimates are little more than guesses: they should always be presented with uncertainty indicators and should never be treated as accurate.
At the beginning of a piece of work we typically know very little about it. Over time, as knowledge is gained, uncertainty decreases. This is sometimes described as the “Cone of Uncertainty”, and research has shown that the level of uncertainty in the early parts of a project or programme means that estimates are often out by a factor of 4. Iterative and agile methods were in part designed to refine early estimates as work progresses, narrowing the cone of uncertainty by feeding actuals back into future estimates, although in some agile methods this focus on risk has been lost. In contrast, waterfall approaches ignore the cone of uncertainty and assume that early estimates (made before anything is really known about the project) are valid, despite decades of evidence that this is wrong.
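The factor-of-4 variance cited above can be made concrete with a small sketch. This is illustrative only (the function name and the symmetric low/high treatment are assumptions, not part of any formal model): a single point estimate at the wide end of the cone really implies a range.

```python
def uncertainty_range(point_estimate, factor=4.0):
    """Return the (low, high) range implied by a cone-of-uncertainty factor.

    A factor of 4, the variance cited for very early estimates, means the
    actual outcome could plausibly fall anywhere between estimate / 4 and
    estimate * 4.
    """
    return point_estimate / factor, point_estimate * factor

# An early 12-month estimate really says "somewhere between 3 and 48 months".
low, high = uncertainty_range(12)
print(f"{low:.0f} to {high:.0f} months")
```

Presenting the range rather than the point value makes the guesswork visible to decision makers instead of hiding it.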
Highly complex systems of systems, and highly speculative inventive work, have even greater estimation variance, to the extent that estimates become meaningless. All software is unique; otherwise we shouldn’t be building it. Too often, organizations treat early estimates, which are little more than guesses, as facts. Providing Dilbert with reams of material, estimates are often treated, extremely dysfunctionally, as deadlines. Adding false accuracy to estimation through estimation processes, statistical models and tools actually makes the problem worse by implying a precision that simply doesn’t exist.
A common dysfunction in organizations which adopt iterative or agile approaches at team level is a failure to feed back refined estimates as knowledge is gained and risks occur or are mitigated. As estimates are refined, the value proposition for a Portfolio Request or Business Case option changes, and so a project that initially seemed like a valid investment may become an invalid one, or its Build or Buy decision might be invalidated as risks are uncovered. Continuous Investment Review, part of Governance in Holistic Software Development, provides a mechanism to monitor and react to these changes. The refinement of estimates must be built into any relevant Software Development Contracts to avoid derailing Product Delivery.
We recommend that estimates are always accompanied by an indication of uncertainty such as:
  • Confident: We understand the process, human and technical aspects. We have good numerical assessments based on past evidence. Although we can’t predict the future, we are confident in our estimates.
  • Reasonable: We have some confidence in our analysis, but we can expect numbers to change as we learn more. We don’t expect our estimates to change significantly enough to affect start/stop decisions.
  • Cautious: Although we don’t expect significant change, there is sufficient risk that our estimates could change significantly as we learn more about the work.
  • Uncertain: We have little confidence in our estimates as there is high complexity, or significant unknowns. Our estimates may change significantly as we learn more.
In every case, we recommend iteration and re-estimation.
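One lightweight way to make the uncertainty indicator inseparable from the number is to model an estimate as a value paired with a level from the scale above. This is a minimal sketch (the type names and fields are illustrative assumptions, not part of HSD):

```python
from dataclasses import dataclass
from enum import Enum

class Uncertainty(Enum):
    """The four-level certainty scale, from Confident down to Uncertain."""
    CONFIDENT = "Confident"
    REASONABLE = "Reasonable"
    CAUTIOUS = "Cautious"
    UNCERTAIN = "Uncertain"

@dataclass
class Estimate:
    description: str
    value: float              # e.g. effort in person-months
    uncertainty: Uncertainty  # mandatory: no estimate without an indicator

    def __str__(self):
        return f"{self.description}: {self.value} ({self.uncertainty.value})"

print(Estimate("Initial build effort", 18.0, Uncertainty.UNCERTAIN))
```

Because the indicator is a required field, an estimate can never be reported as a bare number, which is the point of the scale.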
Our certainty scale is based on the work of Sir David John Spiegelhalter, a British statistician and Winton Professor of the Public Understanding of Risk in the Statistical Laboratory at the University of Cambridge.
We recommend that Leaders substitute the word “guess” for the word “estimate” when thinking about software, and accept that estimates will change.
Estimates should not be demanded as a matter of course; only ask for estimates when they will inform decision making. The act of estimating has no value in itself. High-level estimates are necessary to justify investment decisions, but low-level estimates, especially around individual tasks, are often useless.