"Experience, Empiricism, Excellence"

A Risk is any potential threat to success. 

Unexpected events happen frequently during knowledge work. Tracking the ones we think might happen gives us an advantage: we can act to avoid a risk, or roll out pre-prepared plans if it does occur. Risks may be technical, business, quality, resource or people related, requirements related, competitive, political, or of other kinds.
 
We do not recommend exhaustive risk categorization; instead we recommend tagging risks with categories. Tagging allows an organization to look at risks from different angles without presupposing their nature, which can otherwise lead to some risks being missed.
Risks should be tracked by every team and at the business leadership level. Risks should always have associated mitigation plans; otherwise there is no point in tracking them. Mitigation Plans provide a thought-through response that teams can implement when a risk becomes an issue.
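As a minimal sketch, a risk register entry tagged with categories and carrying a mitigation plan might look like the following Python; the Risk type, its field names and the risks_tagged helper are illustrative assumptions, not HSD artefacts.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """Illustrative risk register entry; field names are assumptions, not HSD terms."""
    description: str
    tags: set[str] = field(default_factory=set)   # e.g. {"technical", "business"}
    probability: float = 0.5                      # perceived likelihood, 0.0 to 1.0
    impact: float = 0.0                           # e.g. cost in person-days if it occurs
    mitigation_plan: str = ""                     # thought-through response if the risk becomes an issue

def risks_tagged(register: list[Risk], tag: str) -> list[Risk]:
    """Look at the register from one angle without presupposing the nature of the risks."""
    return [r for r in register if tag in r.tags]

register = [
    Risk("Key supplier may miss delivery", {"business", "resources"},
         probability=0.3, impact=40, mitigation_plan="Pre-agree a fallback supplier"),
    Risk("New framework may not scale", {"technical"},
         probability=0.5, impact=60, mitigation_plan="Spike a load test early"),
]
print([r.description for r in risks_tagged(register, "technical")])
```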
 
Not all risks will be identified before they happen, so organizations need to be able to deal with unplanned events rapidly.
 

Many risks will be within a project's, programme's or other delivery team's sphere of influence, and actions may be taken to remove or address these risks. Some risks will be beyond the immediate sphere of influence; these should be identified and escalated to Business Leaders via the Bubble Up process or some other escalation route.

Risks may be within the sphere of influence of the business, so mitigating action can be taken. However, some risks may not be within the business's sphere of influence at all, being driven instead by external factors. It is still essential that these risks are identified and escalated, as they may be important in influencing the strategic direction of the business.

Risk Impact

Risk Impact is the effect of a risk actually occurring and becoming an issue. Impact is best quantified in terms of resource usage (time/money/people) and/or effect on Business Value (POFL). The risk impact and perceived probability help us assess which risks to focus our efforts on. During Portfolio Selection and Continuous Investment Review, Risk Impact may be useful in helping to choose an option to take forward from Business Cases and to inform start/stop decisions. Risk Impact will also inform Build or Buy decisions.
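One common, illustrative way of combining perceived probability with impact when deciding where to focus is risk exposure (probability multiplied by impact). HSD does not mandate this formula; the sketch below, including its example figures, is purely illustrative.

```python
def exposure(probability: float, impact: float) -> float:
    """Expected cost of a risk: perceived probability times impact (e.g. person-days)."""
    return probability * impact

# Rank some example risks by exposure to decide where to focus mitigation effort.
risks = {
    "Key supplier may miss delivery": (0.3, 40),
    "New framework may not scale": (0.5, 60),
}
for name, (p, i) in sorted(risks.items(), key=lambda kv: exposure(*kv[1]), reverse=True):
    print(f"{name}: exposure {exposure(p, i):.1f} person-days")
```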

Estimation uncertainty itself represents schedule risk, since estimates will be highly inaccurate initially, from Business Case and Portfolio Request level through to early Programme and Project estimates.

Estimates

An Estimate is an approximate calculation or judgement of the value, number, quantity, or extent of something.
In business, we often wish to estimate software complexity, effort and timescales to help with business planning and to feed into start/stop decisions as part of Portfolio Selection and Continuous Investment Review. Estimates are little more than guesses and should always be presented with uncertainty indicators. Estimates should never be treated as accurate.
 
At the beginning of a piece of work we typically know very little about it. Over time, as knowledge is gained, uncertainty decreases. This is sometimes described as the "Cone of Uncertainty", and research has shown that the level of uncertainty in the early parts of a project or programme means that estimates are often out by a factor of 4. Iterative and agile methods were in part designed to refine early estimates as work progresses, reducing the cone of uncertainty by feeding actuals back into future estimates, although in some agile methods a focus on risk has been lost. In contrast, waterfall approaches ignore the cone of uncertainty and assume that early estimates (made before anything is really known about the project) are valid – despite decades of evidence that this is wrong.
Cone of Uncertainty
 
High-complexity systems of systems, and highly speculative inventive work, have even more estimation variance, to the extent that estimates become meaningless. All software is unique; otherwise we shouldn't be building it. Too often organizations treat early estimates, which are little more than guesses, as facts. Worse, estimates are often treated, extremely dysfunctionally, as deadlines, providing Dilbert with reams of material. Adding false accuracy to estimation by following estimation processes, using statistical models and tools actually makes the problem worse by implying accuracy that simply doesn't exist.
 
A common dysfunction in organizations which adopt iterative or agile approaches at team level is a failure to feed back refined estimates as knowledge is gained and risks occur or are mitigated. As estimates are refined, the value proposition for a Portfolio Request or Business Case option changes, and so a project that initially seemed like a valid investment may become an invalid investment, or its Build or Buy decision might be invalidated as risks are uncovered. Continuous Investment Review, part of Governance in Holistic Software Development, provides a mechanism to monitor and react to these changes. The refinement of estimates must be built into any relevant Software Development Contracts to avoid derailing Product Delivery.
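As an illustrative sketch only (HSD does not prescribe a formula for Continuous Investment Review), a simple check might compare an option's refined cost estimate against its expected Business Value and flag it for a start/stop decision; the function name, parameters and threshold below are assumptions.

```python
def needs_investment_review(expected_value: float,
                            refined_cost_estimate: float,
                            minimum_return_ratio: float = 1.2) -> bool:
    """Flag an option for a start/stop decision when refined estimates erode its value proposition.

    All parameters are illustrative: value and cost are in the same currency,
    and minimum_return_ratio is an assumed organizational threshold.
    """
    return expected_value < refined_cost_estimate * minimum_return_ratio

# An option that looked valid at a cost of 100 against a value of 150 may become
# invalid as estimates are refined upwards.
print(needs_investment_review(expected_value=150, refined_cost_estimate=100))  # False - still viable
print(needs_investment_review(expected_value=150, refined_cost_estimate=140))  # True - trigger review
```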
 
We recommend that estimates are always accompanied by an indication of uncertainty such as:
  • Confident:  We understand the process, human and technical aspects. We have good numerical assessments based on past evidence. Although we can’t predict the future, we are confident in our estimates.
  • Reasonable: We have some confidence in our analysis, but we can expect numbers to change as we learn more. We don’t expect our estimates to change significantly enough to affect start/stop decisions.
  • Cautious: Although we don’t expect significant change, there is sufficient risk that our estimates could change significantly as we learn more about the work.
  • Uncertain: We have little confidence in our estimates as there is high complexity, or significant unknowns. Our estimates may change significantly as we learn more.
In every case, we recommend iteration and re-estimation (see the sketch below).
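A minimal sketch of attaching one of these indicators to an estimate, presented as a range rather than a single number; the multipliers are illustrative assumptions (only the factor-of-4 band at the uncertain end echoes the cone-of-uncertainty research above), not values defined by HSD.

```python
from enum import Enum

class Certainty(Enum):
    """Uncertainty indicators to accompany every estimate; multipliers are illustrative."""
    CONFIDENT = 1.25
    REASONABLE = 1.5
    CAUTIOUS = 2.0
    UNCERTAIN = 4.0   # early estimates are often out by a factor of 4

def estimate_range(point_estimate: float, certainty: Certainty) -> tuple[float, float]:
    """Present an estimate as a range rather than a falsely accurate single number."""
    return point_estimate / certainty.value, point_estimate * certainty.value

low, high = estimate_range(100, Certainty.UNCERTAIN)
print(f"100 days, uncertain: somewhere between {low:.0f} and {high:.0f} days")
```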
 
Our certainty scale is based on the work of Sir David John Spiegelhalter, a British statistician and Winton Professor of the Public Understanding of Risk in the Statistical Laboratory at the University of Cambridge.
 
We recommend that Leaders substitute the word "guess" for the word "estimate" when thinking about software, and accept that estimates will change.
 
#NoEstimates
 
Estimates should not be demanded as a matter of course; only ask for estimates when they will inform decision making. The act of estimating has no value in itself. High-level estimates are necessary to justify investment decisions, but low-level estimates, especially around individual tasks, are often useless.

Risk Driven Lifecycle

We recommend adopting a risk driven lifecycle, feeding back refined estimates and risk knowledge throughout the business. A risk driven lifecycle involves doing the risky things first, to understand those risks and inform continued work on the programme or project, essentially reducing the cone of uncertainty as quickly as possible. The Standard Milestones in Holistic Software Development build the Risk Driven Lifecycle into programmes and projects where they exist, but a risk-driven approach can be taken without using Milestones by simply attacking risky things first.
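A minimal sketch of "attacking the risky things first": order candidate work by the risk exposure (probability times impact) it would retire. The WorkItem type and the example figures are illustrative assumptions, not HSD artefacts.

```python
from dataclasses import dataclass

@dataclass
class WorkItem:
    """Illustrative backlog item, linked to the total exposure of the risks it would retire."""
    name: str
    risk_exposure_retired: float = 0.0

def risk_driven_order(backlog: list[WorkItem]) -> list[WorkItem]:
    """Do the work that retires the most risk exposure first."""
    return sorted(backlog, key=lambda item: item.risk_exposure_retired, reverse=True)

backlog = [
    WorkItem("Build settings screen"),                         # low risk, deliverable any time
    WorkItem("Spike framework load test", 30.0),               # attacks the top technical risk
    WorkItem("Prototype integration with legacy system", 24.0),
]
for item in risk_driven_order(backlog):
    print(item.name)
```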

A Milestone is a significant event or decision point in the lifetime of a project.

 

Holistic Software Development uses a standard set of layered Milestones to drive the Risk-Driven Lifecycle. Milestones are a useful way of tracking top-level guesses about when scope will be delivered. Like estimates, they should not be treated as deadlines.

These risks may relate to requirements or technical uncertainty within the project's sphere of influence. Delivery teams can identify activity that will reduce or eliminate the top risks; this may involve building technical solutions to address specific technical challenges (prototyping or spiking). The result of early risk mitigation is increased understanding of overall scope, proven architecture (from spiking) and elaboration of poorly understood requirements, all of which reduce uncertainty and improve estimates (where they are used).

Successful risk reduction may lead the business to commit more funds/resources to the project; conversely, an inability to reduce risk may be a signal that a different approach is required. This may even result in project cancellation or redirection; early cancellation of projects that aren't addressing their risks successfully is a positive indicator for the organization. HSD uses Continuous Investment Review to drive this analysis.

Programmes or Projects whose risk profile is static are also candidates for investigation, as there may be issues with their ways of working.

 

Risk Mitigation

Risks can be mitigated in a number of ways (see the sketch after this list):

  • Avoidance - the risk is eliminated by not doing activities that might cause it to occur, or it goes away by itself (this is often impractical, though most desirable, in a business context)
  • Reduction - the risk impact is reduced by resolving parts of the risk (e.g. by sharing knowledge, by finding expertise, by spiking a technical problem, by demonstrating progress etc.)
  • Sharing - sharing is often misrepresented as "transfer"; however, a business can never entirely transfer a risk (e.g. by outsourcing) as the impact of the risk occurring will still affect the business as a customer of the party the risk was transferred to. Some types of risk impact, such as financial and reputational impact, can be shared.
  • Retention - involves budgeting and planning for a risk to happen, to the extent of assuming it will happen.
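Continuing the illustrative sketches above, the chosen approach could simply be recorded alongside each risk's mitigation plan; the enum and names below are assumptions, not HSD terminology.

```python
from enum import Enum

class MitigationApproach(Enum):
    """The four approaches above; names and values are illustrative, not HSD terminology."""
    AVOIDANCE = "avoidance"   # eliminate the activity that could cause the risk
    REDUCTION = "reduction"   # resolve parts of the risk, e.g. by spiking or sharing knowledge
    SHARING = "sharing"       # share financial or reputational impact; it cannot be fully transferred
    RETENTION = "retention"   # budget and plan for the risk as if it will happen

# Record the chosen approach against each risk's mitigation plan.
mitigations = {"New framework may not scale": MitigationApproach.REDUCTION}
print(mitigations["New framework may not scale"].value)
```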
