"Experience, Empiricism, Excellence"
HSD is free, please donate to help support us
or

The Metrics and Reporting View looks at the various feedback cycles, metrics and reports in Holistic Software Development.

We often see organizations wanting to measure their processes or process adoption. To that end organizations engage in agile maturity models and assessments. In HSD, we consider these to be third order metrics.

The most important thing is Business Value.

  • First order: The most meaningful measure of progress is working software.
  • Second order: Measurement of intermediary artifacts such as plans and designs.
  • Third order: Measurement of the processes used to create the intermediary artifacts.

The motivation behind measuring process often comes from the desire to see a return on investment in process improvement. However, process rollouts that involve forcing everyone to standardize their working practices and behaviors are a bad idea. Knowing that people are following a process tells us nothing about whether their output and collaboration are any good.

Return on Investment for processes is tied to the value proposition of the process. For HSD, we said in our Introduction that HSD makes your organization healthier, faster, cheaper and happier. So for HSD adoption we measure those things – however, these are organizational metrics, not process metrics. Adopting HSD helps you get a handle on your organization and gives you the tools to improve it.

We strongly discourage measuring third order process issues such as the number of teams doing standups, the number of Product Forum meetings, the number of spikes done, and so on. Teams need the freedom to customize their process because one size does not fit all.

We’ve seen a number of generations of process maturity assessments and been able to compare them against our mined project data set. We’ll explore one of them here, from a large client with thousands of team members:

Agile Maturity Model #1

Teams were asked a set of questions aimed at comparing them against a number of “habits of successful teams”. The questions roughly amounted to asking people about their practices and rating their execution on a numerical scale from 0 (not doing this) to 4 (good enough). A number of variations of this assessment were run over several years, ranging from mentors making the assessment alone, to mentors and team leads co-assessing, to mentor-facilitated group assessments.

There were a number of problems with this model:

  • Iteration was assumed by the questions to the extent that continuous flow teams would score badly.
  • A score of 0 was ambiguous: was the practice not being done by choice (a mature behavior), or not being done due to a lack of knowledge or ability (a low-maturity behavior)?
  • Is “good enough” really the best a team could be? What about striving for perfection or continuous improvement?
  • A number of the questions looked for the existence of intermediary artifacts.

Analysis of the results of this assessment model, in all of its evolutions, against the project data showed that the worst performing teams (in terms of lead/cycle time, throughput and quality metrics) got the highest process scores. In contrast, teams with repeatedly low process scores were high performing in terms of workflow metrics.
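As a rough illustration of the kind of cross-check involved, the sketch below correlates per-team maturity scores against cycle times using a rank correlation. The team names, scores and cycle times are invented for illustration; the real analysis used the mined project data set rather than anything like this toy script.

# A minimal sketch (not HSD tooling): do process maturity scores track the
# workflow metrics, or run against them? All figures below are invented.
from statistics import mean

teams = {
    # team: (maturity_score, median_cycle_time_days)
    "alpha":   (3.8, 41),
    "bravo":   (3.5, 35),
    "charlie": (2.1, 12),
    "delta":   (1.4,  9),
    "echo":    (3.9, 38),
    "foxtrot": (1.8, 11),
}

def ranks(values):
    """Rank values from 1..n (ties broken by order; fine for a sketch)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

scores = [s for s, _ in teams.values()]
cycle_times = [c for _, c in teams.values()]

# Spearman rank correlation = Pearson correlation of the ranks.
rho = pearson(ranks(scores), ranks(cycle_times))
print(f"maturity score vs cycle time, Spearman rho = {rho:+.2f}")
# A strongly positive rho means higher "maturity" scores go with longer
# cycle times, i.e. the inverted relationship described in the text.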

We investigated further by taking a sample of the projects and examining them in detail. We found that the following held true:

  • High Agile Maturity score, poor workflow metrics: high maturity scores were caused by teams being unaware of where they could improve; they were unconsciously incompetent. These were the unsuccessful projects.
  • Low Agile Maturity score, good workflow metrics: low maturity scores were caused by teams being mature enough to recognize where they could improve. These were the successful projects.

So the Agile Maturity Model negatively correlated with real team maturity and project success! In every case we would have picked teams from the second category if we were building a new product. Needless to say, as a result of our investigation the measurement model was stopped.

Some elements of process can be usefully measured, but we recommend taking an extremely light touch with such things. Ideally, development communities should be encouraged to examine the must-haves themselves. Here’s our starting set:

  • Release Frequency – a frequency of over 3 months sets off alarm bells; continuous delivery or a monthly cadence is best (a small sketch of this check follows the list).
  • Build automation – don’t have an automated build process? Get one.
  • Version control – are your code assets stored and resilient? If not, why not?
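To show how lightly the release frequency check can sit, here is a minimal sketch of it. The release dates are invented, and the 90-day threshold is simply an approximation of the 3-month alarm level mentioned in the list.

# A minimal sketch (not part of HSD) of the release-frequency check:
# given a list of release dates, flag gaps longer than roughly 3 months.
from datetime import date

ALARM_GAP_DAYS = 90  # roughly 3 months, per the guidance above

releases = [
    date(2015, 1, 12),
    date(2015, 2, 9),
    date(2015, 6, 30),  # a long gap before this one
    date(2015, 7, 27),
]

releases.sort()
gaps = [(later - earlier).days for earlier, later in zip(releases, releases[1:])]

for gap in gaps:
    status = "ALARM" if gap > ALARM_GAP_DAYS else "ok"
    print(f"{gap:4d} days between releases: {status}")

# Average cadence gives a rough "release frequency" figure for the team.
print(f"average gap: {sum(gaps) / len(gaps):.0f} days")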
