Behavior Driven Measurement

Behavior Driven Measurement involves defining target behaviors and then deriving measures that indicate those behaviors. “Gaming” such measures should only produce positive effects in the organization.

Poorly designed or implemented measures can have a detrimental effect on the health of an organization. This is Measurement Driven Behavior: perverse behaviors emerge as individuals and teams attempt to make the measures look good. In HSD, we reverse this idea to focus on Behavior Driven Measurement.

A Dysfunctional Example of Measurement Driven Behavior

Measurement driven behavior is hard to avoid because, simply put, what you measure is often what people will do. An example of this causing problems can be found in the use of Service Level Agreements (SLAs) with support teams.

A support team was given an SLA target that all calls must be resolved within 5 minutes. The intention was to drive quick problem resolution and to identify when the support engineers needed more training (indicated by missed SLAs).

Support engineers felt such pressure to meet this SLA that when a support call came in and the customer had more than one question, the customer was asked to call back (and queue again) rather than put a second question to the same support engineer – so that the clock was reset and the SLA wouldn’t be broken.

Clearly this is dysfunctional in terms of providing rapid, effective support to customers, and indeed it caused a certain amount of reputational damage.

Turning it around: Behavior Driven Measurement

In HSD we look at the desired behaviors first and think about measures that will indicate the desired behavior, be useful, and that when “gamed” will still result in good behaviors. “Gaming” a measure means behaving specifically to change the measurement, regardless of the true aims of the business function. Unfortunately, humans are prone to gaming, so instead of pretending it won’t happen we embrace it and use it to everyone’s advantage.

In the case of support SLAs we want to know whether the customer receives good support and whether the support engineers have enough training and knowledge to provide it effectively. Neither is well measured by setting a time limit. Instead we can focus on value and knowledge. The best source of customer satisfaction data is the customers themselves: we can either ask them directly for feedback as part of their support engagement, or randomly sample the customer base to get an understanding of their satisfaction.
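Random sampling of the customer base can be sketched in a few lines. This is a minimal illustration, not a prescribed implementation; the customer list, sample size, and rating scale are all hypothetical, and in practice the customer IDs would come from a CRM or support system.

```python
import random

# Hypothetical customer IDs; in practice these would come from a CRM system.
customers = [f"customer-{i}" for i in range(1000)]

# Survey a random sample rather than the whole base.
# The sample size of 50 is an assumption for illustration only.
random.seed(42)  # fixed seed so the example is reproducible
sampled = random.sample(customers, k=50)

# Each sampled customer would then be asked to rate their
# support experience (e.g. on a 1-5 scale).
print(f"Surveying {len(sampled)} of {len(customers)} customers")
```

Sampling avoids survey fatigue across the whole customer base while still giving a representative picture of satisfaction over time.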

A time limit is also a very poor measure of whether our support engineers have enough training and knowledge, since support calls are not likely to be about the same subject; typically they cover a diverse range of issues. The best source for this information is in fact the support engineers themselves: simply asking them whether they have enough information and training is a good way of understanding their needs. If all of our support engineers say they are under-skilled on a particular system, that is good evidence that training may be necessary.

If talking to people isn’t enough and we actually want to record some measures, then we can use Lead and Cycle Time or other Workflow Metrics to track the lifetime of a problem from being raised, through the work to resolve it, to the point of its resolution. These measures are useful for establishing the average amount of time that requests spend in a queue vs. being worked on. We often find that the best way to resolve problems more quickly is to reduce the queue times (which tends to raise customer satisfaction), which can be done in many ways, such as providing automated solutions for common issues or adding more support engineers.

Lead Time and Cycle Time are measures common in Lean implementations. Lead Time is the time from a request being raised to the work being completed. Cycle Time is the time it takes to actually do the piece of work. Lead Time = Cycle Time + Queue Time.
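Given three timestamps per support ticket – when it was raised, when work started, and when it was resolved – the relationship above falls out directly. A minimal sketch, assuming hypothetical ticket data:

```python
from datetime import datetime

# Hypothetical tickets: (raised, work_started, resolved) timestamps.
tickets = [
    (datetime(2024, 1, 1, 9, 0), datetime(2024, 1, 1, 11, 0), datetime(2024, 1, 1, 11, 30)),
    (datetime(2024, 1, 2, 10, 0), datetime(2024, 1, 3, 9, 0), datetime(2024, 1, 3, 10, 0)),
]

for raised, started, resolved in tickets:
    lead = resolved - raised    # request raised -> work completed
    cycle = resolved - started  # work started -> work completed
    queue = started - raised    # waiting time before work began
    # Lead Time = Cycle Time + Queue Time
    assert lead == cycle + queue
    print(f"lead={lead}, cycle={cycle}, queue={queue}")
```

Comparing average queue time to average cycle time across tickets shows where the lead time is actually going – often most of it is waiting, not work.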

Workflow Metrics are indicators of flow and health in an organization. Used to complement direct engagement through the Go See practice, they can help provide evidence for decisions and indicate areas to investigate.