Measuring Metrics part 2
Dr Mary Wyatt | Published: March 18, 2016
Last time we discussed why metrics are often considered problematic by the people doing the work that the metrics are intended to measure.
So where does this leave us? There are three kinds of metrics that make sense: measures of purpose-driven activity, measures of effectiveness and measures of unintended consequences.
Measures of purpose-driven activity look first at what we want to achieve. Suppose we decide we want to focus on client satisfaction, and we believe that responsiveness to enquiries is a key factor.
That sort of goal requires us to measure more than just "activity" (what is sometimes called "throughput") – in this case, the number of case actions processed. Purpose-driven activity requires us to focus on what is desired and measure it directly. In this case we might measure the average time to give a response that resolves the concern that prompted the contact in the first place.
The trick here is to know exactly what you want to measure. If you only measure average time to respond, you may do nothing to improve the quality of the response. This kind of metric requires more information to be collected: not just how long the response took, but whether the concern was resolved.
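To make the distinction concrete, here is a minimal sketch of the two calculations side by side. The record structure and field names (`hours_to_respond`, `resolved`) are hypothetical, assumed purely for illustration: an activity-only metric averages response time over every enquiry, while the purpose-driven metric counts only responses that actually resolved the concern.

```python
# Hypothetical enquiry records: each notes how long the response took
# and whether the client's concern was actually resolved.
enquiries = [
    {"hours_to_respond": 4, "resolved": True},
    {"hours_to_respond": 2, "resolved": False},
    {"hours_to_respond": 6, "resolved": True},
    {"hours_to_respond": 3, "resolved": True},
]

# Activity-only metric: average response time, regardless of outcome.
avg_response = sum(e["hours_to_respond"] for e in enquiries) / len(enquiries)

# Purpose-driven metric: average time to a response that resolved the
# concern, computed over resolved enquiries only, plus the share resolved.
resolved = [e for e in enquiries if e["resolved"]]
avg_resolution = sum(e["hours_to_respond"] for e in resolved) / len(resolved)
resolution_rate = len(resolved) / len(enquiries)

print(f"Average response time: {avg_response:.2f} h")
print(f"Average time to resolution: {avg_resolution:.2f} h")
print(f"Resolution rate: {resolution_rate:.0%}")
```

Note how the fast-but-unresolved enquiry flatters the activity metric while the purpose-driven pair (resolution time plus resolution rate) exposes it.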
The second sort of measure is effectiveness. It is often mistaken for outcome measurement because the two look similar. Once again, the selection of the thing measured is the key.
When we measure effectiveness, we want to look only at things that are not subject to influences we can't control. When others affect the outcome, we can never know how much is due to our work and how much is due to theirs.
Instead of measuring return to work, for example, we might measure work readiness. That leaves out the influence of the job market or the attitude of a particular employer. To be sure, we can never be entirely free of the influence of others, but we can do better at selecting metrics that reflect the work for which we are responsible.
The third sort of measure looks at unintended consequences. Sometimes our efforts backfire: we work hard but achieve exactly the wrong outcome. For example, adoption of a particular system for measuring physical impairment may increase demand for treatments associated with higher impairment scores. Or such a plan may increase the amount of litigation, as parties fight to establish impairment levels that allow them access to increased benefits.
Measuring unintended consequences is difficult because it requires us to think about what people might do in response to our efforts. Naturally, we'd all like to believe that our efforts will have the intended results, and that makes anticipating where things may go wrong difficult. But if we ignore this kind of result, we are likely to be rudely surprised.
Good metrics can give us a clear view of our progress. Most regulators and nominal insurers spend far too little time considering what should be measured and why. Thinking about the metrics that would really measure the work you do is a good way of focusing on what's important. And your thoughts might even help improve the system in which you work.