In this post, I’ll focus on misuses of productivity metrics that cause more harm than good. If you are looking for examples of legitimate use and practical applications of productivity metrics, check my previous post “A tale of four metrics”.
Before we jump into examples, let’s try to understand why it’s even possible to get productivity wrong.
Like many aspects of human activity, productivity is intangible. There isn’t one way to quantify it, or even one perfect set of metrics that can definitively evaluate it. Instead, to measure it, you can use proxy metrics, testing for activities and capabilities that you consider correlated with it. The problem lies in the assumptions you make when choosing those proxy measures.
For example, creativity is another intangible characteristic. The author of the Mistborn series, Brandon Sanderson, recently wrote about his process:
I keep a handy spreadsheet to track my work throughout the year - it has a list of dates on the left, and then a column for words written each day (…).
Making progress really helps me feel the book is coming together, and it keeps me motivated. (…)
While it could seem that Brandon optimizes his day to write as many words as possible, that is not the case: he also shared that he makes sure to block time for being with his family, besides other activities like reviewing and editing his drafts. The word count will hardly tell how good the story is or how well readers will receive it. Still, for him, words written are an easy way to track progress and stay motivated, not a goal in itself.
Applying the previous example to developers, what would measuring their output - like lines of code written, pull requests merged, or stories completed - tell us? Not much. Imagine a team with a high number of pull requests merged. It could be that the team is high-performing, thanks to investment in best practices and tools. Or the high number of merged pull requests could indicate that the team is struggling with its tooling or planning, and has to do extra work to compensate.
This is the first factor you’ll want to keep in mind when choosing productivity metrics: the further a proxy metric sits from what you actually care about, the less accurate a signal it provides.
Besides potentially giving a false sense of understanding of what’s going on, bad proxy metrics can also steer teams down dangerous paths. Metrics can be a powerful communication and alignment tool, but picking the wrong metric will guide a team toward unintended behavior - tackling the symptoms but not the cause. For example, the outcomes of a team evaluated by the number of pull requests merged will be very different from those of a team evaluated by both the number of pull requests merged and the rate of pull requests reverted.
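To make the contrast concrete, here is a minimal sketch of what that second, paired measurement could look like. The PR log and its fields are invented for illustration; any real implementation would pull this data from your version control or code review system:

```python
# Hypothetical log of merged pull requests; "reverted" marks PRs that
# were later undone. In practice this would come from your VCS history.
merged_prs = [
    {"id": 101, "reverted": False},
    {"id": 102, "reverted": False},
    {"id": 103, "reverted": True},
    {"id": 104, "reverted": False},
]

# Volume alone rewards merging more, regardless of quality.
merged_count = len(merged_prs)

# Pairing it with the revert rate penalizes churn that had to be undone.
revert_rate = sum(pr["reverted"] for pr in merged_prs) / merged_count

print(f"merged: {merged_count}, revert rate: {revert_rate:.0%}")
```

A team chasing only `merged_count` can inflate it with low-quality changes; adding `revert_rate` makes that strategy visibly backfire.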
That balance of speed and stability is the same principle the DORA framework is built on, grounded in lean theory and experimental data. Its authors demonstrated that four key metrics measuring the performance of engineering systems correlate with high organizational performance.
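Two of the four DORA metrics can be sketched from nothing more than a deployment log. The records below are made up, and this is a simplified illustration of the definitions, not a full implementation:

```python
from datetime import date

# Hypothetical deployment records: (date deployed, caused a production failure?)
deployments = [
    (date(2023, 5, 1), False),
    (date(2023, 5, 3), True),
    (date(2023, 5, 8), False),
    (date(2023, 5, 9), False),
]

# Deployment frequency (a speed metric): deployments per week
# over the observed window.
days = (deployments[-1][0] - deployments[0][0]).days or 1
deploy_frequency = len(deployments) / (days / 7)

# Change failure rate (a stability metric): share of deployments
# that caused a failure in production.
change_failure_rate = sum(failed for _, failed in deployments) / len(deployments)

print(f"deploys/week: {deploy_frequency:.1f}")
print(f"change failure rate: {change_failure_rate:.0%}")
```

The remaining two metrics, lead time for changes and time to restore service, follow the same pattern: durations aggregated from commit and incident timestamps.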
However, the DORA framework uses lagging metrics - they only signal problems long after they have happened, often too late to act on them. That doesn’t mean they aren’t useful. For concrete examples of how the DORA framework can be used to improve team productivity, read the previous post “A tale of four metrics”.
Making a difference is easy with the right data. If you are interested in measuring these metrics for your team, check our new product, pulse.