7 Useful Metrics for Agile Teams
by Barry Smith

We often see debates about which metrics agile teams should measure. Most organizations aspire to be “data driven” and hope to find metrics that indicate the effectiveness of their development teams.
Unfortunately, some commonly cited metrics used by agile teams are based on questionable assumptions (or motivations). For example, using comparative measurements of teams to drive rewards is only effective at killing cross-team collaboration—especially if we try to compare individuals within a team context!
Asking, “What are we trying to accomplish?” is a great starting point when considering any metric. At Evergreen, we advocate for metrics that help the team understand its own processes and point the way to improvement. Some of these metrics may be provided by a workflow tool like Jira or Azure DevOps. Others are easily captured and tracked with a spreadsheet or even sticky notes on a team board.
Whatever the metrics, they’ll be most valuable if the teams see their value and take ownership of them. If metrics are forced on them from outside, the teams’ adoption and tracking will be half-hearted. Or worse, they’ll be motivated to game the system without gaining any benefit.
Here are examples of metrics we’ve found effective for agile teams.
To build long-term success: Team morale
An organization’s long-term health depends on maximizing employee retention and fostering stable teams. Nothing predicts workplace success better than a team’s collective culture and outlook. Are they working well together? Do they feel productive? Are they doing meaningful work? Teams can weather short-term ups and downs, but the answers to these questions will determine their long-term outcomes.
Teams can easily incorporate a “mood meter” into their regular work cycle, retrospectives, or even daily stand-ups. Whether the collective mood is rising or falling, it can prompt valuable discussions—either “What’s going well that we should enhance?” or “What’s dragging us down, and how should we address it?”
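Tracking a check-in like this needs almost no tooling. As a minimal sketch (the `mood_trend` helper and the 1–5 scoring scale are illustrative assumptions, not a standard practice), a team could compare the rolling average of its most recent scores against the preceding window to see whether mood is rising or falling:

```python
from statistics import mean

def mood_trend(scores, window=5):
    """Compare the average of the most recent `window` mood scores
    (e.g. 1-5 collected at a daily stand-up) against the preceding
    window. Negative result: team mood is declining."""
    if len(scores) < 2 * window:
        raise ValueError("need at least two full windows of scores")
    recent = mean(scores[-window:])
    previous = mean(scores[-2 * window:-window])
    return round(recent - previous, 2)

# A falling trend is a prompt for discussion, not a verdict.
history = [4, 4, 5, 4, 4, 3, 3, 4, 3, 2]
print(mood_trend(history))  # -1.2 (mood is declining)
```

The output is deliberately coarse; the point is to trigger the “what’s dragging us down?” conversation, not to measure morale precisely.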
(To review some of the most compelling research about successful teams, see Nine Lies About Work by Buckingham & Goodall and Google’s publications about effective teams, based on their Project Aristotle.)
For efficient delivery: Escaped defects
If a bug makes it into production, it’s harmful in two ways: it reduces customer satisfaction and is costly for the team. Bugs aren’t benign; they’re one of the greatest sources of waste in a development cycle.
- First, time is spent creating the defect. (It takes just as long to create problematic code as it does quality code.)
- This is followed by downstream impact, which can range from minor user annoyance to hugely expensive downtime for a major system.
- Then, the team must invest more time to identify and fix the defect, which comes at the expense of other valuable work in the backlog that gets delayed. (This opportunity cost is often overlooked.)
For many teams using Agile, the most immediate way to improve their throughput is to focus on process improvements that minimize defects. This has the added benefit of reducing frustration for both the developers and their customers!
To control defect rates: Code test coverage
“Test coverage” is a broad term covering several kinds of measurement: unit tests, functional tests, static code analysis, and other tools applied at the level of individual lines, complete statements, or entire branches. Often, teams begin at the unit test level and expand as their collective experience and level of automation grow.
Wherever teams start, they’ll find that test coverage is usually a leading indicator of release quality. (This assumes the team is sincere about what they’re measuring, of course. It’s easy to inflate unit test numbers, but that typically results from an external mandate to reach a specific threshold.) Steady, incremental improvements in coverage will pay big dividends.
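One low-overhead way to encourage steady improvement without mandating a magic threshold is a coverage “ratchet” in CI: each release must stay at or near the best coverage seen so far. This is only a sketch under assumed conventions (the `coverage_ratchet` name and the half-point tolerance are illustrative choices):

```python
def coverage_ratchet(history, new_coverage, tolerance=0.5):
    """Enforce steady, incremental coverage improvement: the new
    release's coverage (in percent) may not fall more than
    `tolerance` points below the best coverage seen so far."""
    best = max(history) if history else 0.0
    return new_coverage >= best - tolerance

releases = [62.0, 64.5, 66.1]
print(coverage_ratchet(releases, 66.4))  # True: still improving
print(coverage_ratchet(releases, 60.0))  # False: notable regression
```

Because the ratchet only compares the team against itself, it avoids the gaming that an externally imposed “reach 80%” mandate tends to invite.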
To guide process improvement: Work item cycle time
Usually measured with a control chart, cycle time offers two valuable insights for a team:
- Understanding the mean cycle time helps a team realistically forecast their rate of delivering value.
- Analyzing outliers from the average can pinpoint sources of delay, such as wait times for other teams or impediments to deploying on demand.
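Both insights above fit in a few lines of analysis. Assuming cycle times are recorded in days (the `cycle_time_summary` helper and the two-standard-deviation cutoff are illustrative choices, not a standard):

```python
from statistics import mean, stdev

def cycle_time_summary(days, z=2.0):
    """Summarize work-item cycle times (in days): the mean supports
    forecasting, and items more than `z` standard deviations above
    the mean are flagged as outliers worth investigating."""
    avg = mean(days)
    threshold = avg + z * stdev(days)
    outliers = [d for d in days if d > threshold]
    return round(avg, 1), outliers

times = [3, 4, 2, 5, 3, 4, 21, 3, 4, 2]
avg, outliers = cycle_time_summary(times)
print(avg, outliers)  # 5.1 [21]
```

Here the 21-day item is the one to dig into: it usually turns out to be wait time on another team or a blocked deployment rather than 21 days of actual work.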
By analyzing the factors that shape their cycle time, teams can quickly home in on the improvements that will make the most difference.
To assess overall efficiency: Activity ratio
Activity ratio is derived from two values:
- Processing time: The amount of time spent actively working on an item. Usually this isn’t simply the time a work item is “In Progress.” Meetings, interruptions, and waiting all take away from actual processing time.
- Lead time: The elapsed time from when a work item is added to the queue until it is deployed.
Most teams are shocked to learn that of the total time to deliver an item, often 5% or less is actually spent working on it. The bulk of the lead time is consumed by handoffs, dependencies on other groups, decision latency, interruptions, and other delays. Analyzing the root causes of these delays is a powerful tool for improving efficiency and reducing waste.
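The ratio itself is simple arithmetic; as noted, the hard part is capturing processing time at all. A minimal sketch (the function name and sample numbers are illustrative):

```python
def activity_ratio(processing_days, lead_days):
    """Activity ratio: the fraction of total lead time actually
    spent working on the item. Low values point at handoffs,
    dependencies, and decision latency rather than slow work."""
    return processing_days / lead_days

# e.g., 1.5 days of hands-on work inside a 30-day lead time
print(f"{activity_ratio(1.5, 30):.0%}")  # 5%
```

A number like 5% reframes the improvement conversation: speeding up the hands-on work barely moves lead time, while removing one handoff or wait state can.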
Caveat: A potential impediment to observing activity ratio can be the overhead in measuring the true processing time. Lead time is usually easy to report from the team’s workflow tool, but measuring active work time for each item may be tricky. Some teams are willing to use timers to track actual work time while others may rely on a daily post-hoc estimate. This is a case where consistency is much more valuable than precision. The team is better off finding a way to capture rough estimates cheaply and often, rather than adding a lot of overhead for detailed tracking.
To reduce maintenance costs: Technical debt
Development teams must constantly balance the effort they invest in meeting immediate business demands while maintaining a stable and resilient codebase. Stakeholders often just want to see new functionality. Understandably, end users don’t care much about “under the covers” work that doesn’t provide obvious benefits. However, it doesn’t take many “quick and dirty” short-term solutions to undermine a system’s architecture, increase the likelihood of defects, and drive up support costs.
Various tools are available to help assess the health of a codebase, such as by measuring cyclomatic complexity. But most developers are familiar enough with their code to provide a valid assessment, ranging from “pure as snow” to “rewrite the whole thing!” A quick check at the end of every iteration helps keep the team alert to declining quality.
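For teams without a dedicated tool, even a crude decision-point count can flag files that deserve a closer look. This sketch approximates McCabe cyclomatic complexity (decision points + 1) with a keyword count; real analyzers such as radon parse the syntax tree, so treat this as a rough illustration only:

```python
import re

# Naive approximation for Python source: each branching keyword
# adds one path through the code. Keyword counting over-simplifies
# (it ignores comments and strings), but trends are still useful.
DECISIONS = r"\b(if|elif|for|while|and|or|except|case)\b"

def rough_complexity(source: str) -> int:
    return len(re.findall(DECISIONS, source)) + 1

snippet = """
def classify(n):
    if n < 0 and n % 2:
        return "odd negative"
    elif n == 0:
        return "zero"
    return "other"
"""
print(rough_complexity(snippet))  # 4
```

As with the mood meter, the absolute number matters less than the trend: a module whose score climbs iteration after iteration is accumulating debt.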
We hope this overview helps you gain value from whatever metrics you choose when using Agile. In a later post, we’ll talk about some metrics that are better to avoid!