Engineering teams measure many things: deployment frequency, mean time to recovery, code coverage, sprint velocity, open pull request age. These are all useful. None of them tells you whether engineering is actually moving the business. That requires a different kind of metric.
The Measurement Gap
The gap between engineering metrics and business metrics is one of the most persistent sources of misalignment in technology organizations. Engineering leadership can present dashboards full of green indicators — high velocity, low incident rate, excellent code quality scores — while the business is experiencing slow time-to-market, declining customer satisfaction, or missed revenue targets. The metrics were accurate. They were measuring the wrong things.
The DORA Framework as a Starting Point
The DORA (DevOps Research and Assessment) metrics — deployment frequency, lead time for changes, change failure rate, and mean time to restore — are the most evidence-backed engineering performance indicators available. They predict business outcomes because they measure the properties of a software delivery system that create competitive advantage: speed, reliability, and the ability to recover quickly from failures.
“DORA metrics are not a destination. They are a diagnostic. Elite performers on DORA are not elite because they measured well — they are elite because the practices that drive those metrics also drive business outcomes.”
Business-Linked KPIs
Beyond DORA, every engineering team should have at least two KPIs that directly measure business outcomes. The specific metrics depend on the business, but the principle is constant: connect engineering output to the outcomes the business cares about. For a consumer product, this might mean time-to-feature (how long from decision to customer) and feature adoption rate (what percentage of users engage with new capabilities). For an infrastructure platform, it might mean cost-per-transaction at scale and system reliability measured as the revenue impact of downtime.
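The two consumer-product KPIs mentioned above reduce to simple calculations once the underlying events are captured. A minimal sketch, assuming hypothetical timestamps and counts (the field names are illustrative, not a standard schema):

```python
from datetime import datetime

def time_to_feature_days(decided_at: datetime, shipped_at: datetime) -> int:
    """Days from the decision to build a feature until customers can use it."""
    return (shipped_at - decided_at).days

def adoption_rate(users_engaged: int, total_active_users: int) -> float:
    """Share of active users who engaged with the new capability."""
    return users_engaged / total_active_users

# Illustrative feature records: (decided_at, shipped_at, users_engaged, active_users)
features = [
    (datetime(2024, 3, 1), datetime(2024, 4, 15), 1200, 10000),
    (datetime(2024, 3, 10), datetime(2024, 5, 1), 450, 10000),
]

for decided, shipped, engaged, active in features:
    print(time_to_feature_days(decided, shipped), adoption_rate(engaged, active))
```

The hard part is not the arithmetic but the instrumentation: recording when a decision was made, and defining what "engaged with the feature" means, are organizational choices that have to be settled before the numbers mean anything.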
The Anti-Metrics
- Lines of code: measures output, not outcomes. Can be increased by writing worse code.
- Story points completed: a measure of throughput, not direction. A team can complete many story points while building things nobody uses.
- Bug count: a lagging indicator that measures what went wrong, not whether the team is improving its quality practices.
- Ticket closure rate: measures activity, not impact. Closing tickets faster has no inherent value if the tickets closed are not the ones that matter most.
Building the Measurement System
The right measurement system for an engineering team is one that leadership can use to answer two questions: is engineering moving in the right direction, and is it moving at the right speed? Direction requires business-linked outcome metrics. Speed requires delivery metrics like DORA. Both together provide the view that connects engineering investment to competitive position — which is ultimately what all measurement should be in service of.