💡Invest in infrastructure to facilitate faster, more stable builds. Modern testing tools such as the Test at Scale product by LambdaTest are intelligent enough to run only the impacted tests rather than the whole test suite, greatly reducing a developer’s waiting time.
High Rework Time
Rework time is the time a developer spends deleting or rewriting code shortly after it was written, typically following a PR review. A high rework time is not necessarily bad: high-quality code reviews inevitably lead to more rework. However, when rework time sits well above your team’s benchmarks, it can be an indicator of deeper issues. So what should managers do?
- Look at review response time, which measures the average time it takes your reviewers and authors to respond to comments (see the sketch after this list). Your pickup time (time to first review) might be quick, but authors and reviewers often take long to respond to review comments, increasing the time it takes to get the code approved.
- Look at the rejection rate of your PRs: how often are PRs not approved in the first review? On the flip side, a 100% approval rate can also be an issue, especially in large teams, since it can be an indicator of low-quality reviews.
- Check whether your team’s tasks are well-specced, and monitor whether the rejection rate is high. Rejections often happen because tasks were poorly defined without adequate input from the project teams, leaving developers to make changes to accommodate a different requirement.
- Integrate automatic checks such as static code analysis and linting so that reviewers don’t need to spend time on nitpicks that machines can easily catch.
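For illustration, here is a minimal sketch of how review response time and first-review rejection rate could be computed. It assumes you can export PR review events (comment and reply timestamps, first-review outcomes) from your Git hosting provider; the data shapes and field names are hypothetical.

```python
from datetime import datetime, timedelta

# Hypothetical review-comment events exported from a Git host: when a
# comment was posted and when the other party (author or reviewer) replied.
comments = [
    {"posted": datetime(2024, 5, 1, 10, 0), "replied": datetime(2024, 5, 1, 15, 30)},
    {"posted": datetime(2024, 5, 2, 9, 0), "replied": datetime(2024, 5, 3, 11, 0)},
]

# Hypothetical first-review outcomes for a batch of PRs: True = approved.
first_review_approved = [True, False, True, True, False]

# Review response time: average gap between a comment and its reply.
gaps = [c["replied"] - c["posted"] for c in comments]
avg_response = sum(gaps, timedelta()) / len(gaps)
print(f"Average review response time: {avg_response}")

# Rejection rate: share of PRs not approved in the first review.
rejection_rate = first_review_approved.count(False) / len(first_review_approved)
print(f"First-review rejection rate: {rejection_rate:.0%}")
```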
Time to Deployment
Deploy time is the time between when a PR is approved and when it is deployed. Teams generally want this number to be low. The metric maps onto the DORA metrics and can be used to identify opportunities for process improvement. It is critical because productivity is ultimately about deployed code: if code is not deployed, the effective output of that effort is nil.
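A minimal sketch of the computation, assuming you have approval and deployment timestamps per PR (the record shape and field names are hypothetical):

```python
from datetime import datetime, timedelta

# Hypothetical PR records with approval and deployment timestamps.
prs = [
    {"approved_at": datetime(2024, 5, 1, 14, 0), "deployed_at": datetime(2024, 5, 2, 10, 0)},
    {"approved_at": datetime(2024, 5, 3, 9, 0), "deployed_at": datetime(2024, 5, 3, 17, 30)},
]

# Deploy time per PR = deployment timestamp minus approval timestamp.
deploy_times = [pr["deployed_at"] - pr["approved_at"] for pr in prs]
avg_deploy_time = sum(deploy_times, timedelta()) / len(deploy_times)
print(f"Average time to deploy: {avg_deploy_time}")
```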
It’s important to note that time to deploy cannot be measured against fixed thresholds, since the right value differs based on the kind of service or project in question. For instance, mobile apps will have releases spaced further apart than a microservice, which might release far more frequently. Hence it’s important to identify the right benchmark for each deployment pipeline.
Generally, time to deploy exceeds the benchmark when long wait times or flaky builds in CI/CD delay merges. It can also increase when a PR needs multiple or slow test runs before deployment.
Deployment frequency (a DORA metric) is how often a team pushes changes to production. Managers can improve productivity by structuring team workflows to allow them to ship smaller, more frequent deployments, thereby optimizing time to deployment, lowering risk, and delivering customer value sooner.
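Deployment frequency can be derived directly from deploy timestamps. A minimal sketch, assuming a list of production deployment dates for one service (the dates are hypothetical):

```python
from collections import Counter
from datetime import date

# Hypothetical production deployment dates for one service.
deploys = [date(2024, 5, 1), date(2024, 5, 2), date(2024, 5, 2), date(2024, 5, 8)]

# Group deployments by ISO (year, week) to get a per-week frequency.
per_week = Counter(d.isocalendar()[:2] for d in deploys)
for (year, week), count in sorted(per_week.items()):
    print(f"{year}-W{week:02d}: {count} deployment(s)")
```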
Healthy Thresholds
Although thresholds depend on workflows, the nature of the project (mobile app versus microservice), and the repo itself, the overall cycle time targets, from coding through merge, are generally similar across project types.
For most high-performing teams, cycle time is less than 4 days, with the average ranges breaking down as follows:
- Coding time: <2 days
- Pickup time: <0.5 days
- Rework time: <1.5 days
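For illustration, a short sketch that checks a team’s stage averages against these benchmarks (the team numbers and dictionary keys are hypothetical, and the thresholds should be tuned to your own workflow):

```python
# Benchmark breakdown from above, in days, alongside hypothetical team averages.
THRESHOLDS = {"coding time": 2.0, "pickup time": 0.5, "rework time": 1.5}
team_averages = {"coding time": 1.4, "pickup time": 0.9, "rework time": 1.1}

for stage, limit in THRESHOLDS.items():
    status = "OK" if team_averages[stage] <= limit else "over benchmark"
    print(f"{stage}: {team_averages[stage]} days ({status})")
```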
Time to deploy, by contrast, remains highly dependent on the kind of project or repository. Mobile releases are less frequent (from 1 to 4 weeks apart for consumer apps, on average), while active microservices tend to deploy daily.
A trusted source for benchmarking time to deploy is the set of deployment frequency thresholds in the DORA metrics, as suggested by the authors of Accelerate themselves. More on this here.