Cycle Time

Cycle time is a core engineering execution metric. It shows the average time a single engineering task (usually mapped to a branch) takes to move through the phases of the delivery process, from code to production. Cycle time is broken down into the following phases:

Coding Time:

The time from the first commit on a branch until a pull request is issued. In this widget, it's represented as the time frame between when the first commit is made to a branch and when a pull request is issued.

What do I do if my coding time is high?

Pickup Time:

The time from when a pull request is issued until a review has started. This phase indicates how strong teamwork is, and it usually has a big impact on cycle time, since it is very often where pull requests get stuck. In this widget, it's represented as the time frame between when a pull request is created and when the first comment occurs on that PR.

What do I do if my pickup time is high?

Review Time:

The time it takes to complete a code review and get a pull request merged. In this widget, it's represented as the time frame between when the first comment occurs on a pull request and when the PR is merged.

What do I do if my review time is high?

Deploy Time:

The time it takes to release code to production. LinearB tracks each release tag and maps it back to branches, showing how long completed tasks waited to be deployed. In this widget, it's represented as the time frame between when the branch is merged and when the code is released. You can customize how LinearB detects your code releases: read "Set up your Git release detection method" for more information.
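The four phases above can be read as simple differences between branch and PR event timestamps. The sketch below is illustrative only; the function and event names are our own and do not reflect LinearB's internal implementation or API.

```python
from datetime import datetime, timedelta

def cycle_time_phases(first_commit, pr_created, first_review_comment,
                      pr_merged, released):
    """Compute the four cycle time phases as timedeltas.

    Each argument is a datetime for the corresponding branch/PR event.
    """
    return {
        "coding_time": pr_created - first_commit,          # first commit -> PR issued
        "pickup_time": first_review_comment - pr_created,  # PR issued -> review starts
        "review_time": pr_merged - first_review_comment,   # review starts -> merge
        "deploy_time": released - pr_merged,               # merge -> release
    }

# Example: a branch that took four days from first commit to release.
t0 = datetime(2024, 1, 1, 9, 0)
phases = cycle_time_phases(
    first_commit=t0,
    pr_created=t0 + timedelta(hours=6),
    first_review_comment=t0 + timedelta(days=1),
    pr_merged=t0 + timedelta(days=2),
    released=t0 + timedelta(days=4),
)
total = sum(phases.values(), timedelta())
print(total)  # full cycle time: released - first_commit -> 4 days, 0:00:00
```

The phases are contiguous by construction, so summing them always recovers the full cycle time from first commit to release.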

Engineering Metrics Benchmarks:

When a complete cycle time measurement is available (including coding time through deploy time), a community benchmark badge will be visible next to your cycle time metric. Click the badge to see how your team is performing against the LinearB community. Learn more about how LinearB calculates benchmarks, and how to enable or disable them, here: Engineering Metrics Benchmarks

Green - Elite: Top 10% of the LinearB community

Blue - Strong: 11-30% of the LinearB community

Orange - Fair: 31-60% of the LinearB community

Red - Needs Focus: Bottom 40% of the LinearB community
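The tiers above amount to percentile bands on your cycle time relative to the community. A minimal sketch of the mapping, assuming a percentile where 0 is best; the function name is ours, not part of LinearB:

```python
def benchmark_badge(percentile: float) -> str:
    """Map a team's community percentile (0 = best) to a badge tier,
    using the bands listed above: top 10% Elite, 11-30% Strong,
    31-60% Fair, bottom 40% Needs Focus."""
    if percentile <= 10:
        return "Elite"       # Green
    if percentile <= 30:
        return "Strong"      # Blue
    if percentile <= 60:
        return "Fair"        # Orange
    return "Needs Focus"     # Red

print(benchmark_badge(25))  # Strong
```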

