LinearB Metrics Glossary


A complete guide to how LinearB defines and calculates core engineering metrics. Use this glossary to align your teams on what each metric means, how it is calculated, and why it matters.


Delivery Metrics

Cycle Time

Definition

Time from when coding begins on a branch to when the work is released.
Cycle Time is composed of four measurable sub-phases: Coding Time, Pickup Time, Review Time, and Deploy Time.

Calculation

For each completed branch, Cycle Time is the sum of the four sub-phases; the reported metric is the average of that total across all completed branches within the selected time range or iteration.
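
For illustration, a minimal sketch of this aggregation in Python (the field names are assumptions, not LinearB's internal schema):

from statistics import mean

# Each completed branch carries its four sub-phase durations (in hours).
branches = [
    {"coding_time": 10, "pickup_time": 2, "review_time": 6, "deploy_time": 4},
    {"coding_time": 20, "pickup_time": 5, "review_time": 12, "deploy_time": 8},
]

# Cycle Time per branch is the sum of the four sub-phases;
# the reported metric averages that total across all branches.
cycle_times = [
    b["coding_time"] + b["pickup_time"] + b["review_time"] + b["deploy_time"]
    for b in branches
]
average_cycle_time = mean(cycle_times)  # 33.5 hours for this sample
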

Why it matters

Cycle Time is a core measure of delivery speed. It reflects your team’s ability to deliver software efficiently end-to-end. Understanding where time accumulates helps you identify bottlenecks, drive process improvements, and improve overall flow efficiency.

2026 Benchmarks

Based on the 2026 LinearB Software Engineering Benchmarks Report (8.1M+ PRs across 4,800 organizations, calculated at the 75th percentile/P75):

2026 Benchmarks (hours, P75)

  • Elite: < 25 hours
  • Good: 25 – 72 hours
  • Fair: 73 – 161 hours
  • Needs focus: > 161 hours

See LinearB's full 2026 Benchmarks Report.

Use these ranges as a reference point when comparing your team’s Cycle Time. If your workspace uses a different aggregation method (Average, Median, P75, P90), focus on direction of change over time as well as how you compare to these bands.


Coding Time

Definition

Time developers spend actively coding before a pull request is opened.

Calculation

From the first commit on a branch until the PR is created:

Coding Time = pr_created_at − first_commit_timestamp

Draft / WIP PR behavior

Time a pull request spends in draft or WIP status is counted as Coding Time; the later Cycle Time phases (Pickup, Review, Deploy) do not begin until the PR becomes active.

For configuration details, see Draft Pull Requests.
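
A sketch of this calculation, assuming datetime fields named first_commit_at, pr_created_at, and an optional marked_ready_at for draft PRs (illustrative names only):

from datetime import datetime

def coding_time_hours(first_commit_at: datetime,
                      pr_created_at: datetime,
                      marked_ready_at: datetime | None = None) -> float:
    """Hours from the first commit to the PR becoming active for review.

    For draft/WIP PRs, time up to the ready-for-review transition is
    counted as Coding Time, per the behavior described above.
    """
    end = marked_ready_at or pr_created_at
    return (end - first_commit_at).total_seconds() / 3600
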

Why it matters

Coding Time indicates how long work stays in progress before review. Prolonged Coding Time can hide risk (large, unreviewed changes) and make it harder to spot misaligned or blocked work early.

Configuration

An Admin user can choose how Coding Time begins:

  • By first commit (default)
  • By issue state moving to In Progress

Configuration path: Settings → Advanced (Coding Time configuration near the bottom).


Pickup Time

Definition

Time between when a PR is opened (or moved from draft to active) and the first non-author review activity.

Calculation

From PR creation to the first reviewer comment or review:

Pickup Time = first_comment_at − pr_created_at
  • If a PR is approved or merged without comments, Pickup Time is measured from PR open time to the first approval or merge timestamp.
  • Comments from the PR author do not count as review and are ignored for this transition.
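
A sketch applying both rules above (the event shape and field names are illustrative assumptions):

def pickup_time_hours(pr) -> float:
    # Review activity from anyone other than the PR author; the author's
    # own comments never count as review.
    reviewer_events = [
        e for e in pr["events"]
        if e["type"] in ("comment", "review") and e["actor"] != pr["author"]
    ]
    if reviewer_events:
        review_start = min(e["at"] for e in reviewer_events)
    else:
        # No comments or reviews at all: fall back to the first approval
        # or the merge timestamp, per the rule above.
        review_start = pr.get("first_approved_at") or pr["merged_at"]
    return (review_start - pr["created_at"]).total_seconds() / 3600
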
Why it matters

Pickup Time captures reviewer responsiveness and prioritization. Long Pickup Time often indicates review bottlenecks, unclear ownership, or overloaded reviewers.


Review Time

Definition

Time from the start of review to when a PR is merged.

Calculation

From the first non-author review activity until the PR is merged:

Review Time = pr_merged_at − first_comment_at
  • Comments or reviews from the PR author (self-reviews) are ignored when determining the review start.
  • If a PR is approved or merged without any review activity, LinearB records Review Time as 0 for that PR.
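
Using the same illustrative event shape as the Pickup Time sketch above:

def review_time_hours(pr) -> float:
    reviewer_events = [
        e for e in pr["events"]
        if e["type"] in ("comment", "review") and e["actor"] != pr["author"]
    ]
    if not reviewer_events:
        return 0.0  # approved or merged with no review activity
    review_start = min(e["at"] for e in reviewer_events)
    return (pr["merged_at"] - review_start).total_seconds() / 3600
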
Why it matters

Review Time reflects how smoothly code review is happening. Long Review Time can signal unclear ownership, overloaded reviewers, or process friction.


Deploy Time

Definition

Time from when a PR is merged until it is released to production.

Calculation

From merge to production release:

Deploy Time = released_at − pr_merged_at
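
A minimal sketch of this subtraction, where released_at comes from whichever release-detection method is configured (see Configuration below):

from datetime import datetime

def deploy_time_hours(pr_merged_at: datetime, released_at: datetime) -> float:
    # released_at might be a Git tag timestamp or a Deployment API event,
    # depending on the configured release-detection method.
    return (released_at - pr_merged_at).total_seconds() / 3600
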
Why it matters

Deploy Time shows how long it takes to move merged work into the hands of users. Long Deploy Time can indicate manual release processes, complex approvals, or slow CI/CD.

Configuration

An Admin user can configure how LinearB detects releases under: Settings → Advanced → Release Detection. Supported methods include:

  • Using Git tags on a release branch.
  • Pull requests into a dedicated release branch (e.g., main, master).
  • Direct merges into a dedicated release branch.
  • API Integration (Deployment API).

Time to Approve

Definition

Time from the start of review to the first approval.

Calculation

From first reviewer comment/review to first approval:

Time to Approve = first_approval_at − first_reviewer_comment_or_review_at
  • PRs approved without any prior comments or explicit reviews are treated as having a Time to Approve of 0.
Why it matters

Time to Approve highlights how quickly reviewers make a decision once they start engaging with the PR. Long Time to Approve can point to unclear standards, over-scoped PRs, or misaligned expectations.


Time to Merge

Definition

Time from first approval until the PR is merged.

Calculation

From first approval to merge:

Time to Merge = pr_merged_at − first_approval_at
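
A sketch covering this metric together with Time to Approve above (the timestamp field names are assumptions):

def time_to_approve_hours(pr) -> float:
    if pr.get("first_review_activity_at") is None:
        return 0.0  # approved without prior comments or reviews
    return (pr["first_approval_at"]
            - pr["first_review_activity_at"]).total_seconds() / 3600

def time_to_merge_hours(pr) -> float:
    return (pr["merged_at"] - pr["first_approval_at"]).total_seconds() / 3600
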
Why it matters

Time to Merge highlights friction after approval—such as waiting for CI, additional approvals, or release coordination. Large gaps here can hide delays that aren’t visible in review throughput alone.


Quality Metrics

Pull Request Size (PR Size)

Definition

The final difference between the head branch and the base branch when the pull request is created. It reflects the net change in code after all modifications have been applied.

Calculation

Final diff at PR creation:

PR Size = additions + deletions + modifications
Why it matters

Smaller PRs are easier and safer to review. Large PRs tend to increase review time, reduce review quality, and raise the risk of defects in production.


Review Depth

Definition

The average number of comments made during review. This includes general comments, suggestions, or requests for changes.

Calculation

Average comments per PR over the selected period.

Why it matters

Review Depth reflects how thorough reviews are. Too low can indicate rubber-stamping; too high may signal overly large or unclear PRs.


PRs Merged Without Review

Definition

The number of PRs merged with no review activity.

Calculation

Count of merged PRs where the review count is 0 during the selected period.
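
A sketch computing this count alongside PR Size and Review Depth from a list of merged-PR records (field names are assumptions, and review comments stand in for review activity):

merged_prs = [
    {"additions": 120, "deletions": 30, "modifications": 10, "review_comments": 4},
    {"additions": 15,  "deletions": 2,  "modifications": 0,  "review_comments": 0},
]

pr_sizes = [p["additions"] + p["deletions"] + p["modifications"] for p in merged_prs]
review_depth = sum(p["review_comments"] for p in merged_prs) / len(merged_prs)
merged_without_review = sum(1 for p in merged_prs if p["review_comments"] == 0)
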

Why it matters

High “no review” merges can reveal bypassed controls and correlate with production issues or defects.


New Code Ratio

Definition

The percentage of changed lines that are net new code (as opposed to modifications of existing lines).

Calculation
New Code Ratio = new_lines_added ÷ total_lines_changed
Why it matters

Indicates the balance between creating new functionality vs. modifying existing code. Useful for understanding how much of the team’s effort is focused on net new value.


Refactor Ratio

Definition

The ratio of legacy code (for example, older than a configured age threshold) that was modified in the selected period.

Calculation
Refactor Ratio = legacy_code_lines_changed ÷ total_lines_changed
Why it matters

Shows investment in maintainability and technical debt reduction. Higher refactor activity can be healthy when aligned with strategy.


Rework Ratio

Definition

The ratio of recently changed code (for example, code changed within a recent time window) that is modified again in the selected period.

Calculation
Rework Ratio = recent_code_lines_changed ÷ total_lines_changed
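
A sketch of how New Code Ratio, Refactor Ratio, and Rework Ratio partition changed lines by the age of the code they touch (the age thresholds are configurable in LinearB; the values below are placeholders):

REWORK_WINDOW_DAYS = 21      # placeholder; recently changed code counts as Rework
LEGACY_THRESHOLD_DAYS = 365  # placeholder; older code counts as Refactor

def work_type(line_age_days):
    """Classify a changed line by the age of the code it touches.
    None means the line is net-new."""
    if line_age_days is None:
        return "new_code"
    if line_age_days <= REWORK_WINDOW_DAYS:
        return "rework"
    if line_age_days >= LEGACY_THRESHOLD_DAYS:
        return "refactor"
    return "other"

ages = [None, None, 5, 400, 700]  # ages (in days) of five changed lines
total = len(ages)
new_code_ratio = sum(work_type(a) == "new_code" for a in ages) / total  # 0.4
rework_ratio   = sum(work_type(a) == "rework"   for a in ages) / total  # 0.2
refactor_ratio = sum(work_type(a) == "refactor" for a in ages) / total  # 0.4
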
Why it matters

High rework can signal churn or instability in new work. It is a quality-risk signal that can indicate unclear requirements or rushed changes.


PR Maturity Ratio

Definition

A measure of how much a PR changes after it is first opened, indicating how “ready” it was at submission time.

Conceptual Calculation

LinearB compares the state of the branch at two points:

  • At PR creation – the initial state of the branch.
  • At PR closure – the final state after all changes.

The ratio reflects the extent of changes made after initial submission (for example, additional commits to address feedback, fixes, or rework).
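
One way to approximate such a ratio, assuming we can count the lines changed at PR creation versus lines added after opening (not necessarily LinearB's exact formula):

def pr_maturity_ratio(lines_at_creation: int, lines_added_after_open: int) -> float:
    """Share of the final PR that was already present at submission.
    1.0 means nothing changed after the PR was opened."""
    final_total = lines_at_creation + lines_added_after_open
    if final_total == 0:
        return 1.0
    return lines_at_creation / final_total

# A PR opened with 200 changed lines that picks up 50 more lines of
# review fixes has a maturity of 0.8.
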

Why it matters

PR Maturity highlights how well-prepared a PR is before being submitted.

  • Quality indicator: Shows the readiness of work at submission.
  • Efficient reviews: Higher maturity often leads to smoother, faster reviews.
  • Process insight: Low maturity can signal a need for better pre-review checks, clearer acceptance criteria, or smaller PRs.

Throughput Metrics

Code Changes

Definition

Total lines of code changed during the selected period.

Calculation
Code Changes per day = (additions + modifications + deletions) ÷ days_in_period
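
All of the per-day throughput metrics in this section normalize a raw total by the number of days in the period; for example (illustrative values):

days_in_period = 14

code_changes_per_day = (1_200 + 300 + 450) / days_in_period  # additions + modifications + deletions
commits_per_day      = 210 / days_in_period
prs_opened_per_day   = 42 / days_in_period
merge_frequency      = 35 / days_in_period  # PRs merged per day
deploy_frequency     = 10 / days_in_period  # releases per day
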
Why it matters

Provides a high-level view of code volume and output. If you are 2–3 days into an iteration and this value remains low, it may indicate poorly defined tasks or hidden bottlenecks.


Commits

Definition

Total number of commits made in the selected period.

Calculation
Commits per day = total_commits ÷ days_in_period
Why it matters

Shows commit activity levels and can be used to understand participation patterns over time.


PRs Opened

Definition

Number of new pull requests created in the selected period.

Calculation
PRs Opened per day = total_prs_created ÷ days_in_period
Why it matters

Measures how much work is entering the review pipeline and helps balance intake vs. completion.


Merge Frequency

Definition

Average number of PRs merged per day.

Calculation
Merge Frequency = total_prs_merged ÷ days_in_period
Why it matters

A cadence signal for delivery momentum. A healthy merge frequency often correlates with smaller, more frequent releases.


Deploy Frequency

Definition

Average number of deployments per day.

Calculation
Deploy Frequency = total_releases ÷ days_in_period
Why it matters

One of the DORA metrics; it indicates how often code reaches users. Higher deploy frequency (with healthy quality) is associated with more adaptable, resilient teams.


Done Branches

Definition

Branches that reached a “Merged” or “Released” state in the selected period.

Calculation

Count of branches with status in {merged, released}.

Why it matters

Tracks closure rate and helps ensure work is not left lingering as long-lived branches.


Coding Days*

Definition

Normalized count of developer-days with code activity.

Calculation

For each day, measure the fraction of team members who committed at least once. Aggregate across the selected period to produce a normalized value.
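
A sketch of one plausible normalization (LinearB's exact aggregation may differ):

def coding_days(daily_committers: list[set], team_size: int) -> float:
    """daily_committers holds, for each day in the period, the set of
    team members who committed at least once that day."""
    return sum(len(devs) / team_size for devs in daily_committers)

# A 4-person team where 2, 4, and 1 members committed on three days
# yields (0.5 + 1.0 + 0.25) = 1.75 normalized coding days.
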

Why it matters

Highlights engagement consistency and broad participation across the team, rather than only counting raw volume.

*Metric available by request. To request access, please contact your LinearB team.


Balance & Work Breakdown Metrics

Active Days

Definition

Unique days with any Git activity (commits, PRs, reviews) in the selected period.

Calculation

Count of unique days where developer activity > 0.

Why it matters

Shows participation patterns over time and can highlight potential burnout (activity every day) or idle periods.


Active Branches

Definition

Number of branches considered active (not merged or deleted) during the period.

When a branch is “active”
  • It has commits, PRs, or merges within the configured stale period (from any contributor).
  • It has not been merged or closed by the end of the specified time window.
Calculation

Count of branches with status = active.
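
A sketch of this filter using the two conditions above (field names and the stale window are illustrative):

from datetime import datetime, timedelta

STALE_PERIOD = timedelta(days=30)  # placeholder; the stale period is configurable

def is_active(branch, window_end: datetime) -> bool:
    # Commits, PRs, or merges within the stale period...
    recently_touched = branch["last_activity_at"] >= window_end - STALE_PERIOD
    # ...and not merged or closed by the end of the time window.
    still_open = branch["merged_at"] is None and branch["closed_at"] is None
    return recently_touched and still_open

# Count of active branches:
# active_branches = sum(is_active(b, window_end) for b in branches)
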

Why it matters

Active Branches act as a proxy for WIP. Too many active branches can slow development flow and increase context switching.


Work Breakdown

Definition

The distribution of work types (for example, New Code, Refactor, Rework) over the selected period.

Why it matters

Helps you see whether engineering effort is aligned with strategy—such as balancing new feature work, maintenance, and refactoring.


Project Management / DORA Metrics

MTTR (Mean Time to Restore)

Definition

Average time to restore service after a production incident.

Conceptual Calculation

Average duration from incident start to incident resolution:

MTTR = (sum of all incident resolution durations) ÷ (number of incidents)
Why it matters

A core DORA metric that captures resilience and recovery speed. Lower MTTR indicates the team can quickly mitigate customer impact when failures occur.

Configuration

An Admin can define which issues represent incidents and which fields correspond to start/end times under: Company Settings → Project Management → Incidents.


CFR (Change Failure Rate)

Definition

Percentage of releases that cause a failure or incident.

Calculation
CFR = failed_releases ÷ total_releases
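
A sketch computing both DORA metrics in this section (MTTR above and CFR) from incident and release records; field names are assumptions:

incidents = [
    {"started_at": 0, "resolved_at": 4},  # durations expressed in hours for brevity
    {"started_at": 0, "resolved_at": 2},
]
mttr_hours = sum(i["resolved_at"] - i["started_at"] for i in incidents) / len(incidents)  # 3.0

total_releases = 50
failed_releases = 4  # releases linked to an incident
cfr = failed_releases / total_releases  # 0.08 → 8%
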
Why it matters

CFR is a DORA metric that measures release quality and change stability. High CFR indicates that deployments frequently introduce production problems.

Configuration

An Admin can define how LinearB recognizes incidents for CFR under: Company Settings → Project Management → Incidents.


AI-Specific Metrics

AI Code Review Metrics

Definition

Quantitative indicators for AI-assisted code reviews—for example, how often AI suggestions are accepted, ignored, or followed by further changes.

Example calculation approaches

(Depending on your specific AI setup)

  • Share of PRs that include AI review suggestions.
  • Ratio of accepted vs. dismissed AI suggestions.
  • Follow-up changes or rework on AI-assisted PRs.
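
For example, a sketch of the accepted-vs-dismissed ratio (entirely illustrative; the fields depend on how your AI review tool reports suggestions):

suggestions = [
    {"status": "accepted"}, {"status": "accepted"}, {"status": "dismissed"},
]
accepted = sum(s["status"] == "accepted" for s in suggestions)
dismissed = sum(s["status"] == "dismissed" for s in suggestions)
acceptance_ratio = accepted / (accepted + dismissed)  # 2/3 here
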
Why it matters

AI Code Review metrics help you understand AI adoption and impact: how often AI is used, whether it is trusted, and where it adds—or fails to add—value in the review process.
