
Reviews Metric

The Reviews Metric tracks the total number of review actions taken on pull requests, offering insight into team collaboration, code quality practices, and individual contributions to the review process.

Updated by Steven Silverstone

Definition

Reviews measures the total number of review cycles completed within the selected time range.

A review cycle is defined as a distinct review action performed on a pull request (PR), including:

  • Adding a single standalone comment
  • Submitting a review (approve, comment, or request changes)

If multiple separate review actions are performed on the same PR, each action is counted as an individual review cycle.

A single submitted review containing multiple comments is counted as one review cycle.

How the Metric Is Calculated

Reviews is calculated as: Total number of review cycle events within the selected time range

In the dashboard, this value is normalized as:

Reviews per day

The headline value represents:

Total review cycles ÷ Number of days in the selected time range

This normalization allows comparison across different time ranges.
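The normalization above can be sketched as follows. This is a minimal illustration, not the dashboard's actual implementation; the function name and the representation of review events as a list of dates are assumptions.

```python
from datetime import date

def reviews_per_day(review_event_dates, start, end):
    """Headline value: total review cycles divided by the number of
    days in the selected time range (inclusive of both endpoints).

    `review_event_dates` is a hypothetical list of dates, one entry
    per review cycle event.
    """
    days = (end - start).days + 1  # inclusive day count
    total = sum(1 for d in review_event_dates if start <= d <= end)
    return total / days

events = [date(2024, 1, 1), date(2024, 1, 1), date(2024, 1, 3)]
print(reviews_per_day(events, date(2024, 1, 1), date(2024, 1, 7)))  # 3 cycles / 7 days
```

Because the denominator is the number of days in the range, a shorter range with the same activity produces a higher headline value, which is what makes different ranges comparable.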

Handling Multiple Review Cycles on the Same PR

If a user performs multiple review actions on the same PR:

  • Each standalone comment counts as one review cycle.
  • Each submitted review counts as one review cycle.
  • Multiple review submissions on the same PR are counted separately.
  • A single submission with multiple comments is counted once.

This ensures the metric reflects the number of distinct review actions, not the number of comments within a submission.
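The counting rules above can be expressed as a short sketch. The event shapes (a `type` field, with a submitted review arriving as one event regardless of how many comments it bundles) are illustrative assumptions, not the actual event schema.

```python
def count_review_cycles(events):
    """Count distinct review actions on a single PR.

    Each standalone comment and each review submission is one cycle;
    comments bundled inside one submission add no extra cycles.
    """
    cycles = 0
    for event in events:
        if event["type"] == "standalone_comment":
            cycles += 1
        elif event["type"] == "review_submitted":
            # One cycle regardless of how many comments it contains.
            cycles += 1
    return cycles

events = [
    {"type": "standalone_comment"},
    {"type": "review_submitted", "comments": 5},  # still one cycle
    {"type": "review_submitted", "comments": 1},
]
print(count_review_cycles(events))  # 3
```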

Team Scope Clarification

Review metrics are calculated at the PR level.

If a PR is authored or contributed to by someone within the selected team scope, its review cycles are included in the team’s metrics — even if the reviewer is outside the selected team (e.g., QA or external reviewer).

This ensures the full review lifecycle is captured for team-related work.
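The team-scope rule can be sketched as a filter over PRs. The PR record shape (author, contributors, and a precomputed cycle count) is an assumption for illustration; note that the reviewer's team membership plays no role in the check.

```python
def team_review_cycles(prs, team_members):
    """Sum review cycles for PRs authored or contributed to by anyone
    in the selected team, regardless of who performed the reviews.
    """
    total = 0
    for pr in prs:
        participants = {pr["author"], *pr.get("contributors", [])}
        if participants & team_members:  # PR touched by the team
            total += pr["review_cycles"]
    return total

prs = [
    {"author": "alice", "contributors": [], "review_cycles": 4},      # team author; external reviewers still count
    {"author": "zara", "contributors": ["bob"], "review_cycles": 2},  # team contributor
    {"author": "zara", "contributors": [], "review_cycles": 3},       # outside team scope
]
print(team_review_cycles(prs, {"alice", "bob"}))  # 6
```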

How the Metric Is Displayed in the Dashboard

The metric card displays two types of values:

1. Headline Value (e.g., 40.71 Reviews per day)

The large number shown at the top represents the average number of review cycles per day across the selected time range.

This is a daily average — not a total count.

2. Time-Based Values in the Chart

The line chart shows the number of review cycles per time bucket (for example, per day).

Each point represents: The total number of review cycles within that specific time bucket

Clicking a point displays:

  • The total number of review cycles on that date

The headline is not produced by averaging the bucket values shown in the chart; it is calculated independently as the total number of review cycles divided by the number of days across the full selected range.
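The relationship between the chart buckets and the headline can be sketched as follows. This is an illustrative assumption about the computation: the headline divides by every day in the range, including days with zero review cycles, which is why it can differ from an average of only the points displayed.

```python
from collections import Counter
from datetime import date, timedelta

def bucket_and_headline(review_dates, start, end):
    """Return daily bucket totals plus the headline daily average.

    `review_dates` is a hypothetical list of dates, one per cycle.
    """
    days = (end - start).days + 1
    buckets = Counter(review_dates)
    # One bucket per calendar day, zero-filled for inactive days.
    per_day = [buckets.get(start + timedelta(n), 0) for n in range(days)]
    headline = sum(per_day) / days
    return per_day, headline

dates = [date(2024, 3, 1)] * 4 + [date(2024, 3, 3)] * 2
per_day, headline = bucket_and_headline(dates, date(2024, 3, 1), date(2024, 3, 4))
print(per_day)   # [4, 0, 2, 0]
print(headline)  # 1.5
```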

Why This Metric Is Useful

Reviews provides visibility into:

  • Review engagement levels
  • Collaboration intensity
  • Review workload distribution
  • Code quality assurance activity

Sustained increases may indicate:

  • Active collaboration
  • Increased PR throughput
  • Higher review participation

Sudden drops may indicate:

  • Review bottlenecks
  • Reduced collaboration
  • Workflow slowdowns

How to Interpret Reviews

Reviews measures review activity volume, not review quality.

It should be interpreted alongside:

  • Review Depth
  • PR Size
  • PRs Merged Without Review
  • Time to Review
  • Merge Frequency

High review counts do not necessarily indicate thorough reviews.

Low review counts do not necessarily indicate weak quality controls.

Context matters — team structure, workflow style, and PR volume influence this metric.

Data Sources

Derived from:

  • Pull Request review events
  • Review submission events
  • Comment events associated with PRs
  • Repository and team scope filters

Tunable Configurations

Reviews may be influenced by:

  • Team scope filters
  • Repository filters
  • Inclusion of specific review event types
  • Branch inclusion/exclusion rules

Limitations

  • Measures review activity, not review quality.
  • Multiple small comments may inflate counts.
  • Does not measure depth or effectiveness of feedback.
  • Automated review bots may influence totals.
  • Small datasets may produce volatility.

Reviews reflects review cycle frequency, not code quality in isolation.

Stakeholder Use Cases

Engineering Managers

  • Monitor collaboration intensity.
  • Detect review bottlenecks.
  • Evaluate review workload balance.

Team Leads

  • Balance reviewer workload.
  • Ensure consistent review coverage.

Developers

  • Track review participation.
  • Identify imbalances in review contributions.

Product Leadership

  • Monitor review flow relative to delivery timelines.
  • Detect workflow slowdowns before release milestones.
