Review Time Metric
Definition
Review Time measures the time spent reviewing a pull request (PR) from the first review activity until the PR is merged.
Only review activity that occurs after the PR exits draft status is considered.
For PRs merged without any review activity, Review Time is not calculated.
Calculation
For PRs merged with a review:
Review Time = PR merged at - PR first reviewed at
Where:
- PR first reviewed at = timestamp of the first reviewer comment or approval
- PR merged at = timestamp when the PR was merged
For PRs merged without a review:
- No Review Time value is calculated.
This ensures the metric reflects actual review effort.
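The calculation above can be sketched in a few lines. This is a minimal illustration, not the product's actual implementation: the function name and use of Python `datetime` objects are assumptions for the example.

```python
from datetime import datetime, timedelta
from typing import Optional

def review_time(first_reviewed_at: Optional[datetime],
                merged_at: datetime) -> Optional[timedelta]:
    """Review Time = PR merged at - PR first reviewed at.

    Returns None when the PR was merged without any review activity,
    since no Review Time is calculated in that case."""
    if first_reviewed_at is None:
        return None  # merged without review: excluded from the metric
    return merged_at - first_reviewed_at

# A PR first reviewed at 09:00 and merged at 15:30 the same day
# yields a Review Time of 6.5 hours.
rt = review_time(datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 15, 30))
```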

Data Sources
Review Time is derived from:
- PR creation and draft status transitions
- Reviewer activity (first comment or approval)
- PR merge events

Dashboard Behavior
On the Metrics Dashboard:
- The headline value reflects the selected statistic (e.g., P75) for the selected date range and filters.
- The trend chart shows the metric over time based on the selected grouping.
- The tooltip displays the value for the specific time bucket (e.g., daily).
PRs without review activity do not contribute to the Review Time calculation.
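To illustrate how a headline statistic such as P75 might be derived from a set of PRs, here is a rough sketch. It assumes a nearest-rank percentile and represents unreviewed PRs as `None`; the dashboard's exact percentile method and data model may differ.

```python
def percentile(values, p):
    """Nearest-rank percentile: the value at rank ceil(p/100 * n)."""
    s = sorted(values)
    k = -(-p * len(s) // 100) - 1  # ceil(p * n / 100) - 1, as a 0-based index
    return s[max(0, k)]

# Review Times in hours for PRs in the selected date range;
# None marks PRs merged without review (excluded from the metric).
review_times_hours = [4.0, None, 12.5, 30.0, None, 8.0]
eligible = [v for v in review_times_hours if v is not None]
headline_p75 = percentile(eligible, 75)
```

Filtering out the `None` entries before aggregating mirrors the rule that PRs without review activity do not contribute to the calculation.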

Why This Metric Matters
Review Time helps teams:
- Identify review bottlenecks
- Monitor review responsiveness
- Balance speed and code quality
- Improve delivery flow
Reducing Review Time (without reducing review depth) can improve overall delivery velocity.

Benchmarking Guidance
- Review Time should ideally be under 2 days for standard PRs.
- Large or complex PRs may skew results.
- Breaking PRs into smaller units helps maintain healthy review cycles.

Limitations
- PRs merged without review are excluded.
- Very large PRs may distort averages or percentiles.
- Iterative review cycles (multiple comments, change requests) can extend duration.

Stakeholder Use Cases
Developers
- Optimize PR size to reduce review latency.
Reviewers
- Monitor personal response time to maintain review efficiency.
Engineering Managers
- Detect review bottlenecks across teams.
- Balance throughput and code quality.