AI Code Review Metrics

Track how your team is using gitStream AI—see how many pull requests are reviewed, how often suggestions are accepted, and how many lines of code are improved through AI-assisted reviews.

Updated by Steven Silverstone

LinearB’s AI Code Review surfaces potential risks across pull requests, including bugs, security vulnerabilities, performance concerns, and maintainability issues. Unlike traditional rule-based tools that only catch clear-cut problems, the AI takes a broader, more cautious approach.

This means:

  • You may see more findings than expected, including some that turn out to be false positives
  • The AI is intentionally over-inclusive to ensure nothing critical is missed
  • The goal is to provide insight, not to block or approve PRs

We designed it this way to help teams stay ahead of issues and make more informed decisions during review.

As the AI continues to learn from real usage and feedback, the number of false positives should decrease over time, improving both the precision and relevance of future reviews.

LinearB’s AI Code Review engine surfaces insights that go beyond code completion. These findings help identify potential risks and improvement areas across all AI-reviewed pull requests.

Each finding category highlights a different aspect of code quality, enabling engineering leaders and developers to track trends, uncover root causes, and take proactive steps to improve the overall health of the codebase.

You can also click the Share icon to generate a direct link to the filtered view for collaboration or documentation purposes.

AI Review Coverage

The percentage of pull requests opened in the selected timeframe that received at least one AI review. This shows how broadly AI code review is applied across new PRs.
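
As an illustrative sketch (not a LinearB API; the inputs correspond to the Reviewed PRs and PRs Opened metrics defined below), the coverage arithmetic looks like this:

```python
# Minimal sketch of the coverage calculation; values are illustrative,
# and LinearB computes this internally from your connected repos.
reviewed_prs = 42   # PRs opened in the timeframe that got at least one AI review
prs_opened = 60     # all PRs opened in the timeframe

coverage_pct = (reviewed_prs / prs_opened * 100) if prs_opened else 0.0
print(f"AI Review Coverage: {coverage_pct:.0f}%")  # -> AI Review Coverage: 70%
```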

Reviewed PRs

The number of unique pull requests that received at least one gitStream AI review during the selected timeframe. Use this metric to track the breadth of AI review coverage over time. A higher number means more PRs are benefiting from AI review, while a drop may point to configuration gaps or reduced PR activity.
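
For illustration only, the count deduplicates repeat reviews of the same pull request; the PR identifiers below are hypothetical:

```python
# Hypothetical review events; a PR reviewed more than once still counts once.
review_events = ["PR-101", "PR-102", "PR-101", "PR-205"]
reviewed_prs = len(set(review_events))
print(reviewed_prs)  # -> 3
```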

PRs Opened

The total number of pull requests created during the selected timeframe. This count includes all PRs opened across repositories connected to LinearB.

Potential Issues Identified

The number of potential issues identified by gitStream AI during the selected timeframe. This reflects the total findings across all categories (bugs, readability, performance, etc.). A higher count indicates more opportunities for improvement, while tracking trends over time helps assess overall code quality.
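
A minimal sketch of how the total relates to the per-category findings described later in this article; the counts and category keys are illustrative, not LinearB fields:

```python
# Illustrative only: the metric is the sum of findings across all categories.
findings_by_category = {
    "bugs": 18,
    "security": 4,
    "performance": 7,
    "readability": 25,
    "maintainability": 11,
}
potential_issues_identified = sum(findings_by_category.values())
print(potential_issues_identified)  # -> 65
```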

Issues Resolved

The number of AI-identified issues that were resolved during the selected timeframe. An issue is considered resolved if it was fixed using an AI review suggestion or by the developer independently. LinearB verifies whether an issue is resolved the next time the AI review process runs. This metric helps track adoption and follow-through on AI-flagged findings.
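
The sketch below mirrors the resolution rule as described above, under the assumption of hypothetical field names (neither the Finding type nor its fields are LinearB's data model): an issue counts as resolved whether it was fixed by committing the AI suggestion or by an independent change, and the check happens when the review re-runs.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    # Hypothetical fields for illustration; not LinearB's actual data model.
    id: str
    fixed_via_suggestion: bool = False   # resolved by committing the AI suggestion
    fixed_independently: bool = False    # resolved by the developer's own change

def count_resolved(findings: list[Finding]) -> int:
    """Count findings that no longer appear when the AI review re-runs."""
    return sum(1 for f in findings if f.fixed_via_suggestion or f.fixed_independently)

findings = [
    Finding("F-1", fixed_via_suggestion=True),
    Finding("F-2"),
    Finding("F-3", fixed_independently=True),
]
print(count_resolved(findings))  # -> 2
```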

Lines Modified to Resolve Issues

The number of code lines pushed using the “Commit Suggestion” option in the AI review to resolve AI-flagged findings.

This value shows how many lines of code were directly committed based on AI Review suggestions. A higher number typically means the AI handled more complex fixes and reflects more developer time saved through automation.
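
As a hedged sketch under the same definition, this metric can be thought of as summing the changed lines of commits created through the Commit Suggestion option; the commit records here are hypothetical:

```python
# Hypothetical commit records; only lines pushed via "Commit Suggestion" count.
commits = [
    {"via_commit_suggestion": True,  "lines_changed": 12},
    {"via_commit_suggestion": False, "lines_changed": 40},  # manual fix, not counted
    {"via_commit_suggestion": True,  "lines_changed": 3},
]
lines_modified = sum(c["lines_changed"] for c in commits if c["via_commit_suggestion"])
print(lines_modified)  # -> 15
```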

PRs With Identified Bugs

This metric highlights logic and correctness issues flagged by the AI Code Review engine, including improper control flow, missing validations, or faulty error handling. These are early indicators of defects that could impact runtime behavior or stability. Investigating frequent bugs may lead to improved review processes, better test coverage, or targeted developer training.

PRs With Identified Security Issues

This metric tracks potential security risks identified by the AI engine—such as improper input validation, authentication flaws, insecure communication, or access control issues. It helps security-conscious teams monitor adherence to secure coding practices. Recurring findings may suggest the need for stronger guidelines or security tooling.

PRs With Identified Performance Issues

This metric highlights code inefficiencies, such as algorithmic slowdowns, excessive memory usage, or inefficient network/data handling. Monitoring this helps teams catch potential performance regressions before they impact users. Frequent issues may point to opportunities for optimization or improved engineering standards.

PRs With Identified Readability Issues

This metric measures issues affecting code clarity—such as naming conventions, structural complexity, or inconsistent formatting. High counts may indicate reduced maintainability or onboarding friction. Findings can help teams improve style guides and promote clearer code practices.

PRs With Identified Maintainability Issues

This metric captures structural concerns that affect long-term code health, including tight coupling, duplication, poor modularity, or lack of abstraction. Addressing these early helps reduce technical debt and improve release agility. This data supports strategic refactoring and architectural improvements.
