AI Metrics Explained

Updated by Steven Silverstone

The AI Metrics Glossary explains the metrics shown in AI Analytics. Use this page to understand what each metric measures, what it indicates, and how to interpret changes over time.

Metrics may vary by data availability and configuration. If a metric is not shown, your organization may not have the required data source enabled for that period.
Not seeing a metric, or seeing an out-of-sync message? See Troubleshooting Missing AI Metrics for required integrations and common fixes.

AI Adoption & Usage

AI Active Users

The number of developers who used at least one supported AI tool during the selected period. “Active” is based on recorded AI activity (for example, through API integrations) and reflects adoption in practice rather than assigned licenses.

AI Adoption Rate

The percentage of Git active users who are also AI active users in the selected period. This metric helps you understand how broadly AI tools are being used across engineers who are actively contributing code.
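
In practice this reduces to a set overlap: count the AI active users among the Git active users and divide. A minimal sketch, with hypothetical user IDs rather than real product data:

```python
# Hypothetical sets of user IDs for the selected period.
git_active_users = {"dev01", "dev02", "dev03", "dev04", "dev05"}
ai_active_users = {"dev02", "dev03", "dev05"}

# AI Adoption Rate: share of Git active users who are also AI active users.
overlap = git_active_users & ai_active_users
adoption_rate = len(overlap) / len(git_active_users) * 100
print(f"AI Adoption Rate: {adoption_rate:.0f}%")  # 60%
```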

AI Intensity

The average number of AI actions per AI active user in the selected period. Higher intensity typically means deeper day-to-day usage, while lower intensity can indicate occasional or lightweight adoption.
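
For example, 1,500 recorded AI actions across 30 AI active users works out to an intensity of 50 actions per user. A minimal sketch, with hypothetical totals:

```python
# Hypothetical totals for the selected period.
total_ai_actions = 1500
ai_active_users = 30

# AI Intensity: average AI actions per AI active user.
intensity = total_ai_actions / ai_active_users
print(f"AI Intensity: {intensity:.1f} actions per user")  # 50.0
```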

Suggestions Volume

The total number of AI-generated suggestions recorded in the selected period. Suggestions include actions such as code completions, tab completions, or tool invocations, depending on the AI tools connected.

Suggestion Acceptance

The percentage of AI-generated suggestions that developers accepted. This metric is a quality and usefulness signal, and it often changes when teams adjust prompting habits, coding patterns, or AI tool settings.
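
For example, 2,600 accepted suggestions out of a Suggestions Volume of 8,000 gives an acceptance rate of 32.5%. A minimal sketch, with hypothetical counts:

```python
# Hypothetical suggestion counts for the selected period.
suggestions_shown = 8000      # Suggestions Volume
suggestions_accepted = 2600

# Suggestion Acceptance: share of shown suggestions that were accepted.
acceptance_rate = suggestions_accepted / suggestions_shown * 100
print(f"Suggestion Acceptance: {acceptance_rate:.1f}%")  # 32.5%
```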

Code Contribution (LOC)

The number of lines of code contributed in pull requests where AI activity was present during the selected period. This highlights how much code was produced in workflows that involved AI, but it does not mean that AI generated all of those lines.


AI Throughput Impact

PRs Opened

The number of pull requests opened by developers who used AI tools during the selected period. This metric is a throughput indicator and is best interpreted alongside team size and normal PR creation patterns.

Code Changes

The volume of code changes in AI-assisted pull requests during the selected period. This typically reflects change size and activity level, and it is most useful when compared across similar time ranges and teams.

Merge Frequency

How often AI-assisted pull requests are merged during the selected period. Higher merge frequency can indicate faster completion and integration, but it should be viewed together with quality and review metrics.

Deployment Frequency

The rate of deployments associated with AI-assisted work during the selected period. This connects AI usage to delivery output, but it depends on how deployments are tracked and mapped in your environment.
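
Merge Frequency and Deployment Frequency are both counts normalized over the selected period. For example, 24 merges over a four-week period is a merge frequency of 6 per week. A minimal sketch, with hypothetical counts:

```python
# Hypothetical counts over a four-week selected period.
weeks = 4
merged_prs = 24    # merged AI-assisted pull requests
deployments = 10   # deployments mapped to AI-assisted work

print(f"Merge Frequency: {merged_prs / weeks:.1f} per week")        # 6.0
print(f"Deployment Frequency: {deployments / weeks:.1f} per week")  # 2.5
```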


AI Delivery Impact

Cycle Time

The time from the first commit to merge for AI-assisted pull requests. This is a core delivery speed metric and is useful for understanding whether AI usage correlates with faster end-to-end completion.

Coding Time

The time spent coding before a pull request is created for AI-assisted work. This focuses on the pre-PR phase and can help distinguish faster implementation from faster review and merge.

Checkup Time

The time from the first AI comment to the first human review comment. This measures how quickly human review begins after AI involvement and can reveal whether AI-supported workflows are reviewed promptly.

Pickup Time

The time from pull request creation to the first review activity for AI-assisted pull requests. This is a responsiveness metric that reflects how quickly reviews start, which can heavily influence overall delivery speed.

Review Time

The duration of the review process for AI-assisted pull requests. Review time typically includes the period from first review activity until merge, and it is influenced by both reviewer availability and iteration loops.
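
The delivery metrics above are intervals between pull request events. A minimal sketch of how Coding Time, Pickup Time, Checkup Time, Review Time, and Cycle Time relate, using hypothetical timestamps (exact event definitions may differ in your configuration):

```python
from datetime import datetime

# Hypothetical event timestamps for one AI-assisted pull request.
first_commit     = datetime(2024, 5, 1, 9, 0)
pr_opened        = datetime(2024, 5, 1, 15, 0)
first_ai_comment = datetime(2024, 5, 1, 15, 30)
first_review     = datetime(2024, 5, 2, 10, 0)
merged           = datetime(2024, 5, 3, 11, 0)

coding_time  = pr_opened - first_commit         # pre-PR implementation phase
pickup_time  = first_review - pr_opened         # wait before review starts
checkup_time = first_review - first_ai_comment  # first AI comment to human review
review_time  = merged - first_review            # first review activity to merge
cycle_time   = merged - first_commit            # end to end: first commit to merge

print(f"Coding: {coding_time}, Pickup: {pickup_time}, Checkup: {checkup_time}")
print(f"Review: {review_time}, Cycle: {cycle_time}")
```

In this example the phases add up: 6 hours of coding plus 19 hours of pickup plus 25 hours of review equals the 50-hour cycle time.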


AI Quality Impact

PR Size

The typical size of AI-assisted pull requests, based on the volume of changes. PR size is a quality and maintainability signal, since very large pull requests can be harder to review thoroughly and more likely to introduce issues.

PR Rework

The amount of change that occurs after review feedback in AI-assisted pull requests. Higher rework can indicate review-driven improvement, but persistent high rework may point to unclear requirements, rushed changes, or low first-pass quality.

PR Maturity

How complete and review-ready AI-assisted pull requests are when they are opened. Higher maturity typically means fewer missing pieces at creation time, which can reduce back-and-forth and speed up approval.

PR Refactor

The portion of AI-assisted work that reflects refactoring rather than new feature delivery. This helps distinguish output focused on codebase improvement from output focused on adding or changing product behavior.

PR Without Review

The number or percentage of AI-assisted pull requests that were merged without a recorded review. This is a governance and risk metric, since skipping review can increase the chance of defects or inconsistent standards.
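
A minimal sketch of how this could be computed, assuming hypothetical pull request records with a recorded review count:

```python
# Hypothetical records for AI-assisted pull requests in the period.
pull_requests = [
    {"id": 101, "merged": True,  "review_count": 2},
    {"id": 102, "merged": True,  "review_count": 0},
    {"id": 103, "merged": False, "review_count": 0},
    {"id": 104, "merged": True,  "review_count": 1},
]

# PR Without Review: merged pull requests with no recorded review.
merged = [pr for pr in pull_requests if pr["merged"]]
unreviewed = [pr for pr in merged if pr["review_count"] == 0]
share = len(unreviewed) / len(merged) * 100
print(f"{len(unreviewed)} of {len(merged)} merged PRs ({share:.0f}%) had no review")
```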
