AI Insights

AI Insights gives you a real-time view of how AI is used across your codebase. See issues flagged during reviews, track adoption across commits and PRs, and measure usage of tools like GitHub Copilot and Cursor. Interactive charts let you drill into details such as active users, acceptance rates, and lines written. Metrics exclude bots and chat-based automation, giving you an accurate picture of true developer + AI collaboration.

Updated by Steven Silverstone

The AI Insights page gives you a real-time view into how AI is impacting your engineering workflow. By combining issue detection, adoption metrics, usage patterns, and repository configuration data, it helps you understand both how developers are using AI day to day, and where AI agents are formally embedded into your projects.

Overview

With AI Insights, you can do the following:

  • See issues flagged by AI during code reviews, with direct links back to the relevant pull requests in GitHub.
  • Track adoption metrics across commits, review comments, and pull requests—filtering for AI-only activity to focus on real developer collaboration with AI.
  • Monitor tool-level usage, such as GitHub Copilot and Cursor, including adoption rate, acceptance rate, and lines of code written with AI assistance.
  • Monitor repository-level adoption through rule files, which show where AI agents are configured and how widely they are spreading across your repos.

Together, these views give engineering managers the visibility to understand ROI, while giving developers transparency into how AI is influencing code quality, speed, and collaboration.

AI metrics reflect human + AI collaboration. Chatbots and automated system accounts are not included. If a commit lists both a developer and an AI tool as co-authors (for example, GitHub Copilot plus a human developer), it is counted as AI-assisted work.
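
As a rough illustration of how a co-authored commit could be classified, here is a minimal sketch. The agent list and regular expression are simplified assumptions for the example, not LinearB's actual implementation.

```python
# Illustrative sketch only -- not LinearB's implementation.
# A commit is treated as AI-assisted when a "Co-authored-by" trailer
# names a known AI agent alongside the human author.
import re

# Hypothetical subset of agent names; the full list LinearB monitors
# appears at the end of this article.
AI_AGENTS = {"copilot", "cursor", "claude", "devin-ai"}

CO_AUTHOR_RE = re.compile(
    r"^Co-authored-by:\s*(?P<name>[^<]+)", re.IGNORECASE | re.MULTILINE
)

def is_ai_assisted(commit_message: str) -> bool:
    """Return True if any co-author trailer names a known AI agent."""
    for match in CO_AUTHOR_RE.finditer(commit_message):
        name = match.group("name").strip().lower()
        if any(agent in name for agent in AI_AGENTS):
            return True
    return False

message = (
    "Fix flaky pagination test\n\n"
    "Co-authored-by: GitHub Copilot <copilot@github.com>\n"
)
print(is_ai_assisted(message))  # True -> counted as AI-assisted work
```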

Issues Feed

The Issues Feed panel shows issues flagged by AI during pull request reviews.

  • Each issue links directly to the relevant PR in GitHub for context.
  • AI comments are attributed separately from developer comments.
  • Bots and chat services are excluded, ensuring only meaningful developer and AI activity is tracked.

At the top of the page, the LinearB AI Code Review feed displays recent issues detected by AI:

  • Columns
    • Time – When the issue was detected.
    • Issues – Category and count of issues (Bug 🐞, Scope 🎯, Maintainability 🧹, Performance 🚀).
    • PR – The pull request where the issue was found. Clicking the link takes you directly to GitHub.
    • Repository – The repository containing the PR.

Example

  • A row showing 3 🐞 (bugs) means three potential bugs were flagged in that PR.
  • Why it matters: Instead of waiting for human reviewers to uncover problems, AI flags them instantly. That shortens review cycles, improves quality, and prevents costly rework later in development.

Click any PR name to jump straight into GitHub and review the flagged issues in context.

AI Adoption

The AI Adoption panel shows how AI contributes across three types of activity:

  • Commits – Number of commits, separated into manual and AI-assisted (commits where AI contributed as a co-author). Measured by scanning commit messages for co-authors and counting Agent names when present.
  • Review Comments – Number of code review comments split into human-written and AI-generated. Measured by checking PR comments and attributing those created by Agents.
  • PR Authors – Number of unique authors who opened PRs with AI involvement. Measured by tracking new PRs opened by Agents or with Agents listed as co-authors.
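
For illustration, the sketch below shows how these three buckets might be tallied from already-collected commit, comment, and PR records. The record shapes and agent list are assumptions for the example, not LinearB's data model.

```python
# Minimal sketch, not LinearB's pipeline: tally the three AI Adoption buckets.
# Record shapes ("co_authors", "author") and the agent list are assumed.
AI_AGENTS = {"copilot", "cursor", "claude"}

def is_agent(name: str) -> bool:
    return any(agent in name.lower() for agent in AI_AGENTS)

def adoption_summary(commits, review_comments, pull_requests):
    ai_commits = sum(
        1 for c in commits if any(is_agent(a) for a in c["co_authors"])
    )
    ai_comments = sum(1 for c in review_comments if is_agent(c["author"]))
    ai_pr_authors = {
        pr["author"]
        for pr in pull_requests
        if is_agent(pr["author"]) or any(is_agent(a) for a in pr["co_authors"])
    }
    return {
        "commits": {"ai_assisted": ai_commits, "manual": len(commits) - ai_commits},
        "review_comments": {"ai": ai_comments, "human": len(review_comments) - ai_comments},
        "pr_authors_with_ai": len(ai_pr_authors),
    }

print(adoption_summary(
    commits=[{"co_authors": ["GitHub Copilot"]}, {"co_authors": []}],
    review_comments=[{"author": "copilot"}, {"author": "alice"}],
    pull_requests=[{"author": "alice", "co_authors": ["Cursor"]}],
))
```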

At the top of this panel, you can toggle Show only AI activity to filter the view to AI-assisted actions.

Chatbot and bot accounts are excluded from all metrics.

This view helps you measure whether AI is becoming a consistent part of your workflow.

AI Rule Files

The AI Rule Files panel tracks repositories that include rule files, which are a strong indicator of AI agent adoption. Different AI agents support different file types and formats, so once data is available you’ll see which tools are being introduced across your repositories and how widely they are being used.

The panel shows the following:

  • Total repositories with rule files – the number of repositories that currently include at least one rule file. Measured by scanning all repos connected to LinearB.
  • Repositories without rule files – the number of repositories that don’t yet contain AI-specific rule files. Measured by subtracting repos with rule files from the total count.
  • Breakdown by agent – a comparison of which AI tools (e.g., Claude Code, Cursor) are being used and in how many repositories.
  • Repositories with multiple rule types – cases where more than one agent is being configured in the same repository. Measured by checking for overlaps (e.g., a repo containing both Claude and Copilot rule files).

This visibility helps you understand not just developer-level usage, but also where AI agents are being formally embedded into project workflows through configuration.
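
As an illustration of how this kind of detection can work, the sketch below checks repositories for a few well-known rule-file paths. The paths and agent names are examples only and may not match LinearB's actual detection list.

```python
# Illustrative sketch only. The rule-file paths per agent are common examples
# (e.g., CLAUDE.md, .cursorrules) and may not match LinearB's detection list.
from pathlib import Path

RULE_FILES = {
    "Claude Code": ["CLAUDE.md"],
    "Cursor": [".cursorrules", ".cursor/rules"],
    "GitHub Copilot": [".github/copilot-instructions.md"],
}

def agents_with_rules(repo_root: Path) -> set[str]:
    """Agents that have at least one rule file in the given repo checkout."""
    return {
        agent
        for agent, paths in RULE_FILES.items()
        if any((repo_root / p).exists() for p in paths)
    }

def rule_file_summary(repo_roots: list[Path]) -> dict:
    per_repo = {repo: agents_with_rules(repo) for repo in repo_roots}
    with_rules = [agents for agents in per_repo.values() if agents]
    return {
        "repos_with_rule_files": len(with_rules),
        "repos_without_rule_files": len(repo_roots) - len(with_rules),
        "repos_with_multiple_rule_types": sum(1 for a in with_rules if len(a) > 1),
    }

print(rule_file_summary([Path("repo-a"), Path("repo-b")]))
```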

Chatbot and bot accounts are excluded from all metrics.

AI Tools Usage

This panel measures usage of AI development tools in your organization. LinearB currently supports GitHub Copilot and Cursor.

For each tool, the following metrics are available:

  • Active users – number of developers actively using the tool. Measured by counting developers who accept AI code suggestions, use AI chat, or trigger PR summaries. Authentication-only events are excluded.
  • Acceptance rate – percentage of AI suggestions accepted into code. Measured by comparing the number of AI suggestions offered vs. accepted.
  • Code acceptance – a trend chart showing how many lines of code generated by AI were accepted over time. Measured by calculating the number of lines accepted from AI suggestions, grouped by day.

Chatbot and bot accounts are excluded from all metrics.
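
The acceptance rate described above is a straightforward ratio; here is a minimal illustration with made-up numbers.

```python
# Simple illustration of the acceptance-rate calculation described above.
def acceptance_rate(suggestions_offered: int, suggestions_accepted: int) -> float:
    """Percentage of AI suggestions accepted into code."""
    if suggestions_offered == 0:
        return 0.0
    return 100.0 * suggestions_accepted / suggestions_offered

# Example: 84 of 240 suggestions accepted -> 35.0%
print(acceptance_rate(240, 84))
```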

GitHub Copilot

LinearB tracks usage of GitHub Copilot using GitHub’s official Usage API.

  • Active Users – Total number of Copilot users with any activity in a given day, including receiving a suggestion, accepting a suggestion, or prompting chat. Authentication-only events are excluded.
  • Engaged Users – Users who actively interacted with Copilot features, such as accepting suggestions, prompting chat, or triggering a PR Summary. Authentication-only events are excluded.
  • Acceptance Rate – Percentage of Copilot suggestions accepted into code.
  • Code Acceptance (trend chart) – Trend of accepted Copilot-generated lines over time, grouped by day.
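
For teams that want to cross-check these numbers, GitHub exposes Copilot activity through its public API. The sketch below shows one way to pull daily org-level metrics; the endpoint and field names follow GitHub's public documentation at the time of writing and are not a description of LinearB's internal integration.

```python
# Hedged sketch: pull daily org-level Copilot metrics directly from GitHub.
# Endpoint and field names are based on GitHub's public Copilot metrics API
# docs and may change; this is not LinearB's internal implementation.
import os
import requests

ORG = "your-org"                    # hypothetical organization slug
TOKEN = os.environ["GITHUB_TOKEN"]  # token with access to Copilot metrics

response = requests.get(
    f"https://api.github.com/orgs/{ORG}/copilot/metrics",
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {TOKEN}",
        "X-GitHub-Api-Version": "2022-11-28",
    },
    timeout=30,
)
response.raise_for_status()

for day in response.json():  # one entry per day
    print(day["date"], day["total_active_users"], day["total_engaged_users"])
```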

Cursor

  • Active Users – Developers using Cursor daily.
  • Code Acceptance (trend chart) – Number of AI-generated code lines accepted, grouped by day.

Deeper Visibility into Coding Behavior

The AI Tools Usage panel (currently limited to GitHub Copilot) gives deeper visibility into coding behavior:

  • Active Users (%)
    • Percentage of developers actively using Copilot.
    • Click to see the number of users who are not using Copilot.
  • Acceptance Rate (%)
    • How often Copilot’s suggestions are accepted.
    • Click to see the breakdown of accepted vs. rejected suggestions.
  • Lines Written
    • Trend of accepted Copilot-generated lines over time.
    • Hover or click to see the number of lines written by date.

This allows you to quickly move from high-level percentages to the underlying counts and trends.

Example: If Active Users are high but Acceptance Rate is low, developers may not trust Copilot’s suggestions yet. If both trend upward, Copilot is delivering real value.

Adoption Funnel

The AI Tools Usage panel also provides a complete adoption funnel:

  1. Are developers trying Copilot?
  2. Are its suggestions being trusted?
  3. Is it driving meaningful code volume?

If a developer uses Copilot as a co-author, their work is included here as AI-assisted. Chatbot activity is not included.

Loading State

When data is still being fetched, AI Insights displays a loading message:

  • Full message:
    We’re still gathering your data. This may take a few moments the first time you load AI Insights, especially if you have a large number of repositories or pull requests. Once complete, you’ll see real-time insights into AI-detected issues, adoption metrics, tool usage, and repository rule files.
  • Short one-liner (for spinners/skeleton screens):
    Fetching AI Insights… your data will appear shortly.

Why It Matters

AI Insights translates raw activity into visibility. For managers, it means clarity into whether AI is reducing technical debt or adding noise, and whether adoption is strong enough to justify investment. For developers, it means understanding how AI impacts reviews, commits, and day-to-day workflows.

Most importantly, AI Insights ties adoption to measurable outcomes—helping answer the questions leadership always asks:

  • Are we shipping better code?
  • Are we moving faster?
  • Are we getting real ROI from our AI tools?

Glossary

  • AI-assisted work – Any commit or PR that lists both a human developer and an AI tool as co-authors.
  • Co-author – A secondary author listed in a commit (for example, a human + Copilot). These commits are counted as AI-assisted.
  • Active User – A developer who used AI assistance (e.g., Copilot) during the reporting period.
  • Acceptance Rate – The percentage of AI-generated suggestions that were accepted into code.
  • Lines Written – The total number of accepted AI-generated lines in the codebase.
  • Excluded activity – Bots, chatbots, and automated system accounts are not included in AI Insights metrics.
  • Agent – A system that uses adaptive models (often ML/LLMs), adapts to context, and can plan beyond fixed scripts. Agents are co-authors when they contribute alongside human developers.
  • Bot – An automation that executes predefined, rule-based, or scripted tasks automatically. Pure bot-only activity is excluded from AI Insights metrics.

Agents Monitored in LinearB

LinearB currently monitors a wide range of AI agents, including but not limited to:

aider, atlassian code reviewer, bito, claude, codeant, codex, codegen, coderabbit, codota, copilot, cubic.dev, cursor, devin-ai, ellipsis, factory.ai, fine, gemini, gitlab duo, gitstream, google jules, graphite, greptile, jazzberry, korbit, meticulous, opencode, qodo, replit, rovo, sourcegraph, sourcery, sweep-ai, tabnine, tusk, what-the-diff, windsurf.
