
LinearB AI Insights FAQ

LinearB AI Insights provides complete visibility into how AI tools affect your engineering performance. This FAQ explains how AI activity is detected, how metrics like acceptance rates and DORA indicators are calculated, and how integrations such as MCP and AI Reviews work. You’ll also find guidance on security, data handling, and configuration best practices — everything you need to understand and maximize the value of AI-assisted development with LinearB.

Updated by Steven Silverstone

AI Insights helps you understand how AI tools influence your engineering performance — from code generation and review to developer trust and DORA metrics.

This FAQ answers common questions about AI detection, metrics, reviews, security, MCP access, and more.

  • AI Insights FAQ
    Learn how LinearB detects AI-generated code and what AI Rule Files do.
  • AI Metrics FAQ
    Understand Acceptance Rates, filtering options, and how AI metrics connect to DORA performance.
  • Security and Data Handling FAQ
    See how LinearB protects your code by processing only metadata, never your source code.
  • MCP and Integration FAQ
    Explore how to use the Model Context Protocol (MCP) to connect tools like Claude or ChatGPT and query LinearB data.
  • AI Review FAQ
    Learn how to customize LinearB’s AI code reviews, add guidelines, and align reviews with your coding standards.
  • Pricing FAQ
    Check if there’s an additional cost for AI Insights and Surveys features.
  • Miscellaneous FAQ
    Find answers to additional questions, including data visibility, language filtering, and role-based access controls.

AI Insights FAQ

What are AI Rule Files?

AI Rule Files are configuration files that provide instructions to AI agents or tools for code generation and modification. They define project-specific structures, coding standards, best practices, and constraints, ensuring AI output aligns with your organization’s development guidelines.

How does LinearB identify AI-generated code?

LinearB detects AI involvement by analyzing metadata signals in your connected Git repositories:

  1. Commit co-authors — Identifies AI agents listed in commit messages.
  2. AI comments — Detects comments generated by AI agents in pull requests.
  3. AI-authored PRs — Finds pull requests opened by known AI agents.

This approach enables AI detection across commits, authored PRs, and rule files, without requiring additional integrations.
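
As a rough illustration of the first signal, the sketch below scans a local repository’s history for commits that carry a Co-authored-by trailer naming a coding agent. The agent names in the pattern are examples only; LinearB maintains its own detection logic and list of known agents.

```python
# Sketch: count commits whose messages include an AI co-author trailer.
# The agent names below are illustrative assumptions, not LinearB's list.
import subprocess

AI_AGENT_PATTERN = r"Co-authored-by:.*(copilot|claude|cursor|devin)"

hashes = subprocess.run(
    ["git", "log", "-i", "--extended-regexp",
     f"--grep={AI_AGENT_PATTERN}", "--format=%H"],
    capture_output=True, text=True, check=True,
).stdout.split()

print(f"Commits with an AI co-author trailer: {len(hashes)}")
```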

Do you show rejection reasons for AI-suggested code?

No. The AI Insights dashboard displays Acceptance Rates, which represent how often AI suggestions are approved; the remainder represents suggestions that were not accepted. For example, a 72% Acceptance Rate implies that roughly 28% of suggestions were rejected.

How does LinearB distinguish between human and AI-written code?

For general detection, LinearB identifies AI involvement through commit messages and co-author tags.

For LinearB AI Review, the system also tracks suggested fixes in the relevant code areas. When a fix is applied, it contributes to the Acceptance Rate metric.
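
In practice, the Acceptance Rate for AI Review is the share of suggested fixes that were applied. A minimal sketch with hypothetical records:

```python
# Sketch: acceptance rate as applied fixes over suggested fixes.
# The records below are hypothetical; LinearB derives them from the
# suggested fixes it tracks in your pull requests.
suggestions = [
    {"pr": 101, "applied": True},
    {"pr": 101, "applied": False},
    {"pr": 102, "applied": True},
    {"pr": 103, "applied": True},
]

applied = sum(1 for s in suggestions if s["applied"])
acceptance_rate = applied / len(suggestions)   # 0.75 -> 75% accepted
rejected_share = 1 - acceptance_rate           # 0.25 -> share not applied
print(f"Acceptance rate: {acceptance_rate:.0%}")
```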

How is AI usage tied to engineering impact?

LinearB labels PRs co-authored by AI to measure their effect on quality and throughput. These insights are reflected in your DORA metrics (e.g., Change Failure Rate, Deployment Frequency), showing how AI adoption influences delivery performance.

How can we measure AI trust and adoption among developers?

Start with a gradual rollout, such as enabling AI PR Descriptions, and collect feedback from developers.

Monitor AI Acceptance Rates over time in the AI Insights Dashboard. Increasing acceptance rates often indicate growing trust and comfort with AI-assisted development.

AI Metrics FAQ

Can we filter AI metrics by team or project?

Currently, you can filter by team, service, repository, and PR label. Filtering by project is on the roadmap.

What does the baseline in the AI Review Dashboard represent?

The baseline shows non-AI-generated code during the selected reporting period.

This allows side-by-side comparisons between AI-assisted and manual work.

How can we analyze AI co-authored tags from git history at scale?

Use LinearB’s AI labeling feature via gitStream to identify and aggregate AI-coauthored PRs.

You can view this data across your organization or by team.

In future releases, tracking will be automatic; no manual labeling required.

Can we compare DORA metrics between AI-assisted and non-AI-assisted work?

Yes. Using gitStream labels, you can compare DORA metrics such as Cycle Time and Deployment Frequency across AI-assisted vs. manual development.

  • Essentials plan: Cycle Time, Deployment Frequency
  • Enterprise plan: All four DORA metrics (including CFR and MTTR)
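
If you export PR-level data yourself, a quick comparison might look like the sketch below; the records, the "ai-assisted" label, and the cycle-time field are hypothetical placeholders, and the AI Insights dashboard performs this comparison for you.

```python
# Sketch: comparing mean Cycle Time for AI-labeled vs. other PRs.
# All data and field names are hypothetical placeholders.
from statistics import mean

prs = [
    {"labels": ["ai-assisted"], "cycle_time_hours": 18},
    {"labels": ["ai-assisted"], "cycle_time_hours": 26},
    {"labels": ["bugfix"],      "cycle_time_hours": 41},
    {"labels": [],              "cycle_time_hours": 35},
]

ai = [p["cycle_time_hours"] for p in prs if "ai-assisted" in p["labels"]]
manual = [p["cycle_time_hours"] for p in prs if "ai-assisted" not in p["labels"]]

print(f"AI-assisted mean cycle time: {mean(ai):.1f}h")
print(f"Other PRs mean cycle time:   {mean(manual):.1f}h")
```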

Can LinearB backfill AI Insights data?

Yes. LinearB can backfill historical data on day one, providing immediate visibility into past AI activity.

Security and Data Handling FAQ

Does the AI Insights Dashboard require gitStream?

No. AI Insights automatically detects AI-generated activity without requiring gitStream.

Where do Acceptance Rate numbers come from?

Acceptance Rates are sourced directly from the Copilot and Claude APIs.
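
For Copilot, for instance, an acceptance rate can be derived from the suggestion and acceptance counts exposed by the vendor’s usage API. The endpoint and field names in the sketch below follow GitHub’s Copilot usage metrics API but are assumptions here and may differ from what LinearB queries internally.

```python
# Sketch: deriving an acceptance rate from Copilot usage metrics.
# Endpoint and field names are assumptions based on GitHub's Copilot
# usage API; the organization slug and token are placeholders.
import requests

ORG = "your-org"
TOKEN = "<token with Copilot metrics access>"

resp = requests.get(
    f"https://api.github.com/orgs/{ORG}/copilot/usage",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
)
resp.raise_for_status()
days = resp.json()

suggested = sum(d.get("total_suggestions_count", 0) for d in days)
accepted = sum(d.get("total_acceptances_count", 0) for d in days)
print(f"Acceptance rate: {accepted / suggested:.1%}" if suggested else "No data")
```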

What data does LinearB collect?

LinearB only processes metadata signals (such as commit authors, PR comments, and rule file names).

Your source code is never indexed or stored.

How secure is this data?

All processing occurs within your authorized Git provider connections.

LinearB never transfers or stores raw code content.

MCP and Integration FAQ

What tools are available through the MCP server?

LinearB’s MCP server exposes multiple API endpoints for structured data access (a client sketch follows the list), including:

  • metrics
  • repositories
  • contributors
  • releases
  • incidents
  • issues
  • services
  • users
  • teams
  • branches
  • pull_requests
  • pm_entities
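
Any MCP-capable client can enumerate these tools once it is pointed at the server. The sketch below uses the reference Python MCP SDK (the `mcp` package); the launch command is a placeholder, so substitute the command from LinearB’s MCP setup instructions.

```python
# Sketch: listing the LinearB MCP server's tools with the Python MCP SDK.
# The launch command below is a placeholder assumption.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(command="linearb-mcp-server", args=[])

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.list_tools()
            for tool in result.tools:
                print(tool.name)  # e.g. metrics, repositories, teams, ...

asyncio.run(main())
```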

Does MCP enforce permissions or role-based access?

Not yet, but permissions-scoped access is in development.

Can I connect Claude or ChatGPT to LinearB’s MCP?

Yes — Claude (and any tool supporting local MCP) can connect directly.

ChatGPT will be supported once HTTP + OAuth integration is released.

Does MCP only access Git data?

No. MCP also accesses project management information such as Lead Time and will soon support more PM data from tools like Jira.

AI Review FAQ

How can I fine-tune AI Reviews?

Go to Settings > AI Tools > AI Review and edit your Guidelines.

You can add rules such as “Ignore formatting changes” or “Focus on security improvements.”

Can AI Reviews follow my organization’s coding standards?

Yes. You can customize review behavior and criteria in Settings > AI Tools > LinearB AI & Automations > AI Reviews > Edit.

Self-managed users can refer to the Configuring AI Reviews documentation for setup steps.

Pricing FAQ

Is there an additional cost for AI Insights or Surveys?

No. Both features are included for all LinearB customers.

Miscellaneous FAQ

Do you provide a language-level breakdown of AI usage?

Not yet. You can currently filter results by team, service, repository, or PR label.

Can I view individual developer data?

Yes. LinearB supports both team-level and individual-level views.

You can disable individual visibility through role-based access control if desired.
