
DevEx & Platform Engineering


Updated by heather.hazell

This guide is for Developer Experience and Platform leaders who want to reduce developer friction, remove bottlenecks, and make it easier for teams to ship quality software. It can be used alongside the PMO & Delivery Operations guide: DevEx focuses on fixing friction in the flow (how work moves), Platform focuses on paved paths and reliability (how teams use tooling and infra), and PMO focuses on protecting predictability and portfolio outcomes.

TL;DR for DevEx & Platform:

Use LinearB to find where developers struggle (handoffs, reviews, deployment, rework), fix that friction with standards, platform “paved paths,” and automation, and prove impact using delivery metrics, work breakdown, and (when enabled) surveys and AI insights.

Who this guide is for

DevEx and Platform teams succeed when they can prove that tooling, standards, and platform investments actually reduce friction and improve delivery outcomes. This guide assumes you:

  • Care about developer experience as a measurable, improvable system — not just anecdotes.
  • Partner closely with Engineering Managers, Tech Leads, and PMO.
  • Have access to LinearB Delivery & Quality metrics, and optionally: Work Breakdown, Developer Coaching, Surveys, gitStream, and AI Insights (if enabled in your workspace).

What you likely care about

Typical DevEx & Platform responsibilities include:

  • Reducing friction across the development lifecycle (coding, review, release).
  • Owning “paved paths” and internal platforms (golden paths for CI/CD, environments, testing).
  • Standardizing practices and tooling (branching, PR policies, CI/CD, automation).
  • Improving focus time and minimizing unnecessary interruptions and context switching.
  • Improving reliability of the platform (fewer flaky pipelines, faster feedback, stable environments).
  • Partnering with PMO and Engineering leaders to ensure changes in process and platform actually improve real metrics (Cycle Time, quality, predictability).

This guide helps you translate those responsibilities into a concrete DevEx program backed by data in LinearB.


Your recommended path

Use this guide to:

  • Benchmark developer friction using core delivery metrics (Cycle Time and its sub-metrics: Coding, Pickup, Review, Deploy Time).
  • Find where friction is highest in the lifecycle (handoffs, review, planning, deployment).
  • Pair delivery metrics with work-type signals (New vs Refactor vs Rework) to understand where effort is going.
  • Instrument DevEx with coaching and sentiment data (Developer Coaching and Surveys, if enabled).
  • Scale fixes with low-noise automations using gitStream.
  • Stand up a measurable DevEx program with KPIs tied to teams and initiatives.

Before you begin

Make sure the following are in place:

  • Git integration is connected and active.
  • Teams and contributors are configured so that delivery metrics reflect real teams.
  • Metrics calculation mode (Average, Median, P75, P90) is chosen and documented with EMs.
  • Optional but recommended for DevEx:
    • Work Breakdown is enabled (New / Refactor / Rework).
    • Developer Coaching is enabled for key teams, if available.
    • Surveys are available and configured, if your plan includes them.
    • gitStream and AI Insights are enabled, if available in your workspace.

Step 1: Baseline friction in your delivery flow

Goal: Understand where developers are waiting or struggling across the lifecycle.

1.1 — Start with Cycle Time and its sub-metrics

Where: Metrics → Delivery (Cycle Time, Coding, Pickup, Review, Deploy Time)

  1. Select a representative team (or group of teams) instead of the whole company.
  2. Set a recent time window (e.g., last 4–8 weeks) to reflect current behavior.
  3. Review:
    • Overall Cycle Time trend.
    • Breakdown into Coding, Pickup, Review, Deploy Time.
  4. Identify the slowest phase (where the largest share of Cycle Time is accumulating).
DevEx lens: Don’t start by blaming teams; start by asking: “Where does the system make it hard for developers to move work forward?”
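The “identify the slowest phase” check in step 4 is simple arithmetic once you have per-phase durations. A minimal sketch (the numbers below are hypothetical, not LinearB output):

```python
# Find where Cycle Time accumulates, given per-phase averages for one team.
# Durations in hours are illustrative placeholders, not real LinearB data.
phases = {"Coding": 18.0, "Pickup": 31.0, "Review": 22.0, "Deploy": 4.0}

cycle_time = sum(phases.values())
shares = {name: hours / cycle_time for name, hours in phases.items()}

# The slowest phase is the one with the largest share of total Cycle Time.
slowest = max(shares, key=shares.get)
print(f"Cycle Time: {cycle_time:.0f}h; slowest phase: {slowest} "
      f"({shares[slowest]:.0%} of Cycle Time)")
```

In this example Pickup dominates, which would point the investigation at review handoffs rather than coding itself.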

1.2 — Compare teams to avoid false positives

  • Look at a few teams with similar work types (e.g., feature teams vs. platform teams).
  • Note where one team experiences much higher Pickup or Review Time than peers.
  • Flag those gaps as candidate friction areas for later investigation.

Step 2: Add work-type context (New, Refactor, Rework)

Goal: Understand how much effort is going into new features vs. fixing or reshaping existing code.

Where: Metrics → Delivery / Quality (Work Breakdown, New Code, Refactor, Rework)

  1. Open the Work Breakdown view for your target team.
  2. Review the ratio of:
    • New Code — net new functionality.
    • Refactor — improving existing code.
    • Rework — recently changed code being changed again.
  3. Cross-check with Cycle Time:
    • High Rework + long Coding Time → friction in requirements or design.
    • High Refactor + long Review Time → friction around standards or legacy code.

Capture 2–3 clear observations per team, for example:

  • “Team A: long Review Time, lots of Refactor work in legacy service X.”
  • “Team B: high Rework on new features, Coding Time spikes near release cutoffs.”
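The cross-check heuristics in step 3 can be sketched as a small rule set. The thresholds and function below are assumptions for illustration, not LinearB definitions:

```python
# Map work-type ratios plus cycle-time phases to candidate friction areas.
# Thresholds (20% Rework, 30% Refactor, 24h phases) are illustrative only.
def friction_hypotheses(rework_pct, refactor_pct, coding_h, review_h):
    hypotheses = []
    if rework_pct > 0.20 and coding_h > 24:
        hypotheses.append("requirements/design friction "
                          "(high Rework + long Coding Time)")
    if refactor_pct > 0.30 and review_h > 24:
        hypotheses.append("standards/legacy friction "
                          "(high Refactor + long Review Time)")
    return hypotheses

# A team like "Team B" above: high Rework on new features, long Coding Time.
print(friction_hypotheses(rework_pct=0.28, refactor_pct=0.15,
                          coding_h=30, review_h=10))
```

Treat the output as hypotheses to validate with teams in Step 3, not as conclusions.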

Step 3: Combine metrics with coaching & surveys

Goal: Tie measured friction to how developers actually experience their work.

3.1 — Use Developer Coaching to spot workload and knowledge risks

Where: Developer Coaching

  • Identify developers or teams with consistent overload (too many PRs, reviews, or incidents).
  • Look for knowledge hot spots where a few people are always on critical paths.
  • Compare these patterns with high Cycle Time or Rework areas from earlier steps.

3.2 — Use Surveys to validate themes

Where: Surveys

  • Run a short survey focused on DevEx themes: planning clarity, review experience, tooling reliability, focus time.
  • Slice results by team and compare against their delivery metrics.
  • Use comments to add color to your metric-based hypotheses.

3.3 — Use AI Insights to spot patterns faster

Where: AI Insights

  • Review AI-surfaced patterns around slow reviews, large PRs, or risky changes.
  • Use these insights to prioritize which friction themes to tackle first.

Step 4: Scale fixes with standards and gitStream

Goal: Turn what works for one team into scalable, low-noise standards.

4.1 — Start with lightweight standards

  • Agree on PR sizing guidelines (e.g., preferred range, absolute maximum).
  • Standardize review expectations (e.g., target Pickup Time, review depth, required reviewers).
  • Align on branching and draft PR conventions to avoid polluting metrics.

4.2 — Use gitStream to automate guardrails

Where: gitStream (Managed or Self-Managed)

  • Apply low-noise automations to:
    • Flag oversized PRs for extra review.
    • Auto-label PRs based on risk or directory patterns.
    • Route reviews to the right code experts.
    • Use AI review and AI descriptions to speed up feedback loops.
  • Roll out automations to a small set of teams first, then expand once they’re tuned.
DevEx principle: Start from “helpful defaults”, not heavy-handed gates. The goal is to reduce friction, not create new approval bottlenecks.
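As an illustration, flagging oversized PRs might look like the following `.cm` sketch. The structure follows gitStream's documented format, but treat the threshold, label name, and exact filter/action spellings here as assumptions and verify them against the gitStream Automation Library before rolling anything out:

```yaml
# .cm — illustrative sketch only; confirm filters and actions against
# the gitStream Automation Library before adopting.
manifest:
  version: 1.0

automations:
  flag_oversized_pr:
    # Assumed threshold: flag PRs touching more than 20 files
    if:
      - {{ files | length > 20 }}
    run:
      - action: add-label@v1
        args:
          label: "oversized-pr"
```

A label-only rule like this is a “helpful default”: it surfaces the problem to reviewers without blocking the merge.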

Step 5: Turn improvements into a measurable DevEx program

Goal: Move from one-off fixes to a repeatable DevEx operating loop.

5.1 — Define DevEx KPIs

  • Choose a short list of metrics to own, for example:
    • Cycle Time and the slowest sub-metric (e.g., Pickup or Review Time).
    • PR Size distribution.
    • Rework Ratio for key services.
    • Developer sentiment on reviews and tooling (if Surveys are enabled).
  • Document a before/after baseline for each key initiative (e.g., new review policy, CI change).
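The before/after baseline can be kept honest with a tiny relative-change calculation. A sketch with hypothetical numbers (not real LinearB measurements):

```python
# Report the relative change of each KPI after an initiative.
def pct_change(before, after):
    return (after - before) / before

# Illustrative baseline vs. post-initiative values for one team.
baseline = {"Pickup Time (h)": 31.0, "Oversized PRs (%)": 22.0}
after = {"Pickup Time (h)": 19.0, "Oversized PRs (%)": 12.0}

for kpi, before_val in baseline.items():
    delta = pct_change(before_val, after[kpi])
    print(f"{kpi}: {before_val} -> {after[kpi]} ({delta:+.0%})")
```

Recording the baseline before the change ships is the important part; the arithmetic afterwards is trivial.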

5.2 — Review with PMO & Engineering leadership

  • Share a simple DevEx scorecard tying:
    • Friction you observed (metrics + sentiment).
    • Changes you made (standards, tools, automations).
    • Measured impact on Cycle Time, quality, and predictability.
  • Align with PMO & Delivery Operations on how DevEx work supports: forecast accuracy, commitment reliability, and portfolio priorities.

5.3 — Close the loop with teams

  • Show teams their own before/after trends instead of only company-level charts.
  • Highlight wins where friction dropped (e.g., Pickup Time faster, fewer oversized PRs).
  • Use that momentum to prioritize the next wave of DevEx investments.

Recommended DevEx & Platform operating rhythm

Weekly (DevEx / Platform + EMs)

  • Open Metrics → Delivery for 1–2 focus teams:
    • Check Cycle Time and the slowest sub-metric (Coding, Pickup, Review, Deploy Time).
    • Confirm whether any recent change (process, tooling, CI/CD) moved the needle.
  • Open Work Breakdown for those teams:
    • Note shifts in New / Refactor / Rework that signal friction (e.g., rising Rework).
  • Capture 1–2 concrete follow-ups for the week (e.g., experiment, standard tweak, CI fix).

Bi-weekly or per iteration (DevEx / Platform with team leads)

  • Review Developer Coaching (if enabled):
    • Look for overloaded reviewers or “knowledge hotspots”.
    • Agree where to rebalance ownership or add paved paths.
  • Review AI Insights (if enabled):
    • Scan for recurring patterns (slow reviews, large PRs, risky areas of the codebase).
    • Turn 1–2 patterns into small experiments (e.g., PR sizing guideline, new gitStream rule).

Monthly (DevEx / Platform + PMO / EMs)

  • Build a simple DevEx snapshot:
    • Key delivery trends for target teams (Cycle Time and slowest stage).
    • Work Breakdown patterns (how much is Refactor / Rework).
    • Top themes from Surveys and Developer Coaching (if enabled).
  • Agree on:
    • 1–2 DevEx initiatives to run (e.g., review policy change, CI reliability work, new paved path).
    • Clear before/after metrics for each initiative.

Quarterly (DevEx / Platform + Director / VP / PMO)

  • Use portfolio-level views (by team or service) in Metrics → Delivery / Quality to:
    • Identify where friction remains highest.
    • Prioritize which systems or services get the next wave of DevEx investment.
  • Review your DevEx scorecard:
    • What changed (standards, tools, automation, paved paths).
    • Measured impact on Cycle Time, quality, and reliability.
    • Where to expand gitStream patterns or platform “golden paths”.

Next steps

  • Use the PMO & Delivery Operations path to align DevEx changes with portfolio planning and commitment tracking.
  • Document 1–2 DevEx initiatives per quarter with clear owners, timelines, and KPIs.
  • Explore additional automation patterns in the gitStream Automation Library.
  • Share key learnings in your internal Engineering or DevEx forum to spread practices across teams.
