Director of Engineering

Updated by heather.hazell

This guide is for Directors of Engineering who own multiple teams or domains and need a repeatable way to connect delivery, quality, and investment decisions. It uses Metrics → Delivery, Metrics → Quality, Teams → Iterations, and (where enabled) Resource Allocation and gitStream to create a portfolio view you can use with VPs, PMO, and DevEx.

TL;DR – Director of Engineering:

  • Use Delivery and Quality metrics to compare teams and services, not individuals.
  • Use Iterations to see how planning, unplanned work, and predictability vary across teams.
  • Use Resource Allocation (if available) to connect headcount to initiatives and cost.
  • Support gitStream and AI adoption where they remove friction or de-risk flow at scale.
  • Run a light but consistent monthly portfolio review with your managers.

Start here in 15 minutes

  1. Pick 3–5 teams you directly manage.
  2. In Metrics → Delivery, set the window to the last quarter and note, per team:
    • Cycle Time and which sub-metric is slowest.
  3. For each team, open Teams → Iterations → Completed and ask:
    • “Roughly how much work was unplanned?”
    • “Is carryover a pattern or an exception?”
  4. Classify each team quickly:
    • Healthy, Stretched, or At risk.
  5. Pick one team in each bucket and write one question you’ll ask that EM in your next 1:1 (e.g., “What’s the smallest change we can try to reduce unplanned work?”).
  6. Use these notes as the first slide or section in your next monthly review.

Who this guide is for

  • Directors owning multiple teams, domains, or services.
  • Leaders responsible for both outcomes (delivery, quality) and investment choices (where teams spend time).
  • Partners to VP Eng, PMO/Delivery Ops, and DevEx/Platform.

What you likely care about

  • Which teams or domains are consistently healthy vs. struggling on delivery and quality.
  • How unplanned work and incidents affect commitments across your portfolio.
  • Whether your headcount and investments match strategic priorities.
  • Where standards and automation (gitStream, AI) can scale improvement.

Before you begin

  • Teams and services are mapped correctly in LinearB.
  • Managers understand which metrics you care about and why (Cycle Time stages, key quality/reliability indicators).
  • If available, Resource Allocation is configured with relevant custom fields (initiatives, project types, etc.).
  • If your org uses gitStream or AI Insights, you know which teams are piloting them.

Step 1: Build a simple delivery & quality portfolio view

Goal: Compare teams on a few consistent, system-level signals.

Where: Metrics → Delivery and Metrics → Quality

  1. Select a timeframe (e.g., last quarter) and a consistent aggregation mode (Average, Median, or a percentile, as agreed with your org).
  2. Review per-team trends for:
    • Cycle Time and its slowest sub-metric.
    • High-level quality / reliability signals where available.
  3. Classify teams into rough buckets:
    • Healthy: stable delivery, predictable quality.
    • Stretched: growing unplanned work, slowing stages.
    • At risk: repeated incidents or highly variable delivery.
Director lens: Your job is to ask “What’s making this system behave this way?” and equip EMs to own the improvement story.
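The Healthy / Stretched / At risk bucketing above is a judgment call, but it helps to be explicit about the thresholds you're using so EMs can challenge them. Here is a minimal sketch of one such heuristic; the signal names and cutoffs are illustrative assumptions, not LinearB defaults:

```python
# Hypothetical bucketing heuristic mirroring the Healthy / Stretched /
# At risk labels above. All thresholds are illustrative assumptions;
# calibrate them with your own portfolio data.

def classify_team(cycle_time_trend_pct: float, unplanned_pct: float,
                  incidents_per_sprint: float) -> str:
    """Return a rough health bucket from three system-level signals.

    cycle_time_trend_pct: quarter-over-quarter change in Cycle Time (%)
    unplanned_pct: share of iteration work that was unplanned (%)
    incidents_per_sprint: average incident count per sprint
    """
    # At risk: repeated incidents or sharply slowing delivery.
    if incidents_per_sprint >= 2 or cycle_time_trend_pct > 30:
        return "At risk"
    # Stretched: growing unplanned work or a slowing trend.
    if unplanned_pct > 25 or cycle_time_trend_pct > 10:
        return "Stretched"
    return "Healthy"

print(classify_team(5, 10, 0.5))   # stable delivery, little unplanned work
print(classify_team(15, 30, 1))    # unplanned work creeping up
print(classify_team(40, 20, 3))    # repeated incidents
```

Writing the rule down (even this crudely) keeps classifications consistent across teams and review cycles.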

Step 2: Use Iterations to understand predictability

Goal: See how planning and unplanned work differ across teams.

Where: Teams → Iterations (Completed)

  • For each team, review a few recent iterations and note:
    • Planned vs. delivered work.
    • Level and sources of unplanned work.
    • Patterns of carryover across sprints.
  • Use this to draw distinctions like:
    • “Team A has stable commitments but lots of incidents.”
    • “Team B over-commits and regularly pushes work out.”
  • Turn those insights into coaching questions for EMs: “What’s the smallest process change we can try next sprint to improve predictability?”
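The iteration review above boils down to a few ratios. This sketch shows the arithmetic; the field names are assumptions for illustration, not LinearB API fields:

```python
# Illustrative summary of one iteration's commitment health, using
# issue counts. "Predictability" here is simply delivered / planned.

def iteration_summary(planned: int, delivered: int, unplanned: int) -> dict:
    """Summarize planned vs. delivered work and unplanned share."""
    total_done = delivered + unplanned
    return {
        # Share of the original commitment that actually shipped.
        "predictability": round(delivered / planned, 2) if planned else 0.0,
        # Share of completed work that was never planned.
        "unplanned_share": round(unplanned / total_done, 2) if total_done else 0.0,
        # Committed items pushed to a later sprint.
        "carryover": max(planned - delivered, 0),
    }

print(iteration_summary(planned=20, delivered=14, unplanned=6))
```

Comparing these three numbers across a few sprints makes patterns like "Team B over-commits and regularly pushes work out" concrete rather than anecdotal.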

Step 3: Connect investments using Resource Allocation (if enabled)

Goal: Show where your org actually spends time and cost.

Where: Resource Allocation dashboard and related Cost/Allocation reports

  • Slice by initiatives, projects, or investment categories to see FTE spread.
  • Compare:
    • Where teams spend time vs. your strategic priorities.
    • Work type mix (e.g., new features vs. maintenance) for key areas.
  • Use this to support decisions like:
    • Rebalancing teams or roadmaps.
    • Justifying more capacity for foundational or reliability work.
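The comparison Step 3 describes is essentially "actual FTE mix minus target mix." A minimal sketch, with made-up initiative names and numbers:

```python
# Hypothetical gap analysis: where FTEs actually go vs. the strategic
# target mix. Initiative names and figures are invented for illustration.

def allocation_gaps(actual_ftes: dict, target_pct: dict) -> dict:
    """Return the percentage-point gap (actual - target) per initiative."""
    total = sum(actual_ftes.values())
    return {
        name: round(100 * ftes / total - target_pct[name], 1)
        for name, ftes in actual_ftes.items()
    }

gaps = allocation_gaps(
    actual_ftes={"new_features": 18, "maintenance": 9, "reliability": 3},
    target_pct={"new_features": 50, "maintenance": 25, "reliability": 25},
)
print(gaps)  # negative values flag under-invested initiatives
```

In this invented example, reliability work is 15 points under target, which is exactly the kind of gap that justifies rebalancing a roadmap or asking for foundational capacity.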

Step 4: Support standards, gitStream, and AI at scale

Goal: Help managers and DevEx apply low-noise standards where they matter most.

  • Identify 1–2 cross-team bottlenecks (e.g., slow reviews, large PRs, high rework).
  • Partner with DevEx / Platform to:
    • Define simple standards (PR size, review SLAs).
    • Use gitStream and AI features (where enabled) to automate guardrails.
  • Ask teams to measure:
    • Before/after trends on Cycle Time or quality.
    • Developer sentiment where Surveys or Coaching are available.
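As a concrete example of a low-noise guardrail, a PR-size standard can be automated with a gitStream `.cm` file. This is a hedged sketch only: the threshold, label name, and exact context variables are illustrative assumptions, so check gitStream's own documentation for the variables and actions your version supports.

```yaml
# -*- mode: yaml -*-
manifest:
  version: 1.0

automations:
  # Flag oversized PRs so reviewers can request a split before deep review.
  label_large_change:
    if:
      - {{ branch.diff.size > 200 }}  # changed lines; threshold is illustrative
    run:
      - action: add-label@v1
        args:
          label: 'large-change'
```

Because the rule runs on every PR, the standard scales across teams without asking each EM to police it manually, and the before/after Cycle Time trend tells you whether it helped.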

Recommended portfolio rhythm

Monthly (Director-led, with EMs)

  • Review a portfolio snapshot:
    • Delivery and quality trends by team.
    • Key Iteration patterns (predictability, unplanned work).
    • Top 1–2 investment questions from Resource Allocation (if enabled).
  • For each team, agree on:
    • One system improvement to test (process, standard, automation).
    • How you’ll measure success in the next cycle.

Quarterly (Director-led, with VP / PMO / DevEx)

  • Use portfolio data to inform:
    • Roadmap and investment tradeoffs.
    • Org changes (team splits, new ownership).
    • Where to expand standards, gitStream, and AI.

Recommended next articles

  • DevEx & Platform Engineering
  • Engineering Manager