CTO
This guide is for CTOs and Heads of Engineering who need a clear, executive-level view of engineering health, delivery predictability, and where to invest next. It focuses on using LinearB to align delivery reality with business priorities, while keeping metrics focused on system improvement – not individual scoring.
Time Required: 8–12 minutes to orient, 20–30 minutes to set an exec cadence
Difficulty: Easy
TL;DR
- Use Metrics → Delivery for org-level flow trends and variability by team.
- Use Teams → Iterations (Completed) to understand planning stability and unplanned work patterns.
- Use Resource Allocation (and, if used, Cost Cap) for FTE and investment alignment.
- Use Surveys and Developer Coaching to add sentiment and workload context.
- Use gitStream and AI Insights to scale low-noise standards where they matter most.
Start here in 15 minutes
1) 5-minute health scan
- Open Metrics → Delivery:
  - View Cycle Time by team for the last 4–8 weeks.
  - Note teams with the slowest Cycle Time or highest variance.
2) 5-minute predictability check
- Open Teams → Iterations (Completed) for 2–3 key teams:
  - Compare planned vs. completed work and carryover.
  - Look at unplanned and unlinked work shares.
3) 5-minute investment check
- Review your latest Resource Allocation view or report:
  - Check FTE distribution across products, initiatives, or investment categories.
  - Confirm it roughly matches the strategy you expect.
Capture 3 bullets for your next leadership meeting: (1) biggest delivery constraint, (2) biggest predictability risk, (3) biggest misalignment between investment and strategy.
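The health-scan step above boils down to ranking teams by average cycle time and by variability. If you export per-team cycle time data (e.g., as CSV from LinearB), the same scan can be scripted; the team names, sample values, and data structure below are illustrative assumptions, not LinearB's export schema.

```python
from statistics import mean, stdev

# Hypothetical per-team cycle time samples (hours) over the last 4–8 weeks.
# In practice these would come from a LinearB export; the shape here is an
# assumption for illustration only.
cycle_times = {
    "payments": [72, 81, 68, 95, 70],
    "platform": [30, 28, 33, 31, 29],
    "mobile":   [44, 80, 39, 120, 51],
}

def health_scan(samples):
    """Rank teams by mean cycle time and by variability (sample stdev)."""
    stats = {
        team: (mean(hours), stdev(hours))
        for team, hours in samples.items()
    }
    slowest = max(stats, key=lambda t: stats[t][0])
    most_variable = max(stats, key=lambda t: stats[t][1])
    return stats, slowest, most_variable

stats, slowest, most_variable = health_scan(cycle_times)
print(f"Slowest team: {slowest}, most variable: {most_variable}")
```

Note that the slowest team and the most variable team are often different: a slow-but-steady team is a throughput question, while high variance is usually a predictability question.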
Overview
At the CTO level, your focus is consistent outcomes across the organization: predictable delivery, healthy engineering systems, and measurable returns on platform and AI investments. LinearB connects planning data with real Git activity so you can guide strategic priorities with evidence and hold teams accountable to systems and practices, not vanity metrics.
What you likely care about
- Where are the biggest system constraints across teams?
- How confident should we be in our current delivery commitments?
- Is unplanned work consistently eroding capacity or roadmap execution?
- Do we need enablement, platform investment, or policy changes to unblock teams?
- Which standards can we scale without adding process overhead?
- Are we investing engineering effort in the right products, initiatives, and categories?
Where to spend time in LinearB
Metrics → Delivery (Org-level flow)
- View Cycle Time and its stages (Coding, Pickup, Review, Deploy Time) by team.
- Look for:
  - Teams with materially slower Cycle Time than peers.
  - High variance or recent trend breaks (improving or degrading).
- Use this as your primary signal for system constraints and improvement targets.
Teams → Iterations (Completed) (Planning stability)
- Review for key teams:
  - Planned vs. completed work per iteration.
  - Carryover across sprints.
  - Share of unplanned and unlinked work.
- Use this to understand predictability and whether roadmap work is consistently protected.
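The predictability signals listed above reduce to a few simple ratios. As a sketch, here is one way to compute them from completed-iteration data; the field names and the denominators chosen are assumptions for illustration, not LinearB's data model.

```python
# Hypothetical completed-iteration data mirroring the signals described above
# (planned vs. completed, carryover, unplanned and unlinked shares).
iteration = {
    "planned_issues": 40,
    "completed_issues": 30,
    "carryover_issues": 8,    # planned but pushed to the next sprint
    "unplanned_issues": 6,    # completed work added mid-sprint
    "unlinked_prs": 5,        # Git activity with no linked issue
    "total_prs": 50,
}

def predictability(it):
    """Compute planning-stability ratios for one completed iteration."""
    done = it["completed_issues"]
    return {
        "completion_rate": done / it["planned_issues"],
        "carryover_rate": it["carryover_issues"] / it["planned_issues"],
        "unplanned_share": it["unplanned_issues"] / (done + it["unplanned_issues"]),
        "unlinked_share": it["unlinked_prs"] / it["total_prs"],
    }

print(predictability(iteration))
```

A completion rate well below 1.0 combined with a rising unplanned share usually signals eroding predictability (roadmap work not being protected) rather than a raw capacity problem.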
Resource Allocation (Investment alignment)
- Use the Resource Allocation dashboard or report to see:
  - FTE distribution across products, initiatives, epics, or issue types.
  - How much capacity is going to maintenance vs. new value.
- Compare actuals to your strategic targets. If they diverge, you have an execution gap.
Cost Capitalization (If applicable)
- If you use LinearB’s Cost Cap reporting, review:
  - Capitalizable vs. non-capitalizable work by your chosen dimensions (e.g., initiative, project).
  - Trends that affect finance and audit conversations.
- Use this to align with Finance on how engineering time is reported, not to drive day-to-day delivery decisions.
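At its core, cost capitalization reporting is rule-based classification of effort. Real Cost Cap rules live in LinearB configuration and follow your accounting policy; the rule set, work item types, and numbers below are a minimal illustrative sketch.

```python
# Illustrative rule set: which work types count as capitalizable new-value
# work vs. keep-the-lights-on work. Your Finance team's rules will differ.
CAPITALIZABLE_TYPES = {"feature", "enhancement"}

def classify(items):
    """Split FTE-weeks of effort into capitalizable vs. non-capitalizable."""
    totals = {"capitalizable": 0.0, "non_capitalizable": 0.0}
    for item in items:
        bucket = ("capitalizable" if item["type"] in CAPITALIZABLE_TYPES
                  else "non_capitalizable")
        totals[bucket] += item["fte_weeks"]
    return totals

work = [
    {"type": "feature", "fte_weeks": 6.0},
    {"type": "bug", "fte_weeks": 2.5},
    {"type": "enhancement", "fte_weeks": 3.0},
    {"type": "maintenance", "fte_weeks": 1.5},
]
print(classify(work))
```

Because the split derives from the same allocation data used elsewhere, the finance numbers stay consistent with the delivery numbers, which is what makes the reporting audit-friendly.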
Surveys & Developer Coaching (Human + workload context)
- Use Surveys (if enabled) to capture:
  - Per-team sentiment on planning clarity, review experience, tooling, and focus time.
- Use Developer Coaching (if enabled) to identify:
  - Overloaded developers and knowledge hotspots on critical paths.
- Bring these into exec reviews to explain why certain trends exist, not just what they are.
gitStream & AI Insights (Scaling standards)
- Use AI Insights (if enabled) to surface recurring patterns:
  - Slow reviews, large PRs, or risky changes concentrated in specific services or teams.
- Use gitStream to scale low-noise standards, for example:
  - PR size and risk guardrails.
  - Routing reviews to the right code experts.
  - Lightweight AI review and AI descriptions to speed feedback.
- Focus these on systems, not individuals: “How we review” vs. “who is slow”.
Investment, allocation & cost capitalization
At the org level, speed is only part of the story. You also need a defensible view of where engineering effort goes and whether it matches strategy and financial expectations.
- Resource Allocation:
  - Breaks down effort (FTE) across projects, epics, initiatives, issue types, and other configured work units.
  - Use it to validate whether actual investment matches your top priorities.
- Investment Strategy / categories:
  - If your Resource Allocation setup includes an investment strategy or category field, track how much FTE goes to new features, platform, maintenance, risk, or compliance.
  - Use this in budget and roadmap conversations to discuss tradeoffs with data.
- Cost Capitalization (if used):
  - Reduces manual finance work by classifying capitalizable vs. non-capitalizable effort according to your rules.
  - Supports audit-ready reporting that ties back to the same underlying allocation model.
Review these trends monthly and before quarterly planning or budget reviews.
Recommended CTO operating rhythm
Monthly (CTO + Directors / EMs)
- Review a delivery snapshot from Metrics → Delivery:
  - Org-level Cycle Time and stage trends by team.
  - Teams with the highest variance or noticeable degradation.
- Review Teams → Iterations (Completed) for 2–3 key teams:
  - Planned vs. completed work, carryover, and unplanned work share.
  - Agree on 1–2 system changes (process, standards, platform) to test next.
- Check the latest Resource Allocation view:
  - Confirm FTE distribution matches strategy (e.g., core products, new bets, platform).
  - Call out any major misalignments for follow-up.
Quarterly (CTO + VP / PMO / Finance / DevEx)
- Use Resource Allocation and (if applicable) Cost Cap to:
  - Review the investment mix across products, initiatives, and categories.
  - Align on where to increase or reduce engineering spend.
- Use Metrics → Delivery and Iterations (Completed) to:
  - Evaluate predictability and throughput of strategic teams.
  - Set realistic delivery expectations for the next quarter.
- Agree on:
  - The top 2–3 engineering system initiatives (e.g., review standards, platform reliability, AI enablement).
  - Clear before/after metrics for each initiative.
Ad hoc (When trends shift)
- When you see a significant change (positive or negative) in Cycle Time or predictability:
  - Drill down by team in Metrics → Delivery.
  - Check Iterations (Completed) for scope and unplanned work changes.
  - Use Surveys, Developer Coaching, and AI Insights (if enabled) for qualitative and workload context.
Recommended next articles
AI Enablement Lead/Manager
DevEx & Platform Engineering