Lead Time for Changes

November 10, 2025 · By The Art of CTO · 10 min read

Measure the time from code commit to production deployment. A critical DORA metric that reveals bottlenecks in your development pipeline.

Type: velocity
Tracking: daily
Difficulty: medium
Measurement: Time from commit to production (hours/days)
Target Range: Elite: < 1 hour | High: 1 day to 1 week | Medium: 1 week to 1 month | Low: > 1 month
Recommended Visualizations: histogram, line-chart, percentile-chart
Data Sources: GitHub, GitLab, Jira, CI/CD pipelines

Overview

Lead Time for Changes measures the elapsed time from when code is committed to when it's successfully running in production. It's one of the four key DORA metrics and directly indicates how quickly your team can deliver value to customers.

Why It Matters

  • Customer satisfaction: Faster fixes and features improve user experience
  • Competitive advantage: Respond to market changes quickly
  • Developer productivity: Short feedback loops increase learning
  • Business agility: Pivot and experiment rapidly
  • Cost reduction: Shorter cycles reduce context switching and waste

How to Measure

Start Point

Choose one based on your workflow:

  • Commit time: When code is committed to version control (most common)
  • PR creation: When pull request is opened
  • PR approval: When code review is completed
  • Merge time: When PR is merged to main branch

End Point

  • Production deployment: When code is running in production (standard)
  • Release to users: When feature flag is enabled
  • Customer access: When customers can actually use the feature

Calculation

Lead Time = Production Deployment Time - Commit Time

Example:
- Commit: Monday 9:00 AM
- Deploy: Monday 2:00 PM
- Lead Time: 5 hours
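
In code, the calculation is a single timestamp subtraction. A minimal Python sketch using the example's timestamps (the concrete date is made up for illustration):

from datetime import datetime, timezone

commit_time = datetime(2025, 11, 10, 9, 0, tzinfo=timezone.utc)   # Monday 9:00 AM
deploy_time = datetime(2025, 11, 10, 14, 0, tzinfo=timezone.utc)  # Monday 2:00 PM

lead_time_hours = (deploy_time - commit_time).total_seconds() / 3600
print(f"Lead Time: {lead_time_hours:.0f} hours")  # Lead Time: 5 hours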

Granular Tracking

Break down lead time into stages:

Total Lead Time = Code Review + CI/CD + Testing + Deployment + Verification

Example breakdown:
- Code Review: 2 hours 30 minutes
- CI Build: 15 minutes
- Automated Tests: 30 minutes
- Manual QA: 1 hour
- Deployment: 15 minutes
- Verification: 30 minutes
────────────────────────
Total: 5 hours
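
As a quick consistency check, the same breakdown in Python, summing the illustrative stage durations above:

from datetime import timedelta

stages = {
    'Code Review':     timedelta(hours=2, minutes=30),
    'CI Build':        timedelta(minutes=15),
    'Automated Tests': timedelta(minutes=30),
    'Manual QA':       timedelta(hours=1),
    'Deployment':      timedelta(minutes=15),
    'Verification':    timedelta(minutes=30),
}

total = sum(stages.values(), timedelta())
for name, duration in stages.items():
    print(f"{name:<16} {duration}  ({duration / total:.0%})")
print(f"Total: {total}")  # Total: 5:00:00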

Recommended Visualizations

1. Percentile Chart (P50, P75, P95)

Best for: Focusing on consistent performance

Y-axis: Lead time (hours)
X-axis: Time (weeks)
Lines: P50 (median), P75, P95
Insight: Shows if improvements affect majority vs. outliers

📊 Lead Time Percentiles Over Time

Sample data showing consistent improvement across all percentiles. The green dashed line represents the target threshold (< 24 hours). P50 has improved from 18 to 8 hours.
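
A minimal matplotlib sketch of this chart, using synthetic lead-time data (the lognormal parameters are illustrative, not real benchmarks):

import numpy as np
import matplotlib.pyplot as plt

# Synthetic weekly lead-time samples (hours), trending downward
rng = np.random.default_rng(42)
weeks = np.arange(12)
samples = [rng.lognormal(mean=3 - 0.08 * w, sigma=0.6, size=50) for w in weeks]

for p, style in [(50, '-'), (75, '--'), (95, ':')]:
    plt.plot(weeks, [np.percentile(s, p) for s in samples], style, label=f'P{p}')
plt.axhline(24, color='green', linestyle='--', label='Target (< 24h)')
plt.xlabel('Week')
plt.ylabel('Lead time (hours)')
plt.legend()
plt.show()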

2. Funnel/Waterfall Chart

Best for: Identifying bottlenecks

Stages: Commit → Review → Build → Test → Deploy
Width: Time spent in each stage
Insight: Shows where time is wasted

🔍 Bottleneck Analysis

Code Review is the primary bottleneck at 2.5 hours (about 44% of total cycle time). Focus optimization efforts here for maximum impact.

Target Ranges (DORA Benchmarks)

| Performance Level | Lead Time for Changes |
|-------------------|------------------------|
| Elite | Less than one hour |
| High | Between one day and one week |
| Medium | Between one week and one month |
| Low | More than one month |
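
If you want to encode these bands in a dashboard, a minimal sketch (note that the published bands leave a gap between one hour and one day; here anything under one week that is not Elite counts as High):

def dora_level(lead_time_hours: float) -> str:
    """Map a lead time in hours onto the DORA bands in the table above."""
    if lead_time_hours < 1:
        return 'Elite'
    if lead_time_hours <= 7 * 24:    # up to one week
        return 'High'
    if lead_time_hours <= 30 * 24:   # up to one month
        return 'Medium'
    return 'Low'

print(dora_level(0.5))  # Elite
print(dora_level(4.2))  # High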

Context Matters

  • Hotfixes: Target < 1 hour regardless of baseline
  • Small features: Elite to High
  • Large initiatives: May take weeks (but deploy incrementally)
  • Infrastructure changes: Consider safety over speed

How to Improve

1. Optimize Code Review

  • Set SLA: Review PRs within 4 hours
  • Smaller PRs: < 400 lines of code
  • Async review: Use detailed descriptions
  • CODEOWNERS: Auto-assign reviewers
  • Pair programming: Real-time review

2. Speed Up CI/CD

  • Parallel testing: Run tests concurrently
  • Caching: Cache dependencies and build artifacts
  • Incremental builds: Only rebuild changed components
  • Faster runners: Use more powerful CI machines
  • Selective testing: Run only relevant tests
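
As one concrete example, selective testing can be as simple as mapping changed paths to test targets. A sketch (the directory-to-suite mapping is hypothetical; adapt it to your repository):

import subprocess

# Hypothetical mapping from source prefixes to their test suites
MAPPING = {'api/': 'tests/api', 'web/': 'tests/web', 'billing/': 'tests/billing'}

# Files changed on this branch relative to main
changed = subprocess.run(
    ['git', 'diff', '--name-only', 'origin/main...HEAD'],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

targets = {tests for prefix, tests in MAPPING.items()
           if any(path.startswith(prefix) for path in changed)}

# Fall back to the full suite when nothing matches the mapping
print('pytest', *(sorted(targets) or ['tests/']))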

3. Automate Testing

  • Unit tests: Fast, run on every commit
  • Integration tests: Run in parallel
  • E2E tests: Run only critical paths in CI
  • Smoke tests: Quick production validation
  • Progressive rollout: Canary with automated monitoring

4. Streamline Approvals

  • Remove unnecessary approvals: Trust + verify
  • Automated compliance: Security scanning in CI
  • Time-boxed reviews: Auto-approve after X hours if tests pass
  • Deploy on green: Automatic deployment after tests pass

5. Continuous Deployment

  • Feature flags: Deploy code, enable features separately (see the sketch after this list)
  • Blue-green deployments: Zero-downtime releases
  • Canary releases: Gradual rollout with monitoring
  • Automated rollback: Revert on error threshold
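
The feature-flag pattern above, as a minimal illustration (the dict stands in for a real flag service such as LaunchDarkly or Unleash; all names are made up):

# Code is deployed to production, but the new path stays dark until the
# flag flips — deployment and release are decoupled.
FLAGS = {'new_checkout': False}

def new_checkout_flow(cart):
    return f"new flow: {len(cart)} items"

def legacy_checkout_flow(cart):
    return f"legacy flow: {len(cart)} items"

def checkout(cart):
    flow = new_checkout_flow if FLAGS.get('new_checkout') else legacy_checkout_flow
    return flow(cart)

print(checkout(['book', 'pen']))  # legacy flow until the flag is enabled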

Common Pitfalls

❌ Measuring the Wrong Thing

Problem: Only measuring average, missing outliers
Solution: Track P50, P75, P95, and P99 percentiles

❌ Ignoring Dependencies

Problem: Backend PRs wait for frontend, or vice versa
Solution: Track lead time per service, identify cross-team delays

❌ Cherry-Picking Easy Wins

Problem: Teams only deploy trivial changes to hit targets
Solution: Also track change size and business impact

❌ Sacrificing Quality

Problem: Rushing code to production to hit time targets
Solution: Monitor defect escape rate and change failure rate

❌ Not Including Wait Time

Problem: Only measuring active work time
Solution: Track total clock time including waits and handoffs

Implementation Guide

Phase 1: Instrumentation (Week 1-2)

# Tag commits with timestamps (sha, ISO commit time)
git log --pretty=format:"%H,%cI" > commits.csv

# Track deployments (ISO deploy time, deployed sha)
echo "$(date -Is),$(git rev-parse HEAD)" >> deployments.log

# Join the two files on sha to calculate lead time per deployment
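
A minimal pandas sketch of that join, assuming the two files written by the commands above:

import pandas as pd

# commits.csv:      sha,commit_time   (one row per commit)
# deployments.log:  deploy_time,sha   (one row per deployment)
commits = pd.read_csv('commits.csv', names=['sha', 'commit_time'])
deploys = pd.read_csv('deployments.log', names=['deploy_time', 'sha'])

# Normalize timestamps to UTC so the subtraction is well-defined
commits['commit_time'] = pd.to_datetime(commits['commit_time'], utc=True)
deploys['deploy_time'] = pd.to_datetime(deploys['deploy_time'], utc=True)

merged = deploys.merge(commits, on='sha')
merged['lead_time_hours'] = (
    (merged['deploy_time'] - merged['commit_time']).dt.total_seconds() / 3600
)
print(merged['lead_time_hours'].describe(percentiles=[0.5, 0.75, 0.95]))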

Phase 2: Visualization (Week 3)

  • Create dashboard with histogram and percentile charts
  • Set up alerts for P95 > threshold (see the sketch after this list)
  • Share weekly reports with teams
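
A minimal sketch of such an alert check (the 24-hour threshold is an assumed team target; wire the returned message into your paging or chat tool):

import numpy as np

P95_THRESHOLD_HOURS = 24  # assumed team target, not a DORA-mandated value

def check_p95(lead_times_hours):
    """Return an alert message when P95 lead time breaches the threshold."""
    p95 = np.percentile(lead_times_hours, 95)
    if p95 > P95_THRESHOLD_HOURS:
        return f"ALERT: lead time P95 is {p95:.1f}h (threshold: {P95_THRESHOLD_HOURS}h)"
    return None

print(check_p95([2, 3, 5, 8, 30]))  # ALERT: lead time P95 is 25.6h (threshold: 24h)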

Phase 3: Optimization (Ongoing)

  1. Identify the slowest stage (review? testing? approval?)
  2. Implement one improvement
  3. Measure impact for 2 weeks
  4. Repeat

Dashboard Example

Executive View

┌─────────────────────────────────────────┐
│ Lead Time for Changes: 4.2 hours        │
│ ████████████████████░░░░░ High          │
│                                         │
│ P50: 2.1 hours  P95: 8.3 hours          │
│ Target: < 24 hours    ✓ On Track        │
└─────────────────────────────────────────┘

Team View - Bottleneck Analysis

Stage           Median    P95      % of Total
──────────────────────────────────────────────
Code Review     2.5h      12h      44%   ← BOTTLENECK
CI/CD Build     0.3h      0.8h     5%
Testing         0.5h      1.2h     9%
Approval        1.2h      6h       21%   ← BOTTLENECK
Deployment      0.5h      1h       9%
Verification    0.7h      2h       12%
──────────────────────────────────────────────
Total           5.7h      23h      100%

Related Metrics

  • Deployment Frequency: How often are you deploying these changes?
  • Change Failure Rate: Are you sacrificing quality for speed?
  • MTTR: How quickly can you fix issues?
  • Code Review Time: Isolated review bottleneck
  • CI/CD Duration: Isolated pipeline performance

Tools & Integrations

Native Platform Features

  • GitHub: Deployment API + webhooks (see the sketch after this list)
  • GitLab: DORA metrics dashboard (built-in)
  • Bitbucket: Pipeline duration tracking
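
For the GitHub route, a hedged sketch of pulling deployment records from the REST API (OWNER/REPO and the token environment variable are placeholders; pagination and error handling are omitted):

import os
import requests

# List recent deployments for a repository via GitHub's REST API
resp = requests.get(
    'https://api.github.com/repos/OWNER/REPO/deployments',
    headers={
        'Authorization': f"Bearer {os.environ['GITHUB_TOKEN']}",
        'Accept': 'application/vnd.github+json',
    },
)
resp.raise_for_status()

for d in resp.json():
    # Each record carries the deployed sha and its creation timestamp,
    # which pairs with commit times to compute lead time
    print(d['sha'], d['created_at'])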

Third-Party Solutions

  • LinearB: Automatically tracks lead time
  • Sleuth: DORA metrics platform
  • Haystack: Engineering metrics
  • Faros: Engineering insights
  • Swarmia: Team analytics

DIY Approach

# Calculate lead time from Git + deployment logs
import git
import numpy as np
import pandas as pd

repo = git.Repo('.')
# Map each sha to its timezone-aware commit timestamp
commit_times = {c.hexsha: c.committed_datetime for c in repo.iter_commits()}

deployments = pd.read_csv('deployments.csv')  # columns: sha, deploy_time
deployments['deploy_time'] = pd.to_datetime(deployments['deploy_time'], utc=True)

lead_times = []
for _, deploy in deployments.iterrows():
    commit_time = commit_times.get(deploy['sha'])
    if commit_time is None:
        continue  # deployed sha not found in this repo's history
    lead_time = deploy['deploy_time'] - commit_time
    lead_times.append(lead_time.total_seconds() / 3600)  # hours

print(f"P50: {np.percentile(lead_times, 50):.1f} hours")
print(f"P95: {np.percentile(lead_times, 95):.1f} hours")

Questions to Ask

For Leadership

  • Are we delivering value fast enough?
  • Which stage is our biggest bottleneck?
  • Do we have the automation we need?
  • Are certain teams or services outliers?

For Teams

  • Why do some changes take 10x longer than others?
  • What causes the longest delays?
  • If we could eliminate one step, what would it be?
  • Are we reviewing PRs quickly enough?

For Individuals

  • How long did my last 5 PRs take from commit to production?
  • What's the longest I've waited for review?
  • What's slowing down my changes?

Success Stories

Tech Company (B2B SaaS)

  • Before: 2-week lead time, monthly releases
  • After: 6-hour lead time, multiple daily releases
  • Changes:
    • Automated all manual QA
    • Implemented feature flags
    • Removed approval bottlenecks
    • Set 4-hour code review SLA
  • Impact: 56x faster delivery, 90% reduction in customer complaints

E-commerce Platform

  • Before: 3-day lead time, deployments only on Tuesdays
  • After: 4-hour lead time, deploy any time
  • Changes:
    • Parallelized CI tests (40min → 8min)
    • Implemented canary releases
    • Created smaller, focused PRs
  • Impact: 18x faster, improved developer satisfaction by 40%

Advanced Topics

Feature-Flagged Deployments

Lead Time = Time to Production (not Time to User Access)

Separate metrics:
- Code Lead Time: Commit → Production
- Feature Lead Time: Commit → Feature Enabled
- Customer Lead Time: Commit → Customer Access
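
A sketch of tracking all three, with hypothetical timestamps for a single change:

from datetime import datetime, timezone

# Hypothetical event timestamps for one change
committed  = datetime(2025, 11, 3, 9, 0, tzinfo=timezone.utc)
deployed   = datetime(2025, 11, 3, 15, 0, tzinfo=timezone.utc)  # in production
flag_on    = datetime(2025, 11, 5, 10, 0, tzinfo=timezone.utc)  # feature enabled
ga_release = datetime(2025, 11, 7, 10, 0, tzinfo=timezone.utc)  # all customers

def hours(delta):
    return delta.total_seconds() / 3600

print(f"Code lead time:     {hours(deployed - committed):.0f}h")    # 6h
print(f"Feature lead time:  {hours(flag_on - committed):.0f}h")     # 49h
print(f"Customer lead time: {hours(ga_release - committed):.0f}h")  # 97h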

Microservices Considerations

  • Track lead time per service
  • Identify shared dependencies (databases, shared libraries)
  • Monitor cross-service deployment coordination

Regulatory Environments

  • Document compliance checks in pipeline
  • Automate regulatory requirements
  • Maintain audit trail without slowing deployment

Conclusion

Lead Time for Changes reveals the efficiency of your entire software delivery pipeline. Focus on reducing lead time by identifying and removing bottlenecks—especially in code review, testing, and approvals. Remember: elite performance (< 1 hour) is achievable through automation, trust, and continuous improvement. Start measuring today, find your biggest bottleneck, and optimize incrementally.