What is the Goal Attainment Analysis?

The Goal Attainment Analysis helps you answer a simple question: what’s the probability this metric hits its target by the deadline? It focuses on a single metric and produces a clear on-track signal, a landing range, and action guidance you can use in jobs, reviews, and weekly check-ins.

This is not a guarantee. It’s decision-support: a transparent, uncertainty-aware view of whether you’re likely to hit the goal - and what needs to change if you’re not.


What Value It Gives

  • Single-goal clarity - Get one clean probability instead of debating a dozen moving parts.
  • Early risk detection - Spot “at risk” goals before they miss the deadline.
  • Actionable guidance - See velocity gaps, days remaining, and recommended urgency.
  • Work item accountability - Attribute expected lift to bets, epics, or initiatives (with timing + delivery risk).
  • Better forecasting hygiene - Data-quality warnings help you avoid false precision.

Common Use Cases

  • Weekly goal health checks - “Are we still on track for this KPI?”
  • Single-metric ownership - Give metric owners a clear P(hit) signal for their target.
  • Roadmap reality checks - Test whether planned work plausibly closes the goal gap.
  • Deadline risk reviews - Catch urgent goals that need acceleration or scope changes.
  • Notification jobs - Trigger alerts when a goal flips from on track -> at risk.

How Goal Attainment Is Estimated

Segflow runs a Monte Carlo simulation: it samples plausible futures and counts how often the goal is hit within tolerance.

Under the hood (methodology)

  1. Build the baseline trajectory
    Segflow uses your historical series and/or a baseline forecast (p10/p50/p90) to estimate where the metric could land at the target date.
    If no forecast is provided, it generates a simple trend-based forecast with uncertainty.

  2. Sample a baseline landing
    For each iteration, a landing value is sampled at the target date by interpolating across the percentile bands.

  3. Add work-item impacts (optional)
    Each bet/epic/initiative is sampled from its impact percentiles (or derived from expectedImpact + confidence), then adjusted by:

    • Timing (planned dates + ramp-up)
    • Reach (fraction of users affected)
    • Delivery risk (chance it doesn’t ship)

  4. Test goal hit within tolerance
    A landing counts as “hit” if it clears the target within the tolerance band (default 5% of target value).

  5. Summarize results
    Segflow computes P(hit), landing p10/p50/p90, gap + velocity diagnostics, and a status label.

  6. Handle past deadlines deterministically
    If the deadline has already passed, the model evaluates the actual outcome at the deadline and returns a 0% or 100% P(hit).
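
A minimal TypeScript sketch of the six steps above is shown below. It is illustrative only, not Segflow's implementation: the function names, input shapes, and the exact sampling scheme are assumptions, and only the percentile sampling, risk/timing adjustment, and tolerance test described above are modeled.

```typescript
// Illustrative Monte Carlo loop for goal attainment (a sketch, not Segflow's code).

interface Goal {
  targetValue: number;
  direction: "up" | "down";
  tolerance?: number; // relative to the target (default 0.05); absolute when target is 0
}

interface Percentiles {
  p10: number;
  p50: number;
  p90: number;
}

interface WorkItem {
  impact: Percentiles;   // sampled lift at the target date
  reach?: number;        // fraction of users affected, 0-1
  deliveryRisk?: number; // chance it doesn't ship, 0-1
  rampFraction?: number; // share of the ramp-up completed before the deadline, 0-1
}

// Sample a value from a p10/p50/p90 band via a piecewise-linear inverse CDF
// (tails clamped to p10/p90 for simplicity).
function sampleFromPercentiles(band: Percentiles): number {
  const u = Math.random();
  if (u <= 0.5) {
    const t = Math.max(0, (u - 0.1) / 0.4); // between p10 (u = 0.1) and p50 (u = 0.5)
    return band.p10 + t * (band.p50 - band.p10);
  }
  const t = Math.min(1, (u - 0.5) / 0.4);   // between p50 (u = 0.5) and p90 (u = 0.9)
  return band.p50 + t * (band.p90 - band.p50);
}

// A landing counts as a hit if it clears the target within the tolerance band.
function isHit(landing: number, goal: Goal): boolean {
  const tol = goal.tolerance ?? 0.05;
  const band = goal.targetValue === 0 ? tol : Math.abs(goal.targetValue) * tol;
  return goal.direction === "up"
    ? landing >= goal.targetValue - band
    : landing <= goal.targetValue + band;
}

function estimateGoalAttainment(
  goal: Goal,
  baselineLanding: Percentiles, // baseline forecast evaluated at the target date
  workItems: WorkItem[],
  iterations = 5000,
): { pHit: number; landing: Percentiles } {
  const landings: number[] = [];
  let hits = 0;

  for (let i = 0; i < iterations; i++) {
    // Steps 1-2: sample where the metric lands without any planned work.
    let landing = sampleFromPercentiles(baselineLanding);

    // Step 3: add risk- and timing-adjusted work-item impacts.
    for (const item of workItems) {
      if (Math.random() < (item.deliveryRisk ?? 0)) continue; // this item didn't ship
      const impact = sampleFromPercentiles(item.impact);
      landing += impact * (item.reach ?? 1) * (item.rampFraction ?? 1);
    }

    // Step 4: test the goal within tolerance.
    if (isHit(landing, goal)) hits++;
    landings.push(landing);
  }

  // Step 5: summarize P(hit) and the landing range.
  landings.sort((a, b) => a - b);
  const q = (p: number) =>
    landings[Math.min(landings.length - 1, Math.floor(p * landings.length))];
  return { pHit: hits / iterations, landing: { p10: q(0.1), p50: q(0.5), p90: q(0.9) } };
}
```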

Status thresholds

  • safe: >= 85% P(hit)
  • on track: 50% to < 85% P(hit)
  • at risk: 25% to < 50% P(hit)
  • off track: < 25% P(hit)

Action urgency

  • clear - Safe goals (no action needed)
  • monitor - On track but not yet safe
  • action needed - At risk, or velocity is meaningfully below the required pace
  • urgent - Off track with < 30 days remaining
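
In code, the two tables above amount to a simple mapping. The sketch below is hypothetical: the function names, the 0.8 velocity cut-off for “meaningfully below the required pace”, and the fallback for off-track goals with more than 30 days left are assumptions, not documented rules.

```typescript
// Hypothetical mapping from simulation outputs to status and action urgency.
type Status = "safe" | "on track" | "at risk" | "off track";
type Urgency = "clear" | "monitor" | "action needed" | "urgent";

function statusFor(pHit: number): Status {
  if (pHit >= 0.85) return "safe";
  if (pHit >= 0.5) return "on track";
  if (pHit >= 0.25) return "at risk";
  return "off track";
}

function urgencyFor(status: Status, velocityRatio: number, daysRemaining: number): Urgency {
  if (status === "off track") {
    // Assumed fallback: off-track goals with time left still need action.
    return daysRemaining < 30 ? "urgent" : "action needed";
  }
  if (status === "safe") return "clear";
  // 0.8 is an illustrative threshold for "meaningfully below the required pace".
  if (status === "at risk" || velocityRatio < 0.8) return "action needed";
  return "monitor";
}
```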

Example: A Monthly Activation Goal (Illustrative)

Scenario: You want to hit 45% activation by month-end. The current rate is 39%, and you have two initiatives: “Improve onboarding” and “Fix activation bugs.”

How you’d use Goal Attainment Analysis:

  1. Set the goal: target value = 45%, target date = month-end, direction = up.
  2. Provide history (and a baseline forecast if you have one).
  3. Add work items with impact ranges, planned delivery dates, and confidence.
  4. Run the analysis.
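
For illustration, the inputs for this scenario might be shaped roughly like the sketch below. Work-item fields the docs name (impactP10/impactP50/impactP90, expectedImpact, confidence, plannedStartDate, plannedEndDate, deliveryRisk, reach) are used as-is; the overall shape, the dates, and the impact numbers are assumptions.

```typescript
// Illustrative inputs for the activation scenario (shape and values are assumed).
const activationGoal = {
  goal: {
    targetValue: 45,          // 45% activation
    targetDate: "2025-06-30", // month-end
    direction: "up",
  },
  history: [
    { date: "2025-06-01", value: 37.4 },
    { date: "2025-06-08", value: 38.1 },
    { date: "2025-06-15", value: 39.0 }, // current rate: 39%
  ],
  workItems: [
    {
      name: "Improve onboarding",
      impactP10: 1.0, impactP50: 3.0, impactP90: 5.0, // percentage points
      plannedStartDate: "2025-06-10",
      plannedEndDate: "2025-06-22",
      deliveryRisk: 0.2,
      reach: 0.8,
    },
    {
      name: "Fix activation bugs",
      expectedImpact: 2.0,
      confidence: "medium",
      plannedEndDate: "2025-06-27",
      deliveryRisk: 0.4, // at risk of delay
    },
  ],
  iterations: 5000,
};
```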

What you might learn:

  • P(hit) is only 38% -> status = at risk.
  • The current velocity is below the required pace, triggering action needed.
  • The onboarding initiative contributes most, but the bug fix is at risk of delay.

What you do next:

  • Re-scope the work or add a supporting bet to close the velocity gap.
  • Update impact ranges as you learn, then re-run weekly until P(hit) stabilizes.

What You Provide

Required Inputs

  • Goal definition - Target value + target date + direction (up or down).
  • Historical data or baseline forecast - You need at least one of:
    • history points, or
    • a baseline forecast (p10, p50, p90 time series).

Optional Inputs (Advanced)

  • Work items - Bets/epics/initiatives with:
    • impactP10/impactP50/impactP90, or
    • expectedImpact + confidence (low / medium / high).
  • Timing + risk - plannedStartDate, plannedEndDate, deliveryRisk, completionPercentage.
  • Reach - Fraction of users affected (0-1).
  • Tolerance - Default 5%; treated as absolute when target is 0.
  • Metric bounds - Optional min/max for data-quality warnings.
  • Iterations - Simulation runs (default: 5,000).
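
Taken together, a hypothetical TypeScript shape for the inputs could look like this. Fields the docs name (impactP10/impactP50/impactP90, expectedImpact, confidence, plannedStartDate, plannedEndDate, deliveryRisk, completionPercentage) are kept verbatim; every other name and the exact structure are assumptions.

```typescript
// Hypothetical input shape for the Goal Attainment Analysis (assumed, for orientation only).
interface GoalAttainmentInput {
  // Required
  targetValue: number;
  targetDate: string;          // ISO date
  direction: "up" | "down";
  // At least one of history or baselineForecast must be provided.
  history?: { date: string; value: number }[];
  baselineForecast?: { date: string; p10: number; p50: number; p90: number }[];

  // Optional (advanced)
  workItems?: {
    name: string;
    impactP10?: number;
    impactP50?: number;
    impactP90?: number;
    expectedImpact?: number;                // alternative to explicit percentiles
    confidence?: "low" | "medium" | "high";
    plannedStartDate?: string;
    plannedEndDate?: string;
    deliveryRisk?: number;                  // 0-1 chance it doesn't ship
    completionPercentage?: number;
    reach?: number;                         // 0-1 fraction of users affected
  }[];
  tolerance?: number;                       // default 0.05; absolute when target is 0
  metricBounds?: { min?: number; max?: number };
  iterations?: number;                      // default 5000
}
```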

What You Get Back

Core Outputs

  • P(hit) - Probability of hitting the goal by the deadline (within tolerance).
  • Status - safe / on track / at risk / off track.
  • Landing range - p10 / p50 / p90 at the target date.
  • Expected landing - Median (p50) landing.
  • Current value - Most recent known value.
  • Gap analysis - Current gap, expected gap at deadline, days remaining.
  • Velocity analysis - Current vs required pace and the velocity gap.
  • Action urgency + recommendation - Clear guidance for next steps.
  • Data quality warnings - Flags for short history, extrapolation, or out-of-bounds landings.

Optional Outputs (when work items are provided)

  • Work item breakdown - Per-item contributions (p10/p50/p90), risk-adjusted impact, and delay risk.
  • Baseline P(hit) - Probability without work items (for lift comparison).
  • Total work item impact - Median combined contribution.
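
As a hypothetical sketch, the result could be modeled along these lines (field names are assumptions chosen to mirror the outputs listed above, not Segflow's exact response format):

```typescript
// Assumed result shape for orientation only.
interface GoalAttainmentResult {
  pHit: number;                      // probability of hitting the goal within tolerance
  status: "safe" | "on track" | "at risk" | "off track";
  landing: { p10: number; p50: number; p90: number };
  expectedLanding: number;           // median (p50) landing
  currentValue: number;              // most recent known value
  gap: { current: number; expectedAtDeadline: number; daysRemaining: number };
  velocity: { currentPace: number; requiredPace: number; velocityGap: number };
  urgency: "clear" | "monitor" | "action needed" | "urgent";
  recommendation: string;
  warnings: string[];                // data-quality flags

  // Present only when work items are provided
  workItemBreakdown?: {
    name: string;
    contribution: { p10: number; p50: number; p90: number };
    riskAdjustedImpact: number;
    delayRisk: number;
  }[];
  baselinePHit?: number;             // P(hit) without work items
  totalWorkItemImpact?: number;      // median combined contribution
}
```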

How to Interpret Results

  • Start with P(hit) - It’s the clearest signal of risk and priority.
  • Use velocity to diagnose - A low velocity ratio explains why a goal is at risk.
  • Compare baseline vs. with-work - See whether the plan actually changes the odds.
  • Read direction carefully - “Up” and “down” goals compute gaps differently.
  • Trust the warnings - Data-quality flags often explain unstable or surprising outcomes.
  • Re-run often - This model is designed for continuous updates as work ships.
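
A useful mental model for the velocity diagnostics mentioned above: the required pace is the remaining gap divided by the days remaining, and the velocity ratio compares your current pace to it. The snippet below is an assumed formulation for an “up” goal (for “down” goals the signs flip), not necessarily how Segflow computes it.

```typescript
// Assumed velocity diagnostics for an "up" goal: compare the observed pace
// to the pace needed to close the remaining gap before the deadline.
function velocityDiagnostics(
  currentValue: number,
  targetValue: number,
  recentChangePerDay: number, // slope of the recent trend
  daysRemaining: number,
) {
  const requiredPace = (targetValue - currentValue) / Math.max(1, daysRemaining);
  return {
    currentPace: recentChangePerDay,
    requiredPace,
    velocityGap: requiredPace - recentChangePerDay,
    // Ratio < 1 means the current pace is below the required pace.
    velocityRatio: requiredPace === 0 ? Infinity : recentChangePerDay / requiredPace,
  };
}
```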

Best Practices

  • Use consistent cadence (weekly or daily) and enough history (30+ days recommended).
  • Keep impact ranges realistic; wide ranges are better than false precision.
  • Set planned dates and delivery risk so timing reflects real execution.
  • Re-estimate after each milestone or scope change.
  • Pair with Bet Impact Analysis to decide which work items most improve P(hit).

Summary

The Goal Attainment Analysis gives you a clean, probability-based view of whether a single metric will hit its target. Use it to track risk, guide action, and make goal reviews concrete - then re-run as the plan changes.
