
Is Your Comp Plan Working?

6 weighted metrics across shape, cost, alignment, retention, operations, and sentiment — one composite score that tells you whether the plan is working and where to invest next year.

"Is the plan working?" is the most frequently asked and least rigorously answered question in sales comp. The typical answer is a mix of anecdote, last quarter's attainment number, and whatever the last rep escalation was about. A real answer requires measuring six separate things — each telling you something the others don't — and composing them into a single score you can trend over years.

This scorecard is the capstone of the Module 8 tools. It pulls together outputs from the other tools (Attainment Distribution, Comp Cost, Pay-vs-Performance) and adds three direct inputs (attrition rate, dispute rate, rep sentiment). The composite score answers "plan effectiveness" with a single number, and the per-metric breakdown tells you where next year's redesign focus should go.

The six metrics — and why they're weighted

Outcomes (highest weight)

Voluntary attrition rate (weight 3) and Rep sentiment score (weight 3) — the outcomes everything else drives. A plan that scores well on every structural metric but loses your best reps is not effective; a plan with imperfect math but high rep trust retains talent and produces results. These are the end-state signals.

Structure (medium-high weight)

Attainment distribution health (weight 2) and Pay-for-performance correlation (weight 2) — how the plan's design manifests in performance and pay alignment. Structure metrics tell you whether the plan is producing the differentiation and fairness it's supposed to.

Operations (medium weight)

Comp cost ratio (weight 2) and Dispute rate (weight 1) — operational health. Important but typically fixable without plan redesign. Weight is lower because these can improve significantly through ops work while the underlying plan stays stable.

Why outcomes are weighted highest

Structure and operations are means; outcomes are the end. A plan with a perfect attainment distribution and strong P4P correlation that nonetheless loses 30% of reps annually and shows low sentiment is not working. The reverse — a structurally imperfect plan with low attrition and high trust — usually is working, at least for the current team. Prioritize fixing outcome metrics even if the structural metrics look better.

Plan Effectiveness Scorecard

Enter 6 inputs. Get one composite score + per-metric breakdown + priority list.

How this tool works

The question it answers: Is my comp plan working as a whole system, and which of the 6 effectiveness dimensions should I invest in improving before next planning cycle?

What to enter: the six metric inputs listed below, each on its own scale.

Weights:

  • Weight 3: Attrition, Sentiment (outcomes)
  • Weight 2: Distribution, P4P correlation, Cost ratio (structure + ops)
  • Weight 1: Dispute rate (ops)

What you'll get back:

  • Composite 0–100 score with band: Effective ≥80 / Solid ≥65 / Mixed ≥50 / Broken <50.
  • Per-metric bar chart normalized to 0–100 with the weakest metric flagged.
  • Prioritized recommendations ranked by impact (lowest-scoring × highest-weight first).

Sample inputs pre-loaded reflect a moderately healthy plan. Replace with your numbers from the companion tools.
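The composite described above can be sketched in a few lines of Python. This is an assumed implementation, not the tool's published formula: each metric score is taken as already normalized to 0–100, a 0 means "no data" and is excluded from the weighting (as the FAQ notes), and the bands match the thresholds listed.

```python
# Assumed sketch of the composite score. Weights and band thresholds
# come from the scorecard description; everything else is illustrative.
WEIGHTS = {
    "attrition": 3, "sentiment": 3,          # outcomes
    "distribution": 2, "p4p": 2, "cost_ratio": 2,  # structure + ops
    "dispute_rate": 1,                        # ops
}

def composite(scores: dict[str, float]) -> float:
    """Weighted average of available (non-zero) normalized metric scores.

    A score of 0 means "no data" and is dropped, so the composite
    recalculates over the remaining weights.
    """
    available = {k: v for k, v in scores.items() if v > 0}
    total_weight = sum(WEIGHTS[k] for k in available)
    return sum(WEIGHTS[k] * v for k, v in available.items()) / total_weight

def band(score: float) -> str:
    """Map a 0–100 composite to its band."""
    if score >= 80: return "Effective"
    if score >= 65: return "Solid"
    if score >= 50: return "Mixed"
    return "Broken"
```

Note the missing-data behavior: dropping a metric renormalizes the weights rather than penalizing the score, which is why skipping the heavily weighted outcome metrics weakens the result so much.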

Benchmarks, ranges, and default values in this tool reflect Falcon's practitioner experience across consulting engagements. They are directional starting points, not substitutes for market survey data. For binding compensation decisions, validate key figures against Radford, Mercer, Carta, or WorldatWork survey data for your specific geography, industry, and company stage.

Voluntary Attrition Rate (Outcomes · Weight 3)
Resignations / avg headcount this year. Benchmark: 12–18% typical, <10% strong, >20% concerning.

Rep Sentiment Score (Outcomes · Weight 3)
0–100 from the Plan Health Check Survey. ≥80 Healthy, ≥65 Solid, ≥50 Warning, <50 Critical.

Sweet-Spot Attainment % (Structure · Weight 2)
% of reps in 80–120% attainment, from the Attainment Distribution Analyzer. Healthy: 60–70%.

P4P Correlation (Structure · Weight 2)
Pearson r from the Pay vs Performance Scatter. Enter as 0.00–1.00. Strong: ≥0.8.

Comp Cost Ratio (Operations · Weight 2)
Total comp / revenue at actual team attainment (%). Typical: 7–11%. Elevated: 11–15%.

Dispute Rate (Operations · Weight 1)
Disputes / deals per cycle (%). Lean: <3%. Acceptable: 3–6%. Concerning: 6–10%.
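Because each input arrives on its own scale, the tool has to normalize it to 0–100 before weighting. The exact mapping isn't specified here; one plausible sketch for the attrition input interpolates linearly between the benchmark anchors quoted above (the specific anchor points and the linear interpolation are assumptions, not the tool's published formula):

```python
def normalize_attrition(rate_pct: float) -> float:
    """Map voluntary attrition % to a 0–100 score.

    Anchors are assumptions loosely derived from the stated benchmarks
    (<10% strong, 12–18% typical, >20% concerning): at or below 10% is
    a perfect score, 20% scores 50, and 30%+ scores 0, with linear
    interpolation in between.
    """
    anchors = [(0, 100), (10, 100), (15, 70), (20, 50), (30, 0)]
    if rate_pct >= anchors[-1][0]:
        return 0.0
    for (x0, y0), (x1, y1) in zip(anchors, anchors[1:]):
        if rate_pct <= x1:
            return y0 + (y1 - y0) * (rate_pct - x0) / (x1 - x0)
    return 0.0
```

The other five inputs would each get an analogous piecewise mapping keyed to their own benchmark ranges (and an inverted shape for metrics like dispute rate, where lower is better).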

How to use the composite score

Effective (≥80) — Plan is working

All six metrics at or near benchmarks. Protect the design: next year's plan should be a refinement, not a redesign. Focus on continuity and small targeted improvements to the one or two lower-scoring metrics.

Solid (65–79) — Selective improvement

Most metrics healthy, one or two trailing. The per-metric breakdown points at the specific gap. Focus the off-season plan review on the lowest-scoring metric; leave the rest alone.

Mixed (50–64) — Structural review needed

Multiple metrics showing strain. The plan is producing inconsistent outcomes. Plan a meaningful refresh for next year, driven by the two lowest-scoring metrics. Don't try to fix everything at once — prioritize by weight × gap size.

Broken (<50) — Redesign required

Most metrics in poor territory. This is a system-level failure, not a single-metric problem. Year-over-year patches won't close the gap; commit to a genuine plan redesign for the next fiscal year. Engage executive sponsorship — don't try to fix this inside SalesOps alone.
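The "weight × gap size" prioritization mentioned above works the same way in every band: rank each metric by its weight times its distance from 100. A minimal sketch (assumed logic; the metric keys are illustrative):

```python
# Same weights as the scorecard description; the ranking logic is an
# assumed sketch of "lowest-scoring × highest-weight first".
WEIGHTS = {
    "attrition": 3, "sentiment": 3,
    "distribution": 2, "p4p": 2, "cost_ratio": 2,
    "dispute_rate": 1,
}

def priorities(scores: dict[str, float]) -> list[tuple[str, float]]:
    """Rank metrics by weight * (100 - score), largest weighted gap first.

    The top entry is the first candidate for next year's redesign focus.
    """
    impact = {m: WEIGHTS[m] * (100 - s) for m, s in scores.items()}
    return sorted(impact.items(), key=lambda kv: kv[1], reverse=True)
```

For example, a sentiment score of 60 (gap 40, weight 3, impact 120) outranks a dispute-rate score of 40 (gap 60, weight 1, impact 60), which is the point of weighting outcomes highest.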

The Goodhart risk

Once you score the plan, reps and managers will optimize to the score. Don't publish the metric scores to the sales team — use them as internal SalesOps + Finance + HR diagnostic only. Transparency about plan health is healthy; transparency about how plan health is measured invites gaming.

Running your annual plan review?

We help SalesOps teams translate scorecard outputs into specific plan changes and change-management plans. Book a 20-minute review.

Book a 20-minute consultation →

FAQ

How often should I run this scorecard?

Annually, at the start of the plan design cycle (usually Q3 for Jan 1 fiscal). Mid-year for the outcome metrics (attrition, sentiment) if either is trending poorly. Don't run quarterly — several metrics need year-over-year signal to be meaningful.

What if I don't have data for one of the metrics?

Enter 0 for unknown metrics to exclude them from weighting — the composite recalculates on available metrics only. But note: skipping outcome metrics (attrition, sentiment) dramatically weakens the scorecard. Prioritize getting those numbers before running this.

Why not include revenue attainment as a metric?

Revenue attainment reflects the whole business, not the plan. A team hitting 120% of quota in an easy market with a badly structured plan would outscore a team at 92% in a hard market with a strong plan; raw attainment rewards market conditions, not plan design. The metrics here isolate plan effectiveness from market context.

Can I compare scores year-over-year?

Yes — this is the highest-value use. A plan trending from 72 to 78 is getting better; from 72 to 65 is getting worse. Single-year absolutes matter less than the trajectory. Keep a history.