Attrition data is a lagging indicator. Rep sentiment is the leading one. In our experience, attrition typically lags sentiment scores by 60–120 days — and the comp plan is almost always a factor, not an isolated trigger. Mid-year check-ins catch the decay before it's irreversible.
This survey is designed to be taken by the rep (or by you, stepping into the rep's seat). Ten questions measure sentiment across four weighted dimensions. The output shows overall plan health, the dimension lagging most, and the specific interventions that move the needle on each.
The four dimensions — and why they're weighted
Trust (weight 3)
The highest-impact dimension. Reps who don't trust the plan stop engaging with it — they focus on what they can control (their deals) and disengage from everything else (statements, pipeline calls, coaching). Trust is also the slowest to rebuild once broken. A single missed dispute, a surprise mid-year change, or an opaque statement can degrade trust for 6+ months.
Fairness (weight 3)
Reps compare across territories, peers, and managers — sometimes explicitly, often implicitly. Perceived unfairness (even when objectively defensible) drives attrition faster than actual underpayment. High weight because fairness disputes convert to attrition at 2-3x the rate of plan-mechanics disputes.
Clarity (weight 2)
Reps who don't understand the plan can't optimize for it. Low clarity creates deal-selection inefficiency and a steady stream of dispute inquiries. Medium weight because low clarity carries a real economic cost even when it doesn't directly erode trust.
Motivation (weight 2)
Does the plan actually drive the behavior you want? A clear, trusted, fair plan that doesn't motivate is still a broken plan — reps will hit attainment levels that feel comfortable rather than stretching. Lower weight than trust/fairness because motivation typically follows the other three, not the other way around.
10 questions, each on a 1–5 Likert scale (Strongly Disagree → Strongly Agree), each tagged to one dimension. Each dimension score is the average of that dimension's questions, rescaled to 0–100; the overall score is the weighted composite across dimensions. Bands: Healthy (≥80), Solid (≥65), Warning (≥50), Critical (<50). The lowest-scoring dimension is flagged as the intervention priority.
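To make the math concrete, here's a minimal scoring sketch in Python. The dimension weights and band cutoffs come from this page; the linear 1→0 / 5→100 rescale and the question-to-dimension grouping in the sample answers are assumptions for illustration, not the survey's exact internals.

```python
WEIGHTS = {"trust": 3, "fairness": 3, "clarity": 2, "motivation": 2}
BANDS = [(80, "Healthy"), (65, "Solid"), (50, "Warning"), (0, "Critical")]

def to_percent(likert: int) -> float:
    """Assumed linear rescale: 1 -> 0, 3 -> 50, 5 -> 100."""
    return (likert - 1) / 4 * 100

def score(answers: dict[str, list[int]]) -> tuple[dict[str, float], float, str, str]:
    """answers maps each dimension to that dimension's Likert responses."""
    # Dimension score: average of that dimension's questions, rescaled to 0-100.
    dims = {d: sum(map(to_percent, a)) / len(a) for d, a in answers.items()}
    # Overall score: weighted composite (Trust/Fairness count 3x, Clarity/Motivation 2x).
    overall = sum(dims[d] * w for d, w in WEIGHTS.items()) / sum(WEIGHTS.values())
    band = next(label for cutoff, label in BANDS if overall >= cutoff)
    weakest = min(dims, key=dims.get)  # flagged as the intervention priority
    return dims, overall, band, weakest

# One rep's ten answers, grouped by dimension (grouping is illustrative).
answers = {
    "trust": [4, 3, 4],
    "fairness": [3, 3, 4],
    "clarity": [5, 4],
    "motivation": [4, 3],
}
dims, overall, band, weakest = score(answers)
print(f"overall={overall:.1f} ({band}); weakest: {weakest} at {dims[weakest]:.1f}")
```

With these sample answers the composite works out to 67.5 (Solid), with fairness flagged as the weakest dimension at 58.3.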
Plan Health Check Survey
10 questions, 5-point scale. Answer honestly — the output is only useful if you do.
How this survey works
The question it answers: How is the comp plan actually landing with reps right now, which dimension is weakest, and where should my communication and plan adjustments focus before end of year?
Who should take it:
- Individual reps (best — aggregate the results across the team).
- SalesOps stepping into the rep's seat (useful for self-audit; less accurate than rep-sourced data).
- Managers scoring on behalf of their reps (middle ground — faster than a team survey but biased toward manager perception).
Scale meaning:
- 1 = Strongly Disagree / 2 = Disagree / 3 = Neutral / 4 = Agree / 5 = Strongly Agree
- Neutral (3) on any question is an amber flag: reps usually have opinions on comp, so neutrality often masks "I don't want to complain."
Dimension weights:
- Trust (3) and Fairness (3) — highest weight; drive attrition fastest.
- Clarity (2) and Motivation (2) — medium weight; drive efficiency and behavior.
What you'll get back:
- Overall health score (0–100) with band: Healthy ≥80 / Solid ≥65 / Warning ≥50 / Critical <50.
- Per-dimension bar chart with the weakest dimension flagged.
- Intervention recommendations tailored to the weakest dimension.
For a real team-wide survey, run this across 10+ reps and average the scores. Individual results reveal one rep's experience; aggregate results reveal the plan's experience.
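If you do run it team-wide, a sketch of the aggregation step might look like the following, reusing the per-rep dimension scores from the snippet above. Averaging per dimension (rather than averaging only the overall numbers) keeps the weakest-dimension flag meaningful at the team level; the names and sample values here are illustrative.

```python
from statistics import mean

def aggregate(per_rep: list[dict[str, float]]) -> dict[str, float]:
    """Average each dimension across reps (per_rep holds one score-dict per rep)."""
    dims = per_rep[0].keys()
    return {d: mean(rep[d] for rep in per_rep) for d in dims}

# Two reps shown for brevity; aim for 10+ in a real survey.
team = aggregate([
    {"trust": 66.7, "fairness": 58.3, "clarity": 87.5, "motivation": 62.5},
    {"trust": 75.0, "fairness": 70.8, "clarity": 75.0, "motivation": 50.0},
])
weakest = min(team, key=team.get)
print(f"team weakest dimension: {weakest} at {team[weakest]:.1f}")
```

Report only the aggregate, anonymized; per the FAQ below, reps who fear identification give neutral answers and the signal disappears.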
How to act on your score
Healthy (≥80) — Plan is landing well
Reps trust the plan, perceive it as fair, understand it, and find it motivating. Protect this: keep the regular mid-year check-ins, avoid surprise changes, and maintain statement quality. Don't let complacency become regression.
Solid (65–79) — Minor dimension-specific gaps
One dimension likely trails the others. Targeted intervention in that dimension moves the overall score without requiring a broader plan redesign. Usually fixable in one quarter.
Warning (50–64) — Multiple dimensions showing strain
Reps are signaling active concerns. Attrition risk is elevated even if it hasn't shown up in resignation data yet. Invest in the weakest dimension immediately; plan a structured intervention for end-of-cycle communication.
Critical (<50) — Plan is actively driving disengagement
Retention risk is real and near-term. This is no longer a comp communication issue; it's a trust crisis. Acknowledge it publicly, bring in executive engagement, and accept that repair takes 2–3 quarters of consistent action. Don't try to fix it by promising; fix it by changing.
Most SalesOps teams run plan-health surveys only at year-end — right when they're designing next year's plan. That's too late to affect current-year experience, and the results are dominated by recency bias (last statement's issues, not the year's pattern). Run this mid-year to catch decay while you can still intervene.
Worried about plan health?
We help SalesOps teams run rep-sentiment surveys and translate results into specific mid-year interventions. Book a 20-minute review of your current state.
FAQ
Should I look at individual scores or aggregate scores?
Both, but for different purposes. Individual scores help the specific rep-manager conversation. Aggregate scores reveal the plan's state. Always present aggregate data anonymized to protect individual candor; reps who fear identification give neutral answers and the signal disappears.
How often should we run this survey?
At minimum semi-annually (mid-year + end-of-year). Quarterly is better if your plan is complex or recently changed. Weekly is survey fatigue and generates noise, not signal.
What if reps won't answer honestly?
The most common reason is fear of manager retaliation or career impact. Protect anonymity explicitly (third-party survey tool, no manager-level drill-down, no rep names in reports). If you can't offer anonymity, the data you get is worth less than no data, because you'll act on a biased signal.
Can I add or customize questions?
Yes: this 10-question version is a baseline. Common additions: role-specific questions (overlay reps have different concerns than primary reps), product-specific questions (multi-product plans surface different issues), and tenure-specific questions (new hires and tenured reps have divergent sentiments). Just maintain dimension balance so the scoring stays interpretable.