P4P Scatter
Pay for Performance

Is Your Plan Actually Paying for Performance?

Plot rep pay against attainment. Flag the inversions — high performers being underpaid, low performers being overpaid — and see the correlation that proves or disproves your P4P story.

"We pay for performance" is the promise every comp plan makes. Whether it's true is an empirical question. Plot total comp against attainment for every rep, and the answer is immediately visible: if the two variables are positively correlated with low scatter, the plan is working. If top performers are being outearned by mid performers, or low performers are being paid like top ones, the plan has inverted, and your reps know it even if you don't.

This tool takes a list of rep attainment + total comp data, plots the scatter, computes the correlation coefficient (how strongly pay tracks performance), and flags individual reps whose pay position is inverted against their peers. The output gives you the evidence to either defend the current plan or justify structural changes.

The two metrics that matter

Correlation (how strongly pay tracks attainment)

A correlation of 1.0 means pay is a perfectly linear function of attainment: every additional percentage point of attainment produces a proportional increase in pay — perfect alignment. Falcon uses 0.8+ as strong, 0.6–0.8 as moderate, below 0.6 as loose — these are practitioner thresholds, not statistical standards. Moderate correlation comes with enough noise that reps will correctly perceive some unfairness. Below 0.6 means pay is only loosely tied to performance; factors other than attainment (tenure, base differentials, SPIFFs) dominate the pay signal.

Inversions (individual cases where alignment breaks)

Even with strong correlation, specific rep comparisons can break the P4P promise. Two patterns: inverted-high-attainment (a rep at 120% attainment earning less than a peer at 90%) and inverted-high-pay (a rep earning top-tier comp despite below-median attainment). Either pattern, if visible to the team, corrodes trust regardless of what the overall correlation looks like.

Why you need both correlation and inversions

Correlation is the systemic health check. Inversions are the individual-case review. A team with 0.9 correlation can still have 2–3 visible inversions that everyone talks about — those matter for trust even if statistically marginal. Meanwhile, a team with 0.5 correlation has systemic drift, which may not manifest as one dramatic case but pollutes overall fairness perception. Fix correlation by plan design; fix inversions by individual pay or territory adjustment.

Pay vs Performance Scatter

Enter rep attainment + total comp. We plot, correlate, and flag inversions.

ℹ️ How this tool works

The question it answers: Does my plan actually reward performance — and where specifically does the pay-for-performance link break at the individual-rep level?

What to enter — one row per rep:

  • Rep name / ID — any label (anonymous IDs preferred for survey-style use).
  • Attainment % — the rep's year-to-date or annual quota attainment (e.g., 115 for 115%).
  • Total comp $ — fully-loaded pay for the period (base + commission + SPIFFs). Enter in thousands or full dollars consistently across all rows.

What the tool computes:

  • Pearson correlation coefficient between attainment and pay.
  • Linear trend line (least squares fit).
  • Inversion flags: reps >1.5 std-dev below the trend line (underpaid for performance) or >1.5 std-dev above (overpaid for performance).

What you'll get back:

  • Correlation coefficient with band: Strong ≥0.8 / Moderate ≥0.6 / Weak <0.6.
  • Scatter plot with trend line and inversion points highlighted in red/amber.
  • Flagged-rep table with the specific pay/attainment pairs that break P4P alignment.
  • Recommendations tailored to correlation strength and inversion count.

Sample team (8 reps) pre-loaded. Replace with your team's data and rerun. Minimum 5 reps for correlation to be statistically meaningful.

Benchmarks, ranges, and default values in this tool reflect Falcon's practitioner experience across consulting engagements. They are directional starting points, not substitutes for market survey data. For binding compensation decisions, validate key figures against Radford, Mercer, Carta, or WorldatWork survey data for your specific geography, industry, and company stage.

Rep name / ID
Attainment %
Total comp $

How to act on your diagnosis

Strong correlation (≥0.8), few inversions

Plan is working as designed. P4P promise is empirically true. Protect this — any plan changes should be stress-tested against the scatter to make sure you're not introducing drift.

Strong correlation, 1–2 inversions

Overall plan is healthy but specific cases break alignment. Investigate the flagged reps individually — often tenure-base differential, territory luck, or a SPIFF overlay explains the inversion. Fix with individual adjustment (comp change, territory change) rather than plan redesign.

Moderate correlation (0.6–0.8)

Pay tracks performance, but with enough noise that reps will notice. Factors other than attainment (tenure-driven base differentials, SPIFFs that dominate commission, overlay credit flowing to non-rep contributors) are materially influencing pay. Audit these layers and decide whether each is correctly weighted.

Weak correlation (<0.6)

P4P is marketing, not reality. Reps will figure this out regardless of what the plan document says. Either the plan structure is broken or the data is contaminated (wrong comp period, inclusion of non-performance pay). Fix the plan or exclude the non-performance layers before the next review.

The tenure trap

Most correlation weakness traces to tenure-driven base differentials. A five-year rep at a $140K base can take home $200K total on 90% attainment while a one-year rep at a $100K base takes home $180K total on 110% attainment — the newer rep outperforms and still earns less. If the base delta is justified by role complexity, fine; if it's pure seniority pay, the plan is rewarding tenure, not performance. Be explicit about which you intend.
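The arithmetic behind that example, using the same hypothetical figures, shows why a variable-only view separates the two effects: total comp ranks the veteran ahead, but variable pay alone ranks with performance.

```python
# Hypothetical reps from the example above (dollars, attainment %)
veteran = {"base": 140_000, "total": 200_000, "attainment": 90}
newer   = {"base": 100_000, "total": 180_000, "attainment": 110}

# Ranked on total comp, the veteran comes out ahead despite lower attainment
total_inverted = (veteran["total"] > newer["total"]
                  and veteran["attainment"] < newer["attainment"])

# Ranked on variable pay (total minus base), the ordering follows performance
veteran_variable = veteran["total"] - veteran["base"]  # 60,000
newer_variable = newer["total"] - newer["base"]        # 80,000
```

Whether the 40K base delta is role complexity or pure seniority is the judgment call the paragraph above asks you to make explicit.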

Need to defend or redesign your P4P story?

We help SalesOps teams audit P4P alignment and design structural fixes that restore correlation without disrupting plan stability. Book a 20-minute review.

Book a 20-minute consultation →

FAQ

Should I include base salary in "total comp" or just variable?

Include total comp (base + variable + SPIFFs). Reps see their total pay, not their variable alone, when evaluating fairness. If you want to isolate plan performance from base differentials, run separately on variable only — but total comp is the number that drives rep perception.

How many reps do I need?

Minimum 5 for correlation to be statistically meaningful. 10+ gives reliable trend fit. Below 5, the correlation is heavily influenced by any single outlier — interpret the scatter visually rather than relying on the correlation number.

What if my roles have different pay structures?

Run separately per role. Mixing primary AEs and overlay SEs in one plot produces a meaningless correlation — the two roles have different pay-to-attainment ratios by design. Same-role analysis is what reveals plan-design issues.

How does this pair with the Attainment Distribution Analyzer?

Attainment Distribution Analyzer tells you whether attainment itself is well-distributed. This tool tells you whether pay tracks attainment. A healthy team has both: well-distributed attainment AND strong pay correlation. Weak on either dimension tells you which problem to fix first.