The single biggest reason AI-generated comp reports flop in the boardroom: the prompt was written for the analyst, not the audience. A CFO doesn't want a 14-bullet attainment summary. A rep doesn't want a TAM breakdown. The same data, told differently, lands very differently.
This tool builds a structured prompt for you. You pick the audience, the report type, and the period. You paste in your data sample. The tool composes a prompt that frames the AI's role, scopes the analysis, specifies the output structure for that audience, and locks the tone — so you get a report that reads like it was written by a senior analyst who knows their reader.
The output is a prompt, not a report. You paste it into ChatGPT, Claude, Gemini, or your tool of choice. Why a prompt? Because your data shouldn't leave your browser to generate the prompt — and the prompt-building logic is reusable across every model and every report you write.
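To make that reusability concrete, here is a minimal sketch of what prompt-building logic of this kind looks like in the browser. Every name and string below is an illustrative assumption, not the tool's actual source:

```ts
// Minimal sketch of client-side prompt composition. All names and wording
// here are illustrative, not the tool's real implementation.
type Audience = "Rep" | "Frontline Manager" | "CRO" | "CFO" | "CHRO" | "Board";

function composePrompt(
  audience: Audience,
  reportType: string,
  period: string,
  headline: string,
  data: string,
): string {
  return [
    `You are a senior compensation analyst reporting to a ${audience}.`,
    `Produce a ${reportType} for this period: ${period}.`,
    `Lead with this headline metric: ${headline}.`,
    "Follow the output structure, tone, and length constraints below exactly.",
    "Use only the figures in the DATA block. Do not invent or extrapolate numbers.",
    `DATA:\n${data}`,
  ].join("\n\n");
}
```

Because the output is plain text, the same function works whether you paste the result into ChatGPT, Claude, or a private deployment.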
Why this works better than a one-shot prompt
Audience framing changes everything
"Summarise this attainment data" is a bad prompt. "You are reporting to a CFO. Lead with the dollar variance vs. plan, then the top three drivers, then the recommended action. Maximum 200 words. No jargon" is a good prompt. Same data, different audience, different report. This tool injects the audience framing for you.
Output structure prevents wandering
Without a structure, AI will produce a long, balanced narrative that hides the headline. With a structure ("Section 1: Headline. Section 2: Drivers. Section 3: Action"), the model is forced to lead with what matters. The tool generates the right structure for each audience.
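Here is a sketch of how per-audience structures might be stored and rendered into the prompt; the section lists are illustrative, not the tool's full set:

```ts
// Sketch of per-audience output structures (illustrative entries).
const OUTPUT_STRUCTURES: Record<string, string[]> = {
  CFO:   ["Headline (variance vs. plan)", "Top three drivers", "Recommended action"],
  Rep:   ["Where you stand", "What changed this period", "One next step"],
  Board: ["Headline number", "Trajectory vs. plan", "Risks and asks"],
};

// Rendered as numbered sections the model is told to follow in order:
function renderStructure(audience: string): string {
  const sections = OUTPUT_STRUCTURES[audience] ?? ["Headline", "Drivers", "Action"];
  return sections.map((s, i) => `Section ${i + 1}: ${s}.`).join(" ");
}
```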
Tone & length lock the register
A board update reads differently from a rep nudge. The tool sets explicit tone (formal/conversational/coaching) and length (50/200/500 words) based on audience defaults — you can override either, but the defaults are what works.
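Represented in code, the defaults are just another lookup. The audience-to-default mapping shown below is assumed; only the option sets (formal/conversational/coaching, 50/200/500 words) come from the tool itself:

```ts
// Assumed audience defaults for tone and length; both overridable in the UI.
// Which audience gets which default is an illustrative guess.
type Tone = "formal" | "conversational" | "coaching";

const TONE_DEFAULTS: Record<string, { tone: Tone; maxWords: number }> = {
  Board:               { tone: "formal",         maxWords: 200 },
  CFO:                 { tone: "formal",         maxWords: 200 },
  CRO:                 { tone: "conversational", maxWords: 500 },
  "Frontline Manager": { tone: "conversational", maxWords: 200 },
  Rep:                 { tone: "coaching",       maxWords: 50 },
};
```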
It does not call an AI model. It does not send your data anywhere — everything runs in your browser. It builds a prompt you paste into the AI of your choice. That separation matters: it keeps the data path under your control and lets you swap models without rebuilding anything.
AI Report Prompt Builder
Pick audience and report. Paste data. Get a prompt.
ℹ️ How this tool works
The question it answers: How do I write an AI prompt that produces a comp report my audience will actually read — not a generic data dump?
What to enter:
- Audience — who's reading this (Rep, Frontline Manager, CRO, CFO, CHRO, Board)
- Report type — Attainment summary, Payout analysis, Plan effectiveness, Exception report, or Quota progress
- Period — Month, Quarter, YTD, or Annual
- Headline metric — the one number you want to land first (e.g. "Q3 attainment 87%")
- Data sample — paste a CSV or short summary the AI should analyse. Keep PII out.
- Tone & length — defaults set by audience; override if needed
What you'll get back:
- A copy-ready prompt with audience framing, scope, data, output structure, tone, and length constraints
- A preview of the output structure the prompt will produce (so you know what to expect)
- A short list of data-handling warnings if your sample looks like it might contain PII
Sample inputs are pre-loaded so you can hit Generate immediately and see the prompt shape. Edit any field before you generate.
Output structures and tone defaults reflect Falcon's experience working with comp leaders across stakeholder types. Treat them as starting points — your CFO may want longer; your reps may want shorter. Override the defaults whenever you know your audience better than the average they were built for.
How to use the prompt
Step 1: Sanity-check the data
Before you paste the prompt anywhere, scan the data sample for full names, email addresses, or anything that ties a salary figure to an identifiable individual. Aggregate to region, segment, or tenure. The tool warns you if the data looks risky, but you're the final reviewer.
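If you want a first-pass screen before the eyeball check, a few regexes catch the obvious cases. This is a rough, illustrative heuristic; treat a match as a nudge to review, not a verdict either way:

```ts
// Rough client-side PII screen (illustrative heuristics only).
const PII_PATTERNS: Record<string, RegExp> = {
  "email address": /[\w.+-]+@[\w-]+\.[\w.]+/,
  "phone number": /\+?\d[\d\s().-]{8,}\d/,
  "possible full name": /\b[A-Z][a-z]+ [A-Z][a-z]+\b/, // crude; expect false positives
};

function piiWarnings(sample: string): string[] {
  return Object.entries(PII_PATTERNS)
    .filter(([, re]) => re.test(sample))
    .map(([kind]) => `Data sample may contain a ${kind}; review before pasting.`);
}
```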
Step 2: Paste into ChatGPT, Claude, or your enterprise AI
The prompt is structured to work in any frontier model. If your company has a private deployment (Azure OpenAI, Bedrock, in-house Claude), use that — it keeps the data path inside your tenancy.
Step 3: Read the output critically
Check three things: (1) the headline matches what you provided, (2) the structure follows the order you requested, (3) no number was invented that wasn't in your data sample. If any of those fail, paste back: "Re-do, but use only the figures in the data block above."
Step 4: Iterate, then template
Once a prompt produces consistently good output for one audience and one report type, save it. The whole point of building prompts this way is reusability — you should be running monthly reports off a saved prompt, only swapping the data block.
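In practice a saved template is just the generated prompt with the data block swapped for a placeholder. A sketch, with the template text and placeholder name purely illustrative:

```ts
// Sketch of the "save and reuse" step: freeze everything except the data.
const CFO_MONTHLY_TEMPLATE = `
You are reporting to a CFO. Lead with the dollar variance vs. plan,
then the top three drivers, then the recommended action. Maximum 200 words.
Use only the figures in the DATA block. Do not extrapolate.

DATA:
{{DATA_BLOCK}}
`.trim();

// Each month, only the data block changes:
function monthlyPrompt(csv: string): string {
  return CFO_MONTHLY_TEMPLATE.replace("{{DATA_BLOCK}}", csv);
}
```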
AI will sometimes invent figures that sound plausible. The structured prompt this tool builds reduces that risk — it explicitly tells the model "use only the figures provided, don't extrapolate" — but it doesn't eliminate it. Always cross-check the headline number and the top three claims before sending the report onward.
Building an AI reporting workflow?
We help comp ops teams move from one-off prompts to repeatable AI reporting workflows that hold up under stakeholder scrutiny.
Book a 20-minute consultation →
FAQ
Why doesn't the tool just call an AI and generate the report for me?
Two reasons. First, routing your comp data through a third-party tool's API creates a data-handling problem you don't need. Second, locking you into one model means you'd have to rebuild when your company switches AI vendors. A prompt-only tool stays useful regardless of which AI you use.
Can I use this for individual rep comp statements?
Not directly — rep statements need exact calculation accuracy and audit trails AI shouldn't generate from scratch. Use this tool for aggregate reports (region, segment, tenure cohorts). For individual statements, AI should only be drafting the narrative around figures your comp engine has already calculated.
What if the generated structure isn't what my stakeholder wants?
The defaults are starting points. After you generate, paste the prompt into your AI of choice, see the output, and add a sentence to the prompt like: "Move the variance section before the drivers section." Iterate until the structure matches what your CFO wants, then save that version as your CFO-monthly template.
Can the prompt produce reports in other languages?
Yes — at the bottom of the prompt, add: "Respond in [language]." Most frontier models handle the major business languages well. For regulated regions (EU pay transparency reports, for example), have a native speaker review before sending.