Why Your Sales Forecast Is Wrong — and What to Do About It
More than 80% of companies missed their sales forecast in at least one quarter over the last two years, according to Gong's 2024 survey of more than 2,000 business leaders. A separate Xactly benchmark report found that 4 in 5 sales and finance leaders missed a quarterly forecast in the past year — with over half missing it twice or more. Fewer than 20% of sales leaders in a Challenger survey rated their forecasting as genuinely predictable.
These are not small numbers. And in the current economic environment, where every dollar of misallocated resource compounds, forecast accuracy is not merely a sales operations concern. It is a business health metric. A missed forecast is not just a bad quarter. It triggers over-hiring or under-hiring, misaligned marketing spend, stalled investment decisions, and the kind of confidence erosion with boards and investors that is slow to rebuild.
The standard response to chronic forecast failure is to blame the people doing the forecasting. The optimist who keeps promising deals that do not close. The sandbagging rep who sits on wins until the last week and then "beats" the number by surprise. The manager who hedges everything and offers a range so wide it provides no actual visibility. These behaviours are real, they are common, and they are treatable. But they are not the root cause. The root cause is that most organisations treat forecasting as a report rather than a discipline. They ask for numbers without building the habits that make numbers accurate.
Forecasting is a skill. Skills can be taught. And the way you teach forecasting is not through better templates or more sophisticated software. It is through a weekly cadence of accountability so consistent, so specific, and so well-designed that it changes how every rep thinks about their pipeline — and, over time, reveals far more about deal quality, process rigour, and individual character than any performance review ever will.
The discipline: weekly, monthly, quarterly
The foundation is a weekly forecast call. Not monthly. Not "when the quarter gets close." Weekly.
This feels like a lot until you understand what the weekly cadence actually produces. A rep who knows their forecast will be reviewed — specifically, with named deals, specific amounts, and a close date — every single week develops a completely different relationship with their pipeline than a rep who submits a number to a spreadsheet once a month. The weekly rhythm forces honesty at the point when honesty is still useful. A deal that is showing warning signs in week two of the month can be addressed. The same deal flagged in week four, two days before the quarter closes, cannot.
The weekly call is the smallest unit of accountability that actually changes behaviour. It is also the most informative diagnostic tool a sales leader has. Every week, without exception, the rep is telling you something about how they understand their business — whether they know their deals, whether they are reading the buyer accurately, whether they are being straight with you or managing the narrative. Twelve weeks of weekly data on the same person reveals more than any unstructured quarterly conversation ever could.
Monthly reviews aggregate the weekly discipline into patterns — which deals moved, which slipped, which were called correctly and which were not. Quarterly reviews translate the patterns into business planning. Each layer does a different job. The weekly call is the engine. Monthly and quarterly reviews are how you use the data the engine produces.
The system: green, yellow, red
The most powerful element of a well-run forecast cadence is not the call itself. It is what you track between calls.
Here is the system I used. Every week, each rep's forecast is scored against their actual result. Green means they came in between 90% and 120% of what they called — accurate, within a reasonable band. Red means they came in below 90% — they missed. Yellow means they came in above 120% — they sandbagged, and the business was under-resourced for the upside they delivered.
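The scoring rule is mechanical enough to write down precisely. Here is a minimal sketch in Python — the function name is mine, and treating the 90% and 120% boundaries themselves as green is an assumption the description above leaves open:

```python
def score_forecast(called: float, actual: float) -> str:
    """Score a weekly forecast call against the actual result.

    Bands per the system described above:
      red    — actual below 90% of the call (a miss)
      yellow — actual above 120% of the call (a sandbag)
      green  — anything in between (accurate, within band)
    Treating the band edges as green is an assumption.
    """
    if called <= 0:
        raise ValueError("forecast call must be positive")
    ratio = actual / called
    if ratio < 0.9:
        return "red"
    if ratio > 1.2:
        return "yellow"
    return "green"
```

Run against the example later in this piece: a rep who calls $400,000 and closes $600,000 scores yellow, not green — the upside was real, but the business could not plan for it.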
Yellow deserves its own mention because most leaders celebrate a rep who beats their number. The instinct is understandable. It is also wrong.
A rep who consistently calls $400,000 and closes $600,000 is not a hero. They are a reliability problem — and the damage runs further up the organisation than most sales leaders appreciate. The business could not plan for that revenue. Marketing did not support the close. Customer success was not staffed for the onboarding. Headcount was not approved because the forecast did not justify it. And the CFO, presenting to the board or to investors, built their model on what the sales team said — which turned out to be wrong in the optimistic direction.
Beating forecast consistently is not a neutral event at the executive level. Boards and investors do not simply celebrate the upside surprise. They ask why the organisation did not understand its own business well enough to call it accurately. They question whether leadership has real visibility or whether the numbers they receive are managed rather than honest. In publicly traded companies, forecast misses in either direction move share price — the market punishes a miss below expectation, but it also questions the credibility of a management team that systematically under-calls. The sandbagger who feels safe because they always beat their number is obscuring signal the entire organisation needs to function well. A sandbagging culture, left unchallenged, produces a forecast that nobody trusts and a planning process built on comfortable fiction.
The green boxes are tracked over time, by rep, on a simple stack rank. Who has the most green boxes over the last eight weeks? Twelve weeks? The stack rank is visible to the team. Reps compete for green boxes the way they compete for revenue — because green boxes represent something real: accuracy, credibility, and the trust of the organisation. The leader who calls their number correctly, week after week, is not just a good forecaster. They are someone whose judgment can be relied upon. That matters for their career in ways a single strong quarter never will.
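The trailing stack rank is equally simple to maintain. A sketch of one way to compute it — the eight-week default window and the alphabetical tie-break are my assumptions, not part of the system described above:

```python
from collections import Counter
from typing import Dict, List, Tuple

def green_box_stack_rank(
    history: Dict[str, List[str]], window: int = 8
) -> List[Tuple[str, int]]:
    """Rank reps by green boxes over the trailing window of weeks.

    history maps each rep to their weekly scores, oldest first,
    e.g. {"Ana": ["green", "red", "green"], ...}.
    """
    counts = {
        rep: Counter(scores[-window:])["green"]
        for rep, scores in history.items()
    }
    # Most green boxes first; alphabetical order as a stable tie-break.
    return sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))
```

The point of keeping it this simple is that the rank is indisputable: every rep can recompute their own position from the weekly scores they already know.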
Gamification research supports what most experienced leaders already know intuitively: visible competition around the right metrics changes behaviour. A Forbes study of 100 sales directors found that 90% observed a positive impact on sales performance after implementing gamification, with 95% reporting improvements in team culture. The evidence in sales environments is consistent: when the right metrics are made visible and competitive, behaviour follows. The green box competition is not a distraction from revenue. It is a mechanism that teaches the habits that produce revenue reliably over time.
I first built this system with the team I wrote about in the previous post — the one where every single person made President's Club in the same year. The green box competition was part of what made that team extraordinary, but what I did not anticipate was what the forecast calls would become. They started as a management tool. They became something the team genuinely looked forward to.
The format was simple. The call opened the same way every week: how did you finish last week — green, yellow, or red? What did you learn? What are you calling this week? But it was done as a team, not rep by rep with a manager taking notes. Everyone heard everyone else's call. Everyone saw the stack rank update in real time. And something happened that I did not engineer: they started coaching each other. When someone was red, a teammate would ask the question before I did. When someone was yellow two weeks running, the team called it out with the same warmth they would have used to say "we need your real number." When someone flagged a risk and asked if anyone could cover, three hands went up before the question was finished.
Eventually the calls attracted people from outside the sales team. Inside sales came. Marketing came. Sales engineers, product specialists, people from other functions who wanted to be in the room where the business was being run honestly and enthusiastically at the same time. I do not know of many forecast calls that people from outside the team voluntarily attend. These ones filled up. Not because the agenda was different. Because the culture was.
I still miss those calls.
The follow-up: the most important part
Everything described so far — the weekly cadence, the green-yellow-red scoring, the stack rank — is infrastructure. Infrastructure matters. But infrastructure without follow-up is decoration.
The most critical function in the entire forecasting system is what happens in the one-on-one after a miss. Not the forecast call. The follow-up conversation. And it applies equally to both kinds of miss: the rep who came in red, and the rep who came in yellow. Both missed. Both owe an explanation. Both present a coaching opportunity.
The question is always the same: what happened to create the miss, and what do we do differently next week to prevent it?
This question is not punitive. It is genuinely curious. The answer almost always reveals something specific about the deal or the process. The buyer went quiet because the rep had not confirmed the economic buyer's involvement. The deal pushed because the close date was based on the rep's hope rather than a documented compelling event. The sandbagger did not call the deal because they did not trust their own read of where the buyer was — which means either the discovery was incomplete or the qualification framework was not being applied.
In almost every case, a forecasting miss traces back to a gap in process — specifically, a gap in MEDDPICC or whatever qualification framework the team uses. The rep who cannot accurately call their deal is the rep who does not fully understand their deal. The forecast teaches this in a way that no standalone training session ever can, because the lesson arrives in the context of a real pursuit with real consequences. The rep who misses their call twice in a row on the same kind of deal — a deal where the champion has not been validated, or where the timeline is assumed rather than confirmed — learns something in those two weeks that will change how they qualify pursuits for the rest of their career.
This is why rewarding early warning is as important as rewarding accuracy. The rep who comes to the forecast call and says "I need to flag — the deal I called for this week is at risk, can anyone help cover?" is doing something courageous in a commission-driven culture. That rep is putting the team above their own pride. That rep should be celebrated, not quietly noted. When early warning becomes normal — when the team's response to "I'm going to miss" is "who has something they can pull forward?" rather than silence or judgment — you have built something rare and genuinely useful.
Three meetings, three instruments
By this point in the argument, some leaders are calculating the calendar cost. A weekly funnel review. A weekly deal inspection. A weekly forecast call. That is three recurring meetings, per rep, per week. It sounds like bureaucracy dressed up as rigour.
It is not. Each meeting is measuring something different, and the three together give you a complete picture that no single meeting can provide.
The funnel review is the health of the business. Pipeline constitution, coverage ratios, stage distribution, hygiene. Are we building enough? Is what we are building the right shape?
The deal inspection is the strategy of the pursuit. This specific deal — where are we positioned, what is missing, what is the path to failure? The outside view that the rep inside their deal cannot see themselves.
The forecast call is accountability. What are you calling this week, and what is that call based on? The commitment to a number creates the discipline that makes the other two conversations honest.
Three instruments. Each calibrated differently. A leader who runs all three consistently, week over week, will know their sellers more thoroughly than any annual review process could reveal. They will see patterns — in how deals are qualified, in how risks are managed, in how people respond to pressure and ambiguity. The three meetings do not create bureaucracy. They create visibility. And visibility is the precondition for everything else this series has argued for: accurate coaching, honest culture, and a team that compounds quarter after quarter because the habits are embedded and the standards are clear.
What the data eventually tells you
The three forecast personalities — the chronic optimist, the chronic sandbagger, and the true number-hitter — do not need to be diagnosed. They reveal themselves through the green box stack rank over time.
The optimist is the rep whose red boxes accumulate in patterns. Not random misses, but consistent overestimation on the same types of deals — large enterprise opportunities, late-stage deals without a confirmed economic buyer, pursuits where the rep's enthusiasm for the opportunity has outrun the evidence. The pattern tells you exactly where to focus the coaching.
The sandbagger is the rep with consistent yellow boxes. Always beating the number. Never calling the full picture. The coaching conversation here is about trust — specifically, the rep's trust that accurate forecasting will not be used against them. Sandbagging is almost always a rational response to an environment where beating the number is celebrated and hitting it is treated as mediocre. Change the reward structure and the behaviour follows.
The number-hitter earns green boxes week after week not through luck but through discipline. They know their deals because they have done the discovery. They call what they see because they trust their read of the buyer. They flag risk early because they know the team will help rather than judge. These are the reps who, given the right environment, become the managers and leaders who replicate the standard in the next generation. You cannot sustain accidental success. The number-hitter is not accidentally accurate. They have built the habits that forecasting, done well, is specifically designed to reinforce.
Forecasting is not a report. It is a discipline. Run it weekly, track it honestly, follow up on every miss — and over time it will teach you more about your team, your deals, and your business than any other single management practice you have.
Sources
1. Gong, Revenue Forecasting Report (January 2024) — survey of 2,015 business leaders; more than 80% of companies missed their sales forecast in at least one quarter over the last two years.
2. Xactly, 2024 State of Sales Forecasting Benchmark Report — survey of 400 finance, sales, and RevOps professionals; 4 in 5 leaders missed a quarterly forecast in the past year; over half missed it two or more times.
3. Challenger Inc., poll of sales leaders (January 2024) — fewer than 20% of sales leaders rate their forecasting as predictable.
4. SiriusDecisions, as cited in Forecastio — 79% of sales organisations miss their forecast by more than 10%.
5. Sales Management Association, as cited in Forecastio — companies with accurate sales forecasts are 7.3% more likely to hit quota.
6. Forbes, survey of 100 US sales directors — 90% observed positive sales impact after implementing gamification; 95% reported improved team culture and camaraderie.
Andrew Devlin is the founder of ScaleTech CRO Ltd. and a fractional VP of Sales working with B2B companies between $10M and $100M. He has led sales organisations at Cisco, Splunk, and Cloudflare, and holds a Certified Advisor and President's Circle designation with Sales Xceleration. He teaches B2B Sales at Okanagan College.