Most ecommerce forecasts are written from the top down. Someone takes last year's revenue, applies a growth rate that was set in a strategy off-site, and divides it across months using a seasonality curve. The result has a spreadsheet and a chart and zero defensibility. The first time the assumption breaks — paid efficiency drops, a campaign moves a quarter, traffic seasonality shifts — the forecast becomes a fiction the team is still being measured against.

A driver-based model is the opposite shape. You build revenue from its primitives — sessions, conversion rate, AOV — and explicitly model the dependencies between them and the actions you're going to take. When a driver moves, the model reruns. When leadership asks "what if we cut paid spend 20%?", you can answer in two minutes with a number, a confidence range, and the second-order effects on contribution margin. That's the difference between a forecast that drives decisions and one that survives them.

Why top-down forecasts always break

Top-down forecasting works in stable, mature markets. Ecommerce is neither: paid efficiency moves quarter to quarter, campaigns shift timing, and traffic seasonality drifts, so a single growth rate smeared across a seasonality curve breaks the first time any of those assumptions move.

The driver tree, end to end

The structure that holds up in practice — every operator should be running it:

Revenue = Σ (Sessions by channel × CVR by channel × AOV by channel)

Below that, sessions decompose by source: paid sessions come from spend run through a channel elasticity curve (more on elasticity below), while organic and email sessions get their own baseline-plus-trend assumptions.

And contribution margin:

CM = Revenue × (1 − variable cost rate) − incremental media spend − fixed program costs

Where variable cost rate is decomposed into product COGS, discount take, return rate (loaded), fulfillment + payment. (See the P&L framework post for the full tree.)

The discipline is that every cell in the model is a calculated number from inputs you can defend. No fudge factors, no "growth assumptions," no top-down overrides. If the model produces a number you don't believe, you find the input that's wrong, not the cell.
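The driver tree above can be sketched as a small calculation. Every input here is a hypothetical placeholder, not a benchmark; the point is the shape: revenue built channel by channel, and contribution margin derived from the decomposed variable cost rate, with no fudge factors anywhere.

```python
# Sketch of the driver tree: Revenue = sum(sessions x CVR x AOV) by channel,
# CM = Revenue x (1 - variable cost rate) - incremental media - fixed program costs.
# All inputs are hypothetical placeholders, not benchmarks.

channels = {
    "paid":    {"sessions": 900_000, "cvr": 0.016, "aov": 88.0},
    "organic": {"sessions": 600_000, "cvr": 0.028, "aov": 95.0},
    "email":   {"sessions": 250_000, "cvr": 0.045, "aov": 102.0},
}

# Variable cost rate decomposed as in the text:
# product COGS + discount take + returns (loaded) + fulfillment/payment.
variable_cost_rate = 0.32 + 0.11 + 0.06 + 0.12
incremental_media_spend = 450_000.0
fixed_program_costs = 120_000.0

revenue = sum(c["sessions"] * c["cvr"] * c["aov"] for c in channels.values())
contribution_margin = (
    revenue * (1 - variable_cost_rate)
    - incremental_media_spend
    - fixed_program_costs
)

print(f"Revenue: ${revenue:,.0f}")
print(f"Contribution margin: ${contribution_margin:,.0f}")
```

Every cell traces back to a named input, so when the output looks wrong, the argument is about a driver, not a formula.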

Elasticity: the assumption that does the work

The single highest-leverage assumption in any driver-based model is media elasticity — how new sessions and new customers change as you move spend. Operators routinely assume linear scaling ("if we double spend, we double customers") and the model spits out plans that fall apart in execution.

The honest version: paid media has diminishing returns. The shape varies by channel and by brand maturity, but a defensible starting point is a power function:

New customers = a × Spend^b, where b ≈ 0.6–0.85

What that means in practice: if b = 0.75, doubling spend produces 1.68x customers, not 2x. Tripling produces 2.28x. Quadrupling produces 2.83x. The forecast that assumed linear scaling overstates customers by 16% at 2x spend, 24% at 3x, and 29% at 4x. Those are huge errors when stacked up.

You don't need to know your exact elasticity to build the model. You need to express the assumption, then validate it. Three ways:

  1. Geo-holdout history. If you've ever paused or scaled spend in test regions, the lift differential gives you a rough elasticity by channel.
  2. Spend-step regression. Plot weekly spend vs. weekly new customers for the last 18 months. Fit a power curve. The exponent is your elasticity.
  3. Diminishing-returns audit on impressions. If frequency on Meta is climbing while spend grows but unique reach isn't, you're at the elasticity ceiling on that channel and need diversification, not more budget.

Three scenarios, not one

Single-number forecasts give a false sense of certainty. Three-scenario forecasting (plan, upside, downside) forces the team to articulate the band.

Each scenario specifies which drivers move and by how much. Not "everything is 10% worse" — that's a vibe, not a model. "Paid efficiency drops 8% on Meta because of iOS attribution drift; organic flat; email +5% because of new welcome flow" — that's a scenario.

Worked example: Q4 holiday plan, three scenarios

A brand running ~$500K/month base in Q3 plans Q4 with seasonal lift, a paid push, and a new welcome flow. Three scenarios from the same model:

Driver                     Plan      Upside    Downside
Paid spend (Q4)            $1.4M     $1.4M     $1.4M
Paid CPM YoY               +18%      +12%      +30%
Paid sessions              2.6M      2.8M      2.3M
Site CVR (blended)         2.40%     2.55%     2.20%
AOV                        $92       $96       $88
Q4 revenue                 $2.95M    $3.45M    $2.40M
Discount take              11%       10%       14%
Q4 contribution margin     $0.55M    $0.78M    $0.32M

The plan-to-downside contribution margin gap is more than 40%, far wider than the 18% revenue gap. That's the operating reality a single-number forecast misses: it doesn't take a dramatic miss on any one driver to break the year; moderate misses on several drivers compound into one, with no single driver looking alarming on its own.

This kind of model also tells you what to do if you start hitting downside drivers. If paid CPM comes in at +30% in October, you have a clear contingency: pull spend from low-elasticity channels, push organic and email harder, and reset the discount cap before the model erodes another point of CM.
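The mechanics can be sketched as one model run against three driver sets. This is a deliberate paid-only simplification with an assumed base variable cost rate, so the outputs won't match the table above; what it demonstrates is the structure (each scenario names which drivers move) and the leverage effect (the CM band is much wider than the revenue band).

```python
# One model, three driver sets. Paid-only simplification with an assumed
# 50% base variable cost rate; numbers are illustrative, not the table's.

def q4_outcome(sessions, cvr, aov, discount_take, spend, base_var_rate=0.50):
    """Revenue and contribution margin for one scenario."""
    revenue = sessions * cvr * aov
    cm = revenue * (1 - (base_var_rate + discount_take)) - spend
    return revenue, cm

scenarios = {
    "plan":     dict(sessions=2_600_000, cvr=0.0240, aov=92, discount_take=0.11, spend=1_400_000),
    "upside":   dict(sessions=2_800_000, cvr=0.0255, aov=96, discount_take=0.10, spend=1_400_000),
    "downside": dict(sessions=2_300_000, cvr=0.0220, aov=88, discount_take=0.14, spend=1_400_000),
}

for name, drivers in scenarios.items():
    rev, cm = q4_outcome(**drivers)
    print(f"{name:>8}: revenue ${rev/1e6:.2f}M, CM ${cm/1e6:.2f}M")
```

Because every scenario shares the same function, a mid-quarter reforecast is a dictionary edit, not a spreadsheet rebuild.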

Operator's note

The fastest test of whether your forecast is driver-based: ask the question "what changes if paid CPM is up 20% YoY?" If the team can rerun the forecast in under 10 minutes with a documented assumption change, you have a model. If they need a week or have to "rework the spreadsheet," you have a top-down chart with extra steps.

The reforecast cadence

A driver-based model is not a one-time exercise. It's an operating tool that gets reforecast on a known cadence.

Each refresh produces an updated three-scenario forecast. Leadership sees the band, not the false-precision number.

Watch-outs

Don't double-count growth. A common error: assume CVR will rise 10% from CRO improvements and AOV will rise 10% from bundles and repeat rate will rise 10% from email — and stack them all. In reality, those gains often overlap; some buyers would have gotten there without one of the levers. Discount the stack by 20–30% to avoid building a forecast that requires every lever to fire.
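The overlap haircut above is simple arithmetic worth making explicit: three stacked +10% lifts compound to +33.1%, not +30%, and a 25% overlap discount (the midpoint of the 20-30% range) brings the plannable lift back to roughly +25%.

```python
# Stacking three +10% lifts multiplicatively, then discounting the
# combined lift for overlap between the levers (CVR, AOV, repeat rate).

lifts = [0.10, 0.10, 0.10]
naive_stack = 1.0
for lift in lifts:
    naive_stack *= 1 + lift
naive_lift = naive_stack - 1          # 33.1% if every lever fires cleanly

overlap_discount = 0.25               # midpoint of the 20-30% haircut
planned_lift = naive_lift * (1 - overlap_discount)

print(f"naive stacked lift:  {naive_lift:.1%}")
print(f"discounted for plan: {planned_lift:.1%}")
```

The discounted number is the one that goes in the plan; the naive one is the one that requires every lever to fire.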

Don't forget the cost side scales nonlinearly too. Adding 30% to revenue rarely adds 30% to fulfillment cost — there's a fixed-cost portion that gives you operating leverage. But it usually adds more than 30% to certain costs (customer service, returns) on a heavy paid push, because paid-acquired orders generate more tickets and come back more often.
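Both directions of that nonlinearity fit in a few lines. The fixed components and rates here are hypothetical, chosen only to show the shape: a fixed-cost portion makes fulfillment grow slower than revenue, while a rising blended return rate on a paid push makes return cost grow much faster.

```python
# Cost-side nonlinearity sketch. Fixed components and all rates are
# hypothetical, chosen to illustrate the two directions of the effect.

def fulfillment_cost(revenue, fixed=200_000, var_rate=0.10):
    # Fixed warehouse/program cost plus a variable per-revenue rate.
    return fixed + revenue * var_rate

base_rev, pushed_rev = 3_000_000, 3_900_000     # +30% revenue
base_cost = fulfillment_cost(base_rev)
pushed_cost = fulfillment_cost(pushed_rev)
print(f"revenue +30%, fulfillment +{pushed_cost / base_cost - 1:.0%}")

# Returns scale faster: the blended return rate rises with the paid mix.
base_returns = base_rev * 0.06
pushed_returns = pushed_rev * 0.08
print(f"revenue +30%, return cost +{pushed_returns / base_returns - 1:.0%}")
```

A model that applies one flat cost rate to every scenario misses both effects and gets the downside CM wrong.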

Don't cling to a broken plan. If actuals diverge from plan by 10%+ for two consecutive months on a key driver, the plan is wrong, not reality. Reforecast and re-anchor compensation/budgets to the new plan, not the old one.

Don't hide the assumptions. Every driver in the model should have a one-line note documenting where the assumption came from. "Paid CVR 1.6% based on 2024 H2 actual + 15bps from prospecting LP test." When the assumption is wrong, you'll know which assumption to revisit instead of arguing about the output.


The reason most ecommerce forecasts feel like guesses is that they're built top-down and tuned to a target rather than built bottom-up from the drivers that actually move the business. The fix is structural, not analytical — the team doesn't need a better data scientist, they need a better model architecture. Build the driver tree, document the elasticity assumptions, run three scenarios, and reforecast on cadence. The output is no longer a number; it's a set of conditional plans that survive contact with reality.