Why (Useful) Forecasting Is Hard

And why sensitivity analysis alone isn’t enough.

I was recently talking to our forecasting team about some of the challenges that brands doing their own in-house forecasting were running into.

One of the topics that came up was sensitivity analysis and the (often misunderstood) role it plays in effective financial modeling and forecasting.

This powerful technique can guide our forecasting, but applied incorrectly it can just as easily mislead us. So how do we know when to use it, and more importantly, how to use it well?

What Is Sensitivity Analysis (And How It Is Used in Forecasting)

Sensitivity analysis measures how changes in one variable affect the outcome of a model, while all other variables are held constant.

For example: Let’s say you wanted to understand how New Customer Contribution Margin (NC-CM) is affected by changes in aMER. You could build a model that shows how every unit of change in aMER impacts NC-CM. In other words, if your new customer acquisition efficiency improves, how much does NC-CM improve?

This is actually a very useful exercise. It helps quantify the risk to your business if acquisition efficiency drops — or identify upside scenarios if performance improves.
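As a minimal sketch of what a one-way sensitivity table looks like in practice, here is a toy Python model. Every definition here is a simplifying assumption for illustration only: aMER is treated as new-customer revenue divided by acquisition spend, NC-CM as new-customer revenue minus COGS minus acquisition spend, and COGS as a flat percentage of revenue. Real models will be richer than this.

```python
def nc_cm(amer: float, spend: float, cogs_rate: float) -> float:
    """New Customer Contribution Margin under simplified, illustrative assumptions.

    aMER is modeled as new-customer revenue / acquisition spend,
    so revenue = amer * spend. COGS is a flat fraction of revenue.
    """
    revenue = amer * spend
    return revenue - revenue * cogs_rate - spend

# One-way sensitivity: vary aMER, hold spend and COGS rate constant.
for amer in (2.0, 2.5, 3.0, 3.5):
    print(f"aMER {amer:.1f} -> NC-CM ${nc_cm(amer, spend=100_000, cogs_rate=0.40):,.0f}")
```

The table this prints is exactly the "every unit of change in aMER" view described above: one input moves, everything else is frozen.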

But this method has a serious blind spot.

The Problem: Real-World Variables Are Not Fixed

Traditional "one-way" sensitivity analysis relies on one critical (and often flawed) assumption:

That every other variable in the model stays constant.

In theory, that makes sense. But in practice, it doesn’t hold up.

Let’s go back to our example. You’re modeling CM against aMER. But aMER itself is not an isolated input — it’s likely a function of several other metrics:

  • Discount Rate: Are you running an offer that boosts conversion but hurts margin?
  • AOV: Higher or lower average order value changes your blended efficiency.
  • COGS: Product mix can change your unit economics, especially if the shift in AOV comes from different products being offered or merchandised on-site.

All of these affect both aMER and contribution margin. So when you change one, the others likely shift too — sometimes in reinforcing or offsetting ways.

The result? You end up treating dependent variables as independent — and the model starts to break.
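A toy example of this coupling, with every number and elasticity invented for illustration: suppose a deeper discount lifts conversion, but revenue is booked net of the discount while COGS scales with gross sales. Then aMER and contribution margin move together, and not always in the same direction.

```python
def coupled_outcome(discount: float, spend: float = 100_000) -> tuple[float, float]:
    """Toy model where one lever (the discount) moves several inputs at once.

    Assumed, invented elasticity: each point of discount lifts gross revenue
    by twice that fraction. Revenue is booked net of the discount; COGS is
    40% of gross. Both aMER and contribution margin shift together.
    """
    gross = 250_000 * (1 + 2.0 * discount)   # conversion lift from the offer
    net = gross * (1 - discount)             # revenue booked net of discount
    cogs = gross * 0.40                      # product cost scales with units sold
    amer = net / spend                       # new-customer revenue / acquisition spend
    cm = net - cogs - spend                  # contribution margin
    return amer, cm

for d in (0.0, 0.10, 0.20):
    amer, cm = coupled_outcome(d)
    print(f"discount {d:.0%}: aMER {amer:.2f}, CM ${cm:,.0f}")
```

In this invented example, moving the discount from 10% to 20% improves aMER while contribution margin falls. A one-way table that varied aMER on its own would miss that divergence entirely.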

Why Single-Variable Forecasting Fails

When your model assumes only one thing moves, it creates blind spots:

  • You model the relationship between aMER and CM, assuming a roughly linear relationship between efficiency and Contribution Margin.
  • In reality, the change in aMER may be the result of other factors shifting in your model: seasonal trends, a different product mix (and therefore a different AOV), a different offer and discount rate, and so on.
  • So while on paper only one variable is changing, in the real world these variables rarely move in isolation.

So with all of these potentially dependent variables being treated as independent, the question becomes: which variables should you hold constant? And which should you adjust as you adjust your aMER?

Are any of your other variables truly fixed in a model that assumes a change in the underlying aMER premise?

This is where using sensitivity analysis alone tends to break in practice. But that doesn’t mean it should be abandoned. Instead, it requires us to go a step further in our analysis by trying to understand the true relationship between these variables.

The Alternative: Scenario-Based Forecasting

So what does all this mean for sensitivity analysis? Do we abandon it in our financial modeling? 

Not at all.

The reality is, sensitivity analysis is still a very useful tool – but it’s only one of many in our toolkit. 

Combined with multi-scenario analysis, sensitivity can highlight the impact of changing variables inside our financial models. Brands should start by building a base case, then create best- and worst-case variations based on plausible changes to the underlying assumptions.

Teams can adjust multiple variables at once – using statistical measures of correlation between variables, or an intuitive understanding of their unit economics and business cycle – to model changes in outcomes, then overlay sensitivity analysis on top to show how a narrow set of changes affects each scenario's results.

1. Start with a Base Case

Model your most realistic expectations — your operating plan. This should reflect known inputs like your offer strategy, ad spend, discount rate, returns, NC-ROAS (New Customer Return On Ad Spend), etc.

2. Create Best- and Worst-Case Scenarios

Model plausible variations around that base. These shouldn’t just toggle one input. They should reflect interrelated shifts — maybe Q4 demand surges, so your CAC improves but COGS rises due to rush fees.

3. Use Sensitivity as a Layer, Not a Backbone

Once your base, best, and worst scenarios are built, use sensitivity analysis to test specific risk factors. Example: In your worst case, if CAC increases 15%, what happens to cash flow?

This layered approach allows for nuance. You don’t treat variables as isolated — you treat them as dynamic parts of a system.
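The three steps above can be sketched in a few lines, with every number invented for illustration: each scenario shifts several inputs together (revenue, COGS rate, CAC, new-customer count), and sensitivity is applied afterwards as a stress test on one factor within a scenario.

```python
# Hypothetical scenario set: each case moves several inputs together,
# rather than toggling one variable in isolation. All figures invented.
scenarios = {
    "base":  {"revenue": 1_000_000, "cogs_rate": 0.40, "cac": 45.0, "new_customers": 9_000},
    # Q4 surge: CAC improves, but COGS rate rises due to rush fees.
    "best":  {"revenue": 1_200_000, "cogs_rate": 0.42, "cac": 40.0, "new_customers": 12_000},
    "worst": {"revenue":   850_000, "cogs_rate": 0.40, "cac": 55.0, "new_customers": 6_500},
}

def contribution_margin(s: dict) -> float:
    """Revenue minus COGS minus acquisition spend (CAC x new customers)."""
    return s["revenue"] * (1 - s["cogs_rate"]) - s["cac"] * s["new_customers"]

# Sensitivity as a layer, not a backbone: within the worst case, stress CAC +15%.
worst = dict(scenarios["worst"])
worst["cac"] *= 1.15
print(f"worst-case CM:          ${contribution_margin(scenarios['worst']):,.0f}")
print(f"worst-case CM, CAC+15%: ${contribution_margin(worst):,.0f}")
```

Note that the worst case already bakes in correlated shifts (lower revenue alongside higher CAC); the CAC stress is layered on top of that scenario rather than substituted for it.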

Better Strategy Relies On Better Forecasting

Forecasting isn’t about perfection. It’s about clarity. About understanding the boundaries of what could happen and what decisions are most resilient across that range.

Sensitivity analysis can show you where your model breaks. Scenario analysis can show you how your business bends.

Put them together, and you have a much more honest, robust way to plan.

In a world where nothing moves alone, your forecasts shouldn’t either.
