1. Overview

This document explains why simulation results can differ from those in the Business Insights (BI) module, even for the same time period. These differences arise because simulation and BI use different methods and assumptions.


2. Understanding the terms involved

A simulation is a predictive estimate of performance under a specific set of conditions. 
The Business Insights module, in contrast, reflects fitted historical performance using actual past data.  
A simulation includes:
  • A baseline, which represents the expected outcome if no changes are made.  
  • A scenario, where you can modify drivers such as media spend, pricing, or competitor activity.  
The simulation baseline is not a reflection of “what happened in the past” but rather a prediction of “what will happen in the future if conditions mirror the past.”  It incorporates forecasted inputs and structural assumptions, which differ from actuals.  


Also note that a forecast is a data-driven projection of future values, used when actual data is not provided. Forecasts are generated per variable using historical trends and domain-specific logic.
In brief:

Feature        | Business Insights (BI)               | Simulation
---------------|--------------------------------------|---------------------------------------------
What it shows  | What happened (historical outcomes)  | What could happen under certain assumptions
Input data     | Uses actual recorded data            | Uses a mix of actuals and forecasts
Use case       | Accurate historical reporting        | Scenario testing and future planning


3. Why Would Results Be Different?

When running a simulation for a period that’s already covered in the Business Insights (BI) module (e.g. January–March 2024), you might notice that the results don’t match exactly. 
Here are the main reasons why simulation results differ from BI, explained in depth and with examples.


A. Forecasted Inputs in Simulations

Simulations often rely on predicted values for certain inputs, especially if those values aren’t manually entered. This is different from the Business Insights (BI) module, which only uses actual historical data.


Common values that are forecasted:

  • Media reach efficiency (cost per impression): how many people your ads reach for the money you spend (technically known as the To-net factor)
  • Competitor spend: if not provided, the system forecasts it based on prior trends
  • Pricing and promotion levels
  • Brand strength or reputation scores
  • External factors, such as inflation or consumer sentiment


Why this causes a difference:

These forecasts are generated by internal algorithms designed for consistency and future-readiness, not to perfectly replicate history.
Consider an example. The system estimates how many impressions your media spend will generate using a media cost-per-impression rate.


Media impressions = Media cost ÷ Cost per impression
If this rate is forecasted too high, the system assumes your ads reached fewer people, even if your media cost was the same as before.
Let's say you spent the same amount on media in January 2025 as you did historically: $10,000. But in the simulation, the system uses a forecasted cost per impression (also known as a "media efficiency rate") that is higher than what actually happened in the past.


  • Historical cost per impression: $0.0024, giving about 4.17 million impressions ($10,000 ÷ $0.0024)
  • Simulated cost per impression: $0.0042 (about 75% higher), giving only about 2.38 million impressions ($10,000 ÷ $0.0042)




So even though you spent the same $10,000, the system assumes your ads reached fewer people in the simulation, just because of a higher forecasted cost per impression.
That’s why your observed media effect may look lower than what you see in Business Insights.
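The arithmetic above can be sketched in a few lines of Python. This is an illustration of the division only; the function name and the two rates are taken from the example, not from the platform's actual code.

```python
# Illustrative sketch: how a forecasted cost-per-impression rate changes
# estimated impressions for the same media spend.

def estimated_impressions(media_cost, cost_per_impression):
    """Media impressions = media cost / cost per impression."""
    return media_cost / cost_per_impression

spend = 10_000.00

historical_rate = 0.0024   # actual rate (what BI reflects)
forecasted_rate = 0.0042   # forecasted rate used by the simulation

hist = estimated_impressions(spend, historical_rate)
sim = estimated_impressions(spend, forecasted_rate)

print(f"Historical: {hist:,.0f} impressions")  # about 4.17 million
print(f"Simulated:  {sim:,.0f} impressions")   # about 2.38 million
```

Same spend, different rate, fewer estimated impressions: that gap flows directly into the media effect the simulation reports.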


B. Padding and Carryover Effects

Simulations are designed to reflect how real-world marketing activities have effects that linger over time. To capture this behavior, simulations include extra time around your selected date range, known as padding.


What is Padding?

  •  The system typically adds about one month of data before and after the simulation period.
  •  If actual data is not available for those extra weeks, the system fills them with forecasted values or zeroes.
  •  These extra inputs can still influence results inside your simulation period due to carryover effects—such as media lag, brand momentum, or competitor pressure.
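The padding steps above can be sketched as follows. The function names, the four-week approximation of "one month," and the zero-fill fallback are illustrative assumptions, not the platform's actual implementation.

```python
# Illustrative sketch of padding logic (assumed, not the platform's code):
# extend the simulation window on each side, then fill padded weeks that
# lack actual data with forecasts, falling back to zero.

from datetime import date, timedelta

def pad_window(start, end, pad_weeks=4):
    """Return the padded (start, end) range used internally. 4 weeks ~ 1 month."""
    pad = timedelta(weeks=pad_weeks)
    return start - pad, end + pad

def fill_padded_weeks(weeks, actuals, forecast):
    """For each week, prefer actual data, then a forecast, then zero."""
    return {w: actuals.get(w, forecast.get(w, 0.0)) for w in weeks}

padded_start, padded_end = pad_window(date(2025, 1, 6), date(2026, 1, 5))
print(padded_start, padded_end)  # window extended by four weeks on each side
```

The key point is the fallback chain in fill_padded_weeks: whenever actuals are missing in the padded region, forecasted or zero values enter the model, and carryover can pull their influence into the simulation window.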


Let us take an example: Competitor Spend


In the Input chart, we see how competitor spend is treated:
  • The simulated dataset includes forecasted values (flat or zero) in the shaded padding areas.  
  • The historical dataset contains actual observed data, even in the same padded regions.  
Within the simulation window (Jan 2025 to Jan 2026), both datasets align, but outside this period the difference is clear: historical data continues with real values, while simulated data drops or flattens based on padding logic.


How does this impact the effect calculation?



In the Output chart, we see the observed effect of competitor spend:
  • There’s a slight difference in the first few weeks of Jan 2025.  
  • This small variance is caused by carryover effects: the lingering impact of competitor spend from the padded period just before the simulation starts.
While the difference here is subtle, for variables with stronger carryover (like media), this can result in more noticeable shifts between BI and simulation outputs.
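Carryover can be illustrated with a simple geometric-decay (adstock) transform, a common modelling convention for lingering media effects. The transform and the 0.5 decay rate here are assumptions for the example, not necessarily the exact form or parameters the platform uses.

```python
# Illustrative geometric adstock (carryover) transform. The 0.5 decay rate
# is an assumption for this example, not a platform parameter.

def adstock(spend_series, decay=0.5):
    """Each week's pressure = this week's spend + decay * last week's pressure."""
    carried, out = 0.0, []
    for spend in spend_series:
        carried = spend + decay * carried
        out.append(carried)
    return out

# Suppose the padded week just before the window had real spend historically,
# but was zero-filled in the simulated dataset:
historical = adstock([100.0, 0.0, 0.0])  # actual spend in the padded week
simulated = adstock([0.0, 0.0, 0.0])     # padding filled with zeroes

print(historical)  # [100.0, 50.0, 25.0]  -- padded spend leaks into later weeks
print(simulated)   # [0.0, 0.0, 0.0]
```

Even though both series are identical inside the window, the padded-week difference keeps influencing the first in-window weeks, which is exactly the variance described above.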


C. Date Shifting and Weekly Alignment

The model uses a consistent week structure (e.g. W01-W52) across years. However, calendar week start dates differ across years, which affects alignment.


Why it matters:

  • In BI, Week 1 of 2024 starts on 1 Jan 2024. 
  • In simulations, Week 1 of 2025 starts on 30 Dec 2024 (to maintain seasonality). 
  • The model remaps historical weeks to preserve seasonal structure, not exact calendar dates.  


Result:

You might compare BI data for Jan 1-7 with simulated data for Dec 30-Jan 5. Slightly different date ranges can produce different results.
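The shifting week boundaries can be demonstrated with Python's ISO-week calendar, which reproduces the dates quoted above. This illustrates why week 1 starts on a different calendar date each year; it is not the model's actual calendar logic.

```python
# Illustrative: ISO week 1 starts on a different calendar date each year,
# which is why "the same week" maps to slightly different date ranges.

from datetime import date

def week_start(year, week):
    """Monday of the given ISO week."""
    return date.fromisocalendar(year, week, 1)

print(week_start(2024, 1))  # 2024-01-01
print(week_start(2025, 1))  # 2024-12-30
```

So a model that aligns "Week 1" across years is comparing Jan 1-7 in one year against Dec 30-Jan 5 in the next, preserving seasonality at the cost of exact calendar dates.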


4. Final Takeaway

Simulation Baseline = Forecasted scenario
Business Insights = Historical reality
The simulation module is not designed to re-create the past. It provides a modelled view of what is likely to happen under the same or adjusted conditions, using intelligent forecasting and scenario logic.


5. Need Help?

If you notice an unexpectedly large discrepancy or want help interpreting simulation results, please contact your platform support or Customer Success team. We're happy to walk through the assumptions and provide data-level clarification.