Marketing incrementality experiments

Our approach is built on one principle: incrementality first. Most data science models lack the concept of causality: ‘Was this customer already going to convert without marketing?’

Use the results together with Marketing Mix Modeling and attribution modeling to capture the true impact of your marketing.

Regional incrementality experiments

Regional incrementality experiments are one of the most widely used ways to measure the true added value of marketing. They’re simple, transparent, and completely free from the tracking challenges that come with digital platforms.

In this setup, your marketing team pauses a channel in one or more specific regions (for example, certain provinces) for a period of 2-4 weeks. During this time, we estimate a baseline: the number of conversions you would have achieved had marketing stayed live in those regions. Because campaigns are paused, you’ll see fewer conversions than the baseline predicts. The gap between actual conversions and the baseline is the true causal impact of marketing.
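
To make the baseline-and-gap calculation concrete, here is a minimal sketch in Python. The file name, column names ("week", "test_conversions", "control_conversions"), and the pause start date are hypothetical placeholders, and the simple regression on the still-live control regions stands in for whichever counterfactual model is actually used.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Minimal sketch: estimate the baseline for paused (test) regions from
# regions where marketing stayed live (control), then compute the gap.
# File name, column names and the pause date below are hypothetical.
df = pd.read_csv("weekly_conversions.csv")  # one row per week

pre = df[df["week"] < "2024-03-01"]     # weeks before the pause
pause = df[df["week"] >= "2024-03-01"]  # weeks with the channel paused

# Learn the historical relationship between control and test regions.
model = LinearRegression().fit(
    pre[["control_conversions"]], pre["test_conversions"]
)

# Baseline: conversions the test regions would likely have achieved
# had marketing stayed live, predicted from the control regions.
baseline = model.predict(pause[["control_conversions"]]).sum()
actual = pause["test_conversions"].sum()

incremental = baseline - actual  # conversions attributable to the channel
print(f"Estimated incremental conversions: {incremental:.0f}")
```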

Beyond direct lift, these experiments also reveal delayed effects and cannibalization.

Platform conversion lift studies

Conversion lift studies are experiments run directly within marketing platforms like Meta, Google Ads, or TikTok. They’re one of the most efficient ways to measure the incremental value of your campaigns. Most platforms require a minimum spend to unlock these studies, but the setup is straightforward and designed to give you actionable insights quickly.

These experiments are fast to deploy, making them ideal for campaign-level optimization. Because they can be run at a smaller scale (for example, per campaign), they’re perfect for frequent testing without slowing execution.

A key advantage is that platforms use ghost ads to simulate exposure for control groups, so you don’t sacrifice revenue as you might with larger holdout-style experiments like regional tests. This makes lift studies a practical way to validate impact continuously, all while staying inside the ecosystem where you spend most of your media budget.
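
To illustrate how the results of such a study translate into decisions, here is a minimal sketch of the lift arithmetic. The group sizes, conversion counts, and spend figure are made-up placeholders; the platforms report these numbers themselves, so this only shows how incremental conversions and cost per incremental conversion follow from them.

```python
# Minimal sketch of reading a conversion lift study result. All numbers
# below are placeholders, not real platform output.
test_users, test_conversions = 500_000, 6_200
control_users, control_conversions = 500_000, 5_400
spend = 40_000.0  # media spend during the study

test_rate = test_conversions / test_users
control_rate = control_conversions / control_users

incremental_conversions = (test_rate - control_rate) * test_users
relative_lift = (test_rate - control_rate) / control_rate
cost_per_incremental = spend / incremental_conversions

print(f"Incremental conversions: {incremental_conversions:.0f}")
print(f"Relative lift: {relative_lift:.1%}")
print(f"Cost per incremental conversion: {cost_per_incremental:.2f}")
```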

User split experiments

Sometimes the cleanest way to measure impact is to experiment within your own audience. User split experiments divide customers into test and control groups, often in channels you fully own, like email marketing or push notifications. This lets you measure the value of every message, promotion, or touchpoint with precision.

Because these experiments run entirely on your own data, they’re highly controlled and immune to the limitations of platform tracking. But they also demand careful design: creating a truly fair split between groups is critical to avoid bias in your results.
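
One common way to get a fair, reproducible split is to hash each user ID into a bucket. Below is a minimal sketch assuming a string user ID and a 50/50 split; the experiment name and test share are placeholders.

```python
import hashlib

# Minimal sketch of a fair, reproducible user split. Hashing the user ID
# together with an experiment name gives a deterministic, effectively random
# assignment that stays stable across sends. The experiment name and the
# 50/50 split are assumptions; adjust them to your own setup.
def assign_group(user_id: str, experiment: str = "email_promo_q3",
                 test_share: float = 0.5) -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return "test" if bucket < test_share else "control"

# Example: assign users and check that the split is roughly even.
users = [f"user_{i}" for i in range(10_000)]
groups = [assign_group(u) for u in users]
print(groups.count("test"), groups.count("control"))
```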

When executed well, user split experiments offer a clear, data-backed understanding of how marketing drives engagement and revenue within your existing customer base.

Spend variance experiments

Spend variance experiments are not designed to deliver a single, clear “test result.” Instead, they are a strategic tool to strengthen your Marketing Mix Model (MMM) and improve the reliability of your long-term decision-making.

In reality, marketing spend is rarely random. Budgets often rise when demand is already high, and drop when the market softens. This makes it hard for models to separate the true effect of marketing from external factors. By deliberately increasing or cutting spend at unexpected times, you create valuable variation that your MMM can use to produce more accurate insights.
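
As an illustration, here is a minimal sketch of how such a variation plan could be drafted. The channels, number of weeks, and multiplier range are assumptions; the only point is that the ups and downs are chosen independently of demand.

```python
import random

# Minimal sketch of planning deliberate spend variation for an MMM.
# Channels, weeks, and the multiplier range below are assumptions; each
# multiplier scales that week's planned budget up or down.
random.seed(42)  # reproducible plan
channels = ["search", "social", "display"]
weeks = 12
multipliers = [0.6, 0.8, 1.0, 1.2, 1.5]

schedule = {
    channel: [random.choice(multipliers) for _ in range(weeks)]
    for channel in channels
}

for channel, plan in schedule.items():
    print(channel, plan)
```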

These carefully designed fluctuations may feel unnatural in the short term, but they pay off in clarity. With stronger model confidence, you can pinpoint where growth truly comes from, allocate budgets with precision, and uncover opportunities that standard campaign reporting would miss.
