Unblack-boxing incrementality: Building a transparent testing process

March 13, 2025

5 min

We keep talking about the importance of un-black-boxing incrementality testing…but what does that actually mean?

For years, marketers have been forced to trust opaque third-party lift studies that provide little to no visibility into their methodologies. These providers dictate the test design, control group selection, and data analysis without letting brands see under the hood. The outcome? Ambiguous, unverifiable results that can't be properly optimized against.

Marketers should demand more control and transparency over how their incrementality tests are run. At Gigi, we’ve built a geo-based incrementality testing framework that provides full visibility into how tests are designed, executed, and analyzed. Here’s how we do it.

DMA creation engine: A smarter approach to test design

One of the biggest issues with traditional incrementality testing is the lack of transparency in how control groups are formed. Many third-party studies fail to define who falls into the control vs. experimental group, leading to biased results and inflated performance metrics.

At Gigi, we solve this with our DMA Creation Engine: a fully automated system that curates control and test groups based on a brand's specific risk profile and parameters. Marketers can define how their DMA groupings are built from a range of inputs, from household inclusions to total sales across both Amazon and first-party (1P) channels (via data collaboration), and can adjust the holdout-vs.-experimental split based on how conservative or aggressive they want the test to be.
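Gigi's actual engine isn't public, but the core idea of balancing a holdout against a test group can be sketched in a few lines. This is a hypothetical illustration, assuming DMA-level baseline sales data and a simple greedy assignment; the function name, inputs, and sample markets are all invented for the example:

```python
def split_dmas(dma_sales, holdout_share=0.3):
    """Assign DMAs to holdout vs. experimental groups.

    dma_sales: dict mapping DMA name -> baseline sales.
    holdout_share: fraction of total baseline sales reserved for the
    holdout (a more conservative test uses a larger holdout).
    Returns (holdout, experimental, actual holdout fraction achieved).
    """
    total = sum(dma_sales.values())
    target = total * holdout_share
    holdout, experimental, holdout_sales = [], [], 0.0
    # Place the largest markets first, greedily filling the holdout
    # until adding another DMA would overshoot the sales target.
    for dma, sales in sorted(dma_sales.items(), key=lambda kv: kv[1], reverse=True):
        if holdout_sales + sales <= target:
            holdout.append(dma)
            holdout_sales += sales
        else:
            experimental.append(dma)
    return holdout, experimental, holdout_sales / total

# Hypothetical baseline sales by DMA (arbitrary units).
dma_sales = {"New York": 120, "Los Angeles": 95, "Chicago": 60,
             "Dallas": 45, "Denver": 30, "Boise": 10}
holdout, experimental, holdout_frac = split_dmas(dma_sales, holdout_share=0.3)
```

A real engine would also balance on household counts, channel mix, and historical trend similarity rather than total sales alone, but the lever is the same: turning the `holdout_share` dial up or down is how "reserved" or "bullish" translates into test design.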

Statistical analysis: The foundation of reliable results

Once an incrementality test is live, analyzing the results correctly is critical. At Gigi, we apply difference-in-differences (DiD) modeling, a rigorous statistical approach drawn from causal inference. DiD verifies that control and experimental groups start at similar sales levels, then compares each group's change over time, so shared external factors like seasonality and market trends are netted out and the true incremental impact of ads is isolated. Confidence intervals and p-values confirm that results are statistically significant rather than random variance. With visibility into this statistical analysis, marketers get a clearer picture of how the test actually performed.
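The DiD logic itself is simple to state: subtract the control group's change over the test window from the experimental group's change, so that any trend both groups share (seasonality, market shifts) cancels out. A minimal sketch, with invented sample numbers (a production analysis would use a regression with confidence intervals rather than raw means):

```python
def did_lift(control_pre, control_post, test_pre, test_post):
    """Difference-in-differences on per-period sales lists.

    The control group's pre-to-post change captures shared external
    trends; subtracting it from the test group's change leaves the
    lift attributable to ad exposure.
    """
    mean = lambda xs: sum(xs) / len(xs)
    control_delta = mean(control_post) - mean(control_pre)
    test_delta = mean(test_post) - mean(test_pre)
    return test_delta - control_delta

# Hypothetical daily sales: both groups start near 100; the market
# drifts up by ~10 on its own, the test group rises by ~25.
lift = did_lift(control_pre=[100, 102, 98], control_post=[110, 108, 112],
                test_pre=[100, 99, 101], test_post=[125, 123, 127])
```

Here the naive pre/post comparison on the test group would claim a lift of 25, but netting out the control group's drift of 10 leaves a true incremental lift of 15, which is the whole point of the method.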

Test outputs: Delivering actionable, transparent insights

To validate media investments, Gigi prioritizes several outputs for brands: incremental return on ad spend (iROAS), revenue lift, incremental lift percentages, and a breakdown of sales by group and channel. With these, brands can understand how their STV ads drive incremental omnichannel performance and get actionable insights to scale their campaigns. For example, a brand running a multi-channel campaign might discover that 70% of its revenue lift came from Amazon and only 30% from its DTC site, helping it shift budgets accordingly.
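These output metrics follow directly from the incremental revenue the test measures. A hedged sketch of the arithmetic, using the 70/30 split from the example above with invented spend and baseline figures:

```python
def incremental_metrics(incremental_revenue_by_channel, ad_spend, baseline_revenue):
    """Turn measured incremental revenue into the reported outputs.

    iROAS  = incremental revenue / ad spend
    lift % = incremental revenue / baseline revenue
    shares = each channel's portion of the total lift
    """
    incremental = sum(incremental_revenue_by_channel.values())
    iroas = incremental / ad_spend
    lift_pct = incremental / baseline_revenue * 100
    shares = {ch: rev / incremental
              for ch, rev in incremental_revenue_by_channel.items()}
    return iroas, lift_pct, shares

# Hypothetical test results: $100k of incremental revenue, split
# 70/30 between Amazon and the brand's DTC site.
iroas, lift_pct, shares = incremental_metrics(
    {"Amazon": 70_000, "DTC site": 30_000},
    ad_spend=50_000, baseline_revenue=500_000)
```

With these numbers the campaign returns an iROAS of 2.0 and a 20% revenue lift, and the channel shares (70% Amazon, 30% DTC) are exactly the signal a brand would use to shift budget.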

Increase your sales through Streaming TV.

Streaming TV and commerce media insights hand-picked and delivered straight to your inbox every month.
