Incrementality Testing: Taking Marketing Attribution to the Next Level

In the ever-evolving landscape of digital marketing, accurately measuring the impact of marketing efforts has become increasingly complex. Traditional attribution models, while valuable, often fall short of providing a true picture of marketing effectiveness. This is where incrementality testing emerges as a powerful methodology that goes beyond correlation to establish causation, helping marketers determine which activities genuinely drive business results.

As privacy regulations intensify and third-party cookies face deprecation, Attrisight recognizes that marketers need more sophisticated approaches to understanding marketing impact. In this comprehensive guide, we’ll explore how incrementality testing works, how it complements other attribution methods, and how to implement it effectively in your marketing strategy.

Understanding Incrementality Testing: The Causal Approach to Attribution

What Is Incrementality Testing?

Incrementality testing is a scientific methodology for measuring the true causal impact of marketing activities on business outcomes. Unlike traditional attribution models that assign credit based on touchpoints in the customer journey, incrementality testing isolates the net effect of a specific marketing activity by comparing outcomes between a test group (exposed to the marketing effort) and a control group (not exposed).

The fundamental question incrementality testing answers is: “What would have happened if we hadn’t run this marketing activity?” This reveals the true incremental value—the additional conversions, revenue, or other desired outcomes that occurred solely because of the marketing initiative.

According to Johnson et al. (2017), incrementality testing provides “a measure of the true causal impact of advertising, unbiased by the correlation between ad exposure and outcomes that afflicts observational methods.”

The Incrementality Formula

The basic formula for calculating incrementality is:

Incrementality = (Test Conversion Rate - Control Conversion Rate) / Test Conversion Rate

For example, if your test group has a 1.5% conversion rate and your control group has a 0.5% conversion rate:

Incrementality = (1.5% - 0.5%) / 1.5% = 66.7%

This means that 66.7% of the conversions in your test group were genuinely caused by the marketing activity being tested, while the remaining 33.3% would have happened regardless of exposure.
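
To make the arithmetic concrete, here is a minimal Python sketch of the calculation. The conversion counts are hypothetical and chosen to match the 1.5% and 0.5% rates above:

```python
def incrementality(test_conversions, test_size, control_conversions, control_size):
    """Share of test-group conversions caused by the marketing activity."""
    test_rate = test_conversions / test_size
    control_rate = control_conversions / control_size
    return (test_rate - control_rate) / test_rate

# Hypothetical counts matching a 1.5% test rate and a 0.5% control rate.
lift = incrementality(test_conversions=1500, test_size=100_000,
                      control_conversions=500, control_size=100_000)
print(f"Incrementality: {lift:.1%}")  # prints "Incrementality: 66.7%"
```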

The Limitations of Traditional Attribution Models

Traditional attribution models like first-touch, last-touch, and even multi-touch attribution face several limitations that incrementality testing addresses:

  1. Correlation vs. Causation: Attribution models show correlation between touchpoints and conversions but cannot establish causation.

  2. Selection Bias: Users who see ads may be inherently more likely to convert, regardless of ad exposure.

  3. Platform Bias: Platform-specific attribution models typically overstate their impact by taking credit for conversions that would have happened anyway.

  4. Cookie Limitations: The deprecation of third-party cookies and increased privacy regulations are limiting the effectiveness of traditional attribution methods.

  5. Channel Silos: Each channel’s reporting exists in isolation, creating a fragmented view of marketing performance.

As Barajas et al. (2016) note in their research on experimental designs for online advertising attribution, “Attribution models based solely on observational data struggle to distinguish between correlation and causation, potentially leading to suboptimal budget allocation decisions.”

Incrementality Testing vs. Other Measurement Approaches

To understand incrementality testing’s role in the marketing measurement ecosystem, let’s compare it with other common approaches:

| Approach | Primary Function | Time Horizon | Key Advantage | Key Limitation |
| --- | --- | --- | --- | --- |
| Last-Touch Attribution | Attributes conversion to final touchpoint | Short-term | Simple implementation | Ignores influence of earlier touchpoints |
| Multi-Touch Attribution (MTA) | Distributes credit across touchpoints | Short-term | Recognizes multiple influences | Still correlative, not causal |
| Incrementality Testing | Measures causal impact through experiments | Mid-term | Establishes true causal impact | Requires significant traffic volume |
| Marketing Mix Modeling (MMM) | Analyzes impact of marketing variables on outcomes | Long-term | Incorporates external factors | Limited granularity at campaign level |

How Incrementality Testing Complements Attribution Models

Incrementality testing doesn’t replace attribution models—it complements them by providing a truth check on their findings. A comprehensive measurement strategy might include:

  1. Attribution models for day-to-day optimization and understanding customer journeys
  2. Incrementality testing for validating channel effectiveness and making strategic budget decisions
  3. Marketing mix modeling for long-term planning and understanding broader market effects

As Lewis and Rao (2015) observe in their research on measuring returns to advertising, “The combination of experimental and observational methods provides the most complete picture of advertising effectiveness.”

Types of Incrementality Testing Methodologies

Several methodologies exist for implementing incrementality testing, each with its own strengths and limitations:

1. Intent-to-Treat (ITT) Testing

Also known as audience holdout testing, ITT involves withholding ads from a randomly selected portion of your target audience (the control group). The remaining audience (the test group) receives ads as usual.

Advantages:

  • Relatively easy to implement
  • No additional cost beyond normal ad spend
  • Can be implemented through most ad platforms

Disadvantages:

  • “Noisy” data when many users in the test group aren’t actually exposed to ads
  • Requires large sample sizes for statistical significance

2. PSA (Public Service Announcement) Testing

In PSA testing, the control group receives non-commercial public service announcements instead of brand ads. This ensures both groups are exposed to some form of advertising.

Advantages:

  • Reduces selection bias (both groups see ads)
  • Creates more comparable test and control groups

Disadvantages:

  • Costly (paying for control group impressions)
  • PSA content differs from brand content, potentially creating bias

3. Ghost Ads

Ghost ads are a sophisticated approach where the ad delivery system identifies users who would have been shown an ad but instead shows them nothing or another advertiser’s ad. The system records these “ghost impressions” for analysis.

Advantages:

  • Eliminates noise by identifying exactly which control users would have seen ads
  • No additional cost for control impressions
  • Reduced selection bias

Disadvantages:

  • Requires integration with ad platforms
  • More complex implementation

4. Ghost Bidding

Ghost bidding is a variation of ghost ads specifically designed for programmatic advertising. The system places “ghost bids” for control group users without actually winning the impressions.

Advantages:

  • More precise for retargeting campaigns
  • Significantly reduces noise in the data
  • Cost-effective

Disadvantages:

  • Requires deep technical integration with bidding systems
  • Limited to programmatic channels

5. Geo-Experimentation

Geo-experimentation involves selecting geographic areas as test and control regions, running campaigns in the test regions while withholding them in the control regions (a simple analysis sketch follows the lists below).

Advantages:

  • Can measure impact across multiple channels simultaneously
  • Useful for measuring offline and online impacts
  • Works well for omnichannel campaigns

Disadvantages:

  • Requires geographic areas with similar characteristics
  • May be affected by regional variables
  • Needs significant scale to be effective
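
One common way to analyze a geo experiment is a difference-in-differences comparison of matched regions before and during the campaign. The sketch below uses entirely hypothetical weekly conversion counts per region:

```python
# Minimal difference-in-differences sketch for a geo experiment.
# Region-level weekly conversion counts are hypothetical illustration data.
pre = {"test": [1000, 1020, 980], "control": [950, 970, 940]}     # before campaign
post = {"test": [1300, 1350, 1280], "control": [980, 1000, 960]}  # during campaign

def mean(xs):
    return sum(xs) / len(xs)

# Change in each group relative to its own pre-period baseline...
test_lift = mean(post["test"]) - mean(pre["test"])
control_lift = mean(post["control"]) - mean(pre["control"])

# ...and the difference-in-differences estimate of incremental conversions.
incremental = test_lift - control_lift
print(f"Estimated incremental conversions per region: {incremental:.0f}")
```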

According to research presented by Barajas et al. (2021) at the Web Conference (ACM), advanced incrementality testing methods using ghost bidding can reduce the advertiser budget required to reach statistical significance by up to 85% compared to traditional methods.

Implementing Incrementality Testing: A Step-by-Step Guide

Implementing incrementality testing requires careful planning and execution. Here’s a framework for conducting effective incrementality tests:

1. Define Clear Objectives and Hypotheses

Start by clearly defining what you want to test and what questions you’re trying to answer:

  • Which specific channel or campaign are you testing?
  • What is the key performance indicator (KPI) you’re measuring?
  • What is your hypothesized incrementality?

Example hypothesis: “Increasing our paid search spend on brand terms by 30% will generate incremental conversions with positive ROI.”

2. Design the Experiment

The experiment design is critical for generating reliable results:

Sample Size Calculation: Determine the minimum sample size needed for statistical significance (a worked sketch follows this list) based on:

  • Expected conversion rates
  • Minimum detectable effect size
  • Desired confidence level (typically 95%)
  • Statistical power (typically 80%)
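
As a worked example, here is a minimal Python sketch using the standard normal-approximation formula for a two-proportion test. The baseline conversion rate and the minimum detectable lift are hypothetical:

```python
from scipy.stats import norm

def sample_size_per_group(p_control, p_test, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-proportion z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided 95% confidence -> 1.96
    z_beta = norm.ppf(power)            # 80% power -> 0.84
    variance = p_control * (1 - p_control) + p_test * (1 - p_test)
    effect = p_test - p_control         # minimum detectable effect
    return (z_alpha + z_beta) ** 2 * variance / effect ** 2

# Hypothetical inputs: 0.5% baseline rate, hoping to detect a lift to 0.65%.
n = sample_size_per_group(p_control=0.005, p_test=0.0065)
print(f"Minimum users per group: {n:,.0f}")
```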

Randomization Strategy: Choose an appropriate randomization approach:

  • User-level randomization (see the hashing sketch after this list)
  • Geographic randomization
  • Time-based randomization
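
User-level randomization is often implemented as a deterministic, salted hash of the user ID, so the same user always lands in the same group across sessions. A minimal sketch, where the salt and control share are placeholders:

```python
import hashlib

def assign_group(user_id: str, salt: str = "incrementality-test-01",
                 control_share: float = 0.2) -> str:
    """Deterministically assign a user to test or control via a salted hash."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return "control" if bucket < control_share else "test"

print(assign_group("user-12345"))  # same user always gets the same group
```

Changing the salt per experiment re-shuffles assignments, so successive tests don't reuse the same control population.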

Control Group Size: Typically, control groups range from 10% to 50% of the total audience. Smaller businesses may need larger control groups to achieve statistical significance.

3. Set Up Proper Measurement Infrastructure

Ensure you have the right tracking and measurement systems in place:

  • Configure analytics platforms to segment test and control groups
  • Implement appropriate tagging for conversion tracking
  • Set up dashboards for monitoring test progress
  • Consider using specialized incrementality testing platforms

4. Execute the Test

When running the test:

  • Maintain strict separation between test and control groups
  • Run the test long enough to capture the full customer journey
  • Avoid making other significant marketing changes during the test period
  • Monitor for any anomalies or technical issues

5. Analyze Results and Extract Insights

After the test concludes:

  • Calculate incrementality using the formula provided earlier (a worked analysis sketch follows this list)
  • Determine statistical significance (p-value)
  • Calculate confidence intervals
  • Translate results into business metrics (ROI, ROAS, etc.)
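
To illustrate these steps together, here is a minimal Python sketch of a two-proportion z-test with a confidence interval on the lift. The conversion counts are hypothetical and echo the earlier 1.5% vs. 0.5% example:

```python
from math import sqrt
from scipy.stats import norm

def analyze(test_conv, test_n, control_conv, control_n, alpha=0.05):
    """Two-proportion z-test plus a confidence interval on the lift."""
    p_t, p_c = test_conv / test_n, control_conv / control_n
    # Pooled standard error for the significance test.
    p_pool = (test_conv + control_conv) / (test_n + control_n)
    se_pool = sqrt(p_pool * (1 - p_pool) * (1 / test_n + 1 / control_n))
    z = (p_t - p_c) / se_pool
    p_value = 2 * (1 - norm.cdf(abs(z)))
    # Unpooled standard error for the confidence interval on the difference.
    se = sqrt(p_t * (1 - p_t) / test_n + p_c * (1 - p_c) / control_n)
    margin = norm.ppf(1 - alpha / 2) * se
    incrementality = (p_t - p_c) / p_t
    return incrementality, p_value, (p_t - p_c - margin, p_t - p_c + margin)

# Hypothetical counts echoing the 1.5% vs. 0.5% example from earlier.
inc, p, ci = analyze(test_conv=1500, test_n=100_000,
                     control_conv=500, control_n=100_000)
print(f"Incrementality: {inc:.1%}, p-value: {p:.4f}, "
      f"95% CI on lift: [{ci[0]:.4%}, {ci[1]:.4%}]")
```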

6. Take Action and Iterate

Based on the test results:

  • Adjust budgets for channels/campaigns with proven incrementality
  • Consider scaling back channels with low incrementality
  • Design follow-up experiments to further optimize
  • Implement continuous testing cycles

Real-World Incrementality Testing Case Studies

Case Study 1: E-Commerce Retailer Evaluates Social Media Impact

Challenge: A mid-sized e-commerce retailer was investing heavily in social media advertising but was unsure of its true impact beyond the platform-reported conversions.

Approach: They implemented an incrementality test using ghost bidding on their Facebook campaigns, allocating 20% of their audience to a control group.

Results:

  • Platform attribution claimed a 4.5x ROAS
  • Incrementality testing revealed a 2.2x incremental ROAS
  • 51% of platform-attributed conversions would have happened anyway
  • The brand reduced CPAs by 30% by reallocating budget to higher-performing ad sets

Key Insight: Platform reporting overstated the campaign's return by roughly 2x (4.5x reported vs. 2.2x incremental ROAS), leading to suboptimal budget allocation.

Case Study 2: SaaS Company Resolves Attribution Conflict

Challenge: A B2B SaaS company had conflicting attribution data between Google Analytics (which credited organic search) and paid media platforms (which claimed the same conversions).

Approach: They conducted a hold-out test on their paid search campaigns for non-branded terms, temporarily pausing campaigns for a randomly selected 30% of their target audience.

Results:

  • Non-brand search terms showed only 12% incrementality
  • Brand terms showed 68% incrementality
  • Shifting budget from low-incrementality to high-incrementality campaigns increased overall lead volume by 24%

Key Insight: Many of the non-brand paid search clicks would have resulted in organic clicks anyway, revealing significant budget waste.

Case Study 3: Incrementality Testing Reveals True Impact of Display Advertising

Challenge: An insurance provider questioned the value of display and native programmatic advertising, which showed poor performance in last-touch attribution reports.

Approach: They implemented ghost bidding incrementality testing across their programmatic campaigns to measure true incremental impact.

Results:

  • Last-touch attribution undervalued display and native ads by 87%
  • The campaigns showed a positive incremental ROI despite appearing ineffective in attribution reports
  • Budget reallocation based on incrementality insights increased total conversions by 31%

Key Insight: Attribution models significantly undervalued upper-funnel activities that influenced conversions but weren’t the last touch.

Challenges and Best Practices in Incrementality Testing

Common Challenges

1. Statistical Significance

Achieving statistical significance requires sufficient sample sizes, which can be challenging for businesses with limited traffic or conversion volumes.

Best Practice: Consider longer test durations, larger control groups, or focusing on higher-volume segments first. Use proper power analysis to determine minimum required sample sizes.

2. Test Contamination

External factors or changes to other marketing activities during the test period can contaminate results.

Best Practice: Establish a “test window” where other marketing activities remain constant. Monitor for unusual external events and account for them in analysis.

3. Selection Bias

Users who see ads may be inherently different from those who don’t, creating bias in the results.

Best Practice: Use randomization at the user level when possible, and consider advanced methods like ghost ads to reduce selection bias.

4. Measurement Across Multiple Devices and Platforms

Tracking users across devices and platforms remains challenging, particularly with increasing privacy restrictions.

Best Practice: Consider probabilistic matching methods, focus on logged-in experiences, or use geo-experimentation, which is less dependent on user-level tracking.

Best Practices for Successful Incrementality Testing

1. Start With High-Impact Channels

Begin incrementality testing with your highest-spend or most strategically important channels to realize the greatest potential impact.

2. Implement Continuous Testing Cycles

Rather than one-off tests, establish a program of continuous incrementality testing to account for changing market conditions and consumer behavior.

3. Test at Different Funnel Stages

Don’t limit incrementality testing to bottom-funnel activities. Test upper-funnel campaigns to understand their true contribution to the customer journey.

4. Combine With Other Measurement Approaches

Use incrementality testing as part of a comprehensive measurement framework that includes attribution modeling and marketing mix modeling.

5. Focus on Business Outcomes

Connect incrementality results to actual business outcomes like profit, customer lifetime value, and market share—not just conversion rates or click-through rates.

The Future of Incrementality Testing

As the marketing measurement landscape continues to evolve, several trends are shaping the future of incrementality testing:

1. Privacy-First Incrementality Methods

With increasing privacy regulations and the deprecation of third-party cookies, new incrementality testing approaches that don’t rely on user-level tracking are emerging. These include enhanced geo-experimentation methodologies and federated learning approaches.

2. AI-Powered Incrementality Analysis

Machine learning algorithms are enhancing incrementality testing by:

  • Identifying optimal test designs
  • Detecting patterns in noisy data
  • Predicting incrementality across untested segments
  • Automating the analysis and interpretation of results

3. Integrated Measurement Frameworks

The future lies in unified measurement approaches that combine incrementality testing with attribution modeling and marketing mix modeling to provide a complete picture of marketing effectiveness.

As Gordon et al. (2019) note in their research comparing measurement approaches at Facebook, “The most robust insights come from triangulating multiple measurement methodologies, each with different strengths and biases.”

4. Real-Time Incrementality Insights

Advances in data processing and experimental design are moving incrementality testing from periodic experiments to continuous, near real-time incrementality measurement.

Conclusion: Elevating Your Attribution with Incrementality Testing

In today’s complex marketing ecosystem, understanding the true impact of your marketing efforts is more important—and more challenging—than ever. Traditional attribution models provide valuable insights but fall short of establishing the causal relationship between marketing activities and business outcomes.

Incrementality testing fills this gap by applying scientific experimental design to marketing measurement, allowing you to definitively answer the question: “What would have happened if we hadn’t run this marketing activity?”

By incorporating incrementality testing into your measurement strategy, you can:

  1. Make more informed budget allocation decisions based on proven causal impact
  2. Validate or challenge the findings from attribution models
  3. Identify which audiences, channels, and campaigns deliver genuinely incremental results
  4. Build a more accurate understanding of marketing effectiveness across your organization

As privacy regulations intensify and traditional attribution methods face growing limitations, incrementality testing is becoming an essential component of sophisticated marketing measurement frameworks. By mastering this approach now, you’ll be well-positioned to maintain measurement capabilities and competitive advantage in the privacy-first future.

Ready to take your attribution to the next level with incrementality testing? Attrisight offers solutions designed for privacy-respecting measurement that address the challenges discussed in this article. Explore our multi-touch attribution capabilities and learn how our privacy-first approach can help you implement effective incrementality testing.

Academic References

  1. Barajas, J., Akella, R., Holtan, M., & Flores, A. (2016). “Experimental designs and estimation for online display advertising attribution in marketplaces.” Marketing Science, 35(3), 465-483.

  2. Barajas, J., Bhamidipati, N., & Shanahan, J. (2021). “Incrementality Testing in Programmatic Advertising: Enhanced Precision with Double-Blind Designs.” Proceedings of the Web Conference 2021, 3053-3061.

  3. Gordon, B. R., Zettelmeyer, F., Bhargava, N., & Chapsky, D. (2019). “A comparison of approaches to advertising measurement: Evidence from big field experiments at Facebook.” Marketing Science, 38(2), 193-225.

  4. Johnson, G. A., Lewis, R. A., & Nubbemeyer, E. I. (2017). “Ghost Ads: Improving the Economics of Measuring Online Ad Effectiveness.” Journal of Marketing Research, 54(6), 867-884.

  5. Lewis, R. A., & Rao, J. M. (2015). “The Unfavorable Economics of Measuring the Returns to Advertising.” The Quarterly Journal of Economics, 130(4), 1941-1973.

  6. Li, H., & Kannan, P. K. (2014). “Attributing Conversions in a Multichannel Online Marketing Environment: An Empirical Model and a Field Experiment.” Journal of Marketing Research, 51(1), 40-56.

  7. Berman, R. (2018). “Beyond the Last Touch: Attribution in Online Advertising.” Marketing Science, 37(5), 771-792.