Every major ad platform now offers incrementality measurement. Google has conversion lift studies. Meta has its own lift framework. Amazon has incrementality reports for Sponsored Products. The pitch is always the same: we’ll show you which conversions wouldn’t have happened without our ads.
It sounds like measurement. It is structured like measurement. It uses the vocabulary of measurement — holdouts, lift, incrementality. But when the entity running the test also sells the media being tested, the output is something closer to marketing than science.
Here are five signs that what you’re calling incrementality measurement is actually platform marketing dressed up in methodology.
Sign 1: The Platform Controls the Holdout
Holdout testing is the foundation of credible incrementality measurement. A statistically matched group is withheld from the campaign. After the flight, the conversion rate of the exposed group is compared with that of the holdout. The difference is the campaign’s true incremental lift.
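In its simplest form, the arithmetic is a two-line comparison. Here is a minimal sketch in Python, with hypothetical group sizes and conversion counts, of how an advertiser could compute lift from a holdout test:

```python
# Minimal holdout lift arithmetic (illustrative numbers, not real campaign data).
exposed_size, exposed_conversions = 500_000, 6_500   # audience that saw the campaign
holdout_size, holdout_conversions = 100_000, 1_100   # matched audience withheld from it

exposed_rate = exposed_conversions / exposed_size    # 1.30% conversion rate
holdout_rate = holdout_conversions / holdout_size    # 1.10% baseline rate

absolute_lift = exposed_rate - holdout_rate              # incremental conversion rate
relative_lift = absolute_lift / holdout_rate             # lift relative to the baseline
incremental_conversions = absolute_lift * exposed_size   # conversions that would not have happened

print(f"Absolute lift: {absolute_lift:.2%}")                       # 0.20%
print(f"Relative lift: {relative_lift:.1%}")                       # ~18.2%
print(f"Incremental conversions: {incremental_conversions:,.0f}")  # ~1,000
```

Everything that follows hinges on who controls the inputs to that last line: who was in each group, and how the conversions were counted.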
The problem: when a platform runs that holdout on your behalf, you don’t control who gets withheld, how the groups are balanced, or whether the holdout population is actually comparable to the exposed group. The platform decides. And the platform’s revenue depends on the test finding positive lift.
This isn’t a conspiracy — it’s an incentive structure. No publicly traded ad company builds measurement tools optimized to shrink its own revenue. When you don’t control the holdout, you don’t own the result.
What real holdout control looks like: The advertiser defines the holdout group. The holdout is based on the brand’s own audience data, not the platform’s identity graph. The test design is set before the campaign launches and cannot be modified by the platform mid-flight.
Sign 2: The Identity Graph Belongs to the Platform
Incrementality measurement requires knowing which people were exposed to the campaign and which weren’t. That means someone has to define “same person” across devices, sessions, and time. In every platform-native lift study, that job falls to the platform’s identity graph — a proprietary probabilistic model the advertiser cannot inspect or audit.
Google decides which impressions across Search, YouTube, Display, and Gmail belong to the same user. Meta decides which actions across Facebook, Instagram, and the Audience Network belong to the same person. When identity resolution is probabilistic and platform-controlled, the advertiser has no way to verify who was actually in the test.
The alternative is deterministic attribution: matching at the individual level using identifiers the brand owns. In direct mail, this is the native measurement model — a mail file goes to known household addresses, and conversions are matched back against the brand’s own transaction records using name, address, and household. There is no probabilistic middle layer. The brand knows exactly who received the mail and can verify every conversion match against data it controls.
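To make “deterministic” concrete, here is a simplified, purely illustrative sketch: a mail file joined to a brand’s transaction file on a household key the brand constructs itself. The field names, normalization, and records are hypothetical, and a production match would handle address standardization, apartment numbers, and name variants far more carefully.

```python
# Simplified deterministic household match: mail file joined to the brand's own
# transactions on a key the brand builds and can audit. Field names are illustrative.

def household_key(record: dict) -> str:
    """Build a crude household-level key from fields the brand owns."""
    return "|".join(
        record[field].strip().lower()
        for field in ("last_name", "street_address", "zip_code")
    )

mail_file = [
    {"last_name": "Nguyen", "street_address": "12 Oak St", "zip_code": "94110"},
    {"last_name": "Patel",  "street_address": "88 Elm Ave", "zip_code": "60614"},
]
transactions = [
    {"last_name": "nguyen", "street_address": "12 Oak St ", "zip_code": "94110", "order_value": 86.00},
    {"last_name": "Lopez",  "street_address": "5 Pine Rd",  "zip_code": "30318", "order_value": 42.50},
]

mailed_households = {household_key(r) for r in mail_file}
matched = [t for t in transactions if household_key(t) in mailed_households]

print(f"Matched conversions: {len(matched)}")  # 1 -- and every match is inspectable row by row
```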
Sign 3: “Incrementality” Is Reported at the Platform Level, Not the Conversion Level
Platform-reported lift studies typically surface an aggregate result: your campaign drove X% incremental conversions. What they rarely show is which specific conversions were incremental, against what baseline, and how that baseline was constructed.
This matters because aggregate lift numbers are easy to manipulate through test design. If the holdout population is slightly lower-intent than the exposed group — even by a small margin — the lift number will be overstated. Without conversion-level data tied to a baseline the brand controls, there is no way to audit the result.
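A quick illustration with made-up numbers shows how small that margin needs to be. In the sketch below, the campaign has zero true effect; the holdout simply converts slightly less often by nature, and the reported lift is positive anyway:

```python
# How a lower-intent holdout manufactures "lift" (all numbers hypothetical).
# Assume the campaign has zero true effect: exposed users convert at their natural rate.

true_exposed_base_rate = 0.0120   # natural conversion rate of the exposed population
holdout_rate           = 0.0110   # holdout drawn from a slightly lower-intent pool

exposed_rate = true_exposed_base_rate          # no ad effect at all
reported_relative_lift = (exposed_rate - holdout_rate) / holdout_rate

print(f"Reported lift with zero true effect: {reported_relative_lift:.1%}")  # ~9.1%
```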
Performance marketers should ask: can I see the conversion-level data that produced this lift number? Can I validate the holdout against my own purchase file? If the answer is no, the “incrementality” number is a platform-reported metric, not a verified measurement.
Sign 4: The Measurement Window Favors the Platform’s Attribution Model
Every ad platform sets a default conversion window — the period after an impression or click during which a conversion is credited to that platform’s campaign. Google’s default Search window is 30-day click, 1-day view. Meta defaults to 7-day click, 1-day view. These defaults are not arbitrary; they are calibrated to capture as many conversions as possible within the platform’s attribution model.
When a platform-native lift study uses that same window to define “incremental conversions,” the window itself becomes a variable that inflates results. A conversion that happens 25 days after a Google impression — and might have been driven by an email, a social post, or a direct mail piece — gets counted as a Google-incremental conversion if it falls within the window.
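A small sketch, using hypothetical impression-to-conversion lags, shows how much the window choice alone moves the count:

```python
# How the conversion window alone changes what gets counted (hypothetical lag data).
days_from_impression_to_conversion = [1, 2, 3, 5, 9, 14, 21, 25, 28]

def credited(lags_in_days, window_days):
    """Count conversions that fall inside the attribution window."""
    return sum(1 for lag in lags_in_days if lag <= window_days)

for window in (1, 7, 30):
    count = credited(days_from_impression_to_conversion, window)
    print(f"{window:>2}-day window: {count} 'incremental' conversions credited")
# 1-day window: 1, 7-day window: 4, 30-day window: 9 -- same campaign, same users,
# three different answers, and the platform picks the window.
```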
Credible incrementality measurement uses attribution windows set by the advertiser, aligned to the actual purchase cycle of the product. A window that was chosen to maximize platform-reported conversions is not a measurement decision — it is a design decision that serves the platform.
Sign 5: Every Channel Looks Incremental When Measured by Itself
This is the tell that makes the whole system visible. If you run Google’s lift study, Google looks incremental. If you run Meta’s lift study, Meta looks incremental. If you run Amazon’s incrementality report, Amazon looks incremental. Run all three and every study reports positive lift on its own spend, and the incremental conversions they each claim can add up to more than the conversions your business actually recorded. That is mathematically impossible.
Platform-siloed incrementality measurement produces a world where every channel justifies its own spend. That’s not measurement. That’s every walled garden grading its own exam and giving itself an A.
The only way to answer the actual incrementality question — which channels are driving conversions that would not have happened otherwise, across my entire media mix — is with a cross-channel framework that uses a single source of truth: the brand’s own purchase or conversion data, compared across channels using consistent methodology the brand controls.
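What that could look like in practice, sketched with hypothetical channel names and numbers: every channel is scored with the same formula, against advertiser-defined holdouts, using one conversion file the brand owns.

```python
# Cross-channel comparison against one brand-owned baseline (illustrative data only).
# Every channel is scored with the same formula and the same conversion source,
# so the lift numbers are actually comparable.

channels = {
    #              exposed_size, exposed_conv, holdout_size, holdout_conv
    "direct_mail": (200_000, 3_400, 50_000, 700),
    "paid_social": (400_000, 5_200, 80_000, 1_000),
    "paid_search": (300_000, 4_800, 60_000, 930),
}

for name, (exp_n, exp_c, hold_n, hold_c) in channels.items():
    exposed_rate = exp_c / exp_n
    holdout_rate = hold_c / hold_n
    incremental = (exposed_rate - holdout_rate) * exp_n
    print(f"{name:>12}: {incremental:,.0f} incremental conversions "
          f"({(exposed_rate / holdout_rate - 1):.1%} relative lift)")
```

The specific numbers don’t matter; what matters is that the lift figures can be compared because the methodology and the conversion source are identical across channels.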
What Credible Incrementality Measurement Actually Requires
The signs above aren’t a reason to abandon incrementality testing. They’re a reason to own it.
Credible incrementality measurement has three non-negotiable requirements:
The advertiser controls the holdout. Test design is set by the brand, not the platform. Holdout groups are drawn from the brand’s own audience data. The platform cannot modify the holdout after the campaign launches.
Attribution is deterministic, not probabilistic. Conversions are matched at the individual or household level using identifiers the brand owns and can audit. Probabilistic identity graphs controlled by the platform are not a substitute.
Measurement uses a cross-channel baseline. The question isn’t whether Channel X drove lift against a Channel X holdout. The question is whether Channel X drove conversions that wouldn’t have happened in the absence of any spend — measured against a consistent baseline across the full media mix.
This is why Postie builds incrementality measurement as a native, brand-controlled feature — not an add-on, and not a silo built without your input. Campaigns run alongside a holdout group defined by the advertiser in partnership with their client success expert. Lift is measured against the brand’s own transaction data using deterministic household-level matching. The result is a number the brand can audit, defend to Finance, and compare directly against other channels in the same framework.
If you’re evaluating whether your current incrementality reporting is actually measuring lift or just validating platform spend, see how Postie’s measurement framework is built for this problem.
Want to read more first? Check out our Incrementality Guide.