🍊 The Method
Send logs. Get scores. 48 hours.
No black boxes. No hidden methodology. Audit-grade transparency on every segment in your campaign.
🍊 How It Works
Three steps. Two days.
1
You send logs
30 days of impression + conversion logs from any DSP
Day 0
2
We run analysis
Fee-aware contribution scoring for every segment
Day 1
3
You get your receipt
Scores, recommendations, and audit-ready documentation
Day 2
🍊 What You Get
Your deliverables
Segment Inventory
Complete manifest of every audience segment attached to your campaign—costs, providers, coverage.
- All segments mapped
- Fee breakdown by vendor
- Coverage percentages
Contribution Scores
Fee-aware scores for each segment showing its net contribution to outcomes.
- 0-1 score per segment
- Ranking by contribution
- Confidence intervals
Action Recommendations
Clear next steps for each segment based on its score and your campaign goals.
🍊 The Science
Two levels of proof
We are clear about what each method can claim.
Level 1
Contribution Analysis (Default)
"Segment X contributed Y% to measured outcomes."
Permutation-based contribution accounting using your log data. Identifies which segments drive conversions—and where reinvestment opportunities exist.
Use this for: Finding waste, prioritizing tests, defending budget allocation.
No holdout required. Results in 48 hours.
Level 2
Incrementality Testing (Optional)
"Segment X caused Z% incremental lift."
Causal proof via geo-lift or audience holdouts. Gold standard for proving a segment's true impact—not just correlation.
Use this for: Vendor negotiations, budget justification, CFO-level reporting.
Requires designed experiment. 4-6 week timeline.
We built Madhive. We know what questions matter.
Every receipt includes method version, coverage report, and confidence levels. You always know exactly what was measured, how, and what limitations exist.
🍊 For Data Science Teams
Under the hood
The Math
TraceScore uses permutation-based marginal contribution estimation. The value function is fee-aware:
v(S) = attributed revenue − segment fees, for impressions where segments in S are present. Credit is distributed based on marginal impact across all possible segment orderings.
Key Design Choices
Confidence intervals
Bootstrap (1000 resamples), 80% CIs. Decision-utility over false precision.
Correlated segments
Flagged at >70% co-occurrence. Report individual + combined. Recommend Level 2 to disentangle.
Counterfactual
Observational (matched cohorts). Level 1 is contribution, not incrementality.
Fee integration
Segment costs subtracted before allocation. Net contribution, not gross.
Validation
L1 vs L2 holdout comparisons: 0.7-0.8 rank correlation for clear winners/losers.
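The fee-aware, permutation-based scoring and the bootstrap intervals described above can be sketched in a few lines. This is an illustrative toy, not TraceScore's production code: the log schema (`segments`, `revenue`, `fees` per impression) and the sampling parameters are assumptions.

```python
import random
from collections import defaultdict

def value(coalition, impressions):
    """Fee-aware value function: attributed revenue minus segment fees,
    summed over impressions where at least one coalition segment is present."""
    total = 0.0
    for imp in impressions:
        present = coalition & imp["segments"]
        if present:
            total += imp["revenue"] - sum(imp["fees"][s] for s in present)
    return total

def permutation_contribution(segments, impressions, n_perms=200, seed=0):
    """Monte Carlo Shapley-style scores: average each segment's marginal
    change in value() over random orderings of all segments."""
    rng = random.Random(seed)
    contrib = defaultdict(float)
    order = list(segments)
    for _ in range(n_perms):
        rng.shuffle(order)
        coalition, prev = set(), 0.0
        for s in order:
            coalition.add(s)
            cur = value(coalition, impressions)
            contrib[s] += cur - prev
            prev = cur
    return {s: c / n_perms for s, c in contrib.items()}

def bootstrap_ci(segments, impressions, n_boot=200, level=0.80, seed=0):
    """Percentile bootstrap intervals on the scores (the methodology above
    cites 1000 resamples and 80% CIs; smaller defaults here for speed)."""
    rng = random.Random(seed)
    draws = defaultdict(list)
    n = len(impressions)
    for b in range(n_boot):
        resample = [impressions[rng.randrange(n)] for _ in range(n)]
        for s, c in permutation_contribution(segments, resample,
                                             n_perms=50, seed=b).items():
            draws[s].append(c)
    tail = (1 - level) / 2
    cis = {}
    for s, vals in draws.items():
        vals.sort()
        cis[s] = (vals[int(tail * len(vals))],
                  vals[min(len(vals) - 1, int((1 - tail) * len(vals)))])
    return cis
```

One property worth noting: because each permutation's marginal gains telescope, the scores sum exactly to the value of the full segment set, so net contribution is fully allocated with no leftover credit.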
When to Elevate to Level 2
Level 1 shows contribution—which segments correlate with outcomes. Level 2 proves causation via holdout testing. Choose based on your goal: fast clarity or definitive proof. Every receipt explains exactly what was measured.
🍊 Independence Guarantee
Independent verification. No conflicts.
First-party analysis of your first-party data. Our only revenue is the verification layer.
100% independent
Your interests only. Full stop.
Platform agnostic
Works with any DSP. Recommend freely.
Zero conflicts
Full alignment with your outcomes.
Math you can audit
Permutation-based. Open methodology.
🍊 FAQ
Common questions
Which DSPs do you support?
Any DSP that can export impression and conversion logs. We have worked with TTD, DV360, Amazon, Xandr, and proprietary systems.
How is this different from Google ADH?
ADH is inside Google's walled garden—it only sees Google inventory. We are independent, cross-platform, and work with any DSP. More importantly, we are answering a different question: not 'what channel drove the conversion' but 'is this segment fee earning its cost?'
What if my data coverage is incomplete?
The Coverage Report tells you exactly what was measured. Partial coverage does not break the analysis—we are transparent about confidence levels.
How do you handle overlapping segments?
We flag segment pairs with >70% co-occurrence and report both individual and combined scores. For highly correlated segments, we recommend Level 2 holdout testing to disentangle true contribution.
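The >70% co-occurrence flag can be illustrated with a short sketch. Treating "co-occurrence rate" as the Jaccard rate (impressions carrying both segments divided by impressions carrying either) is our assumption for the example, not a documented definition.

```python
from collections import Counter
from itertools import combinations

def flag_cooccurrence(impressions, threshold=0.70):
    """Flag segment pairs whose co-occurrence rate exceeds the threshold.

    `impressions` is a list of sets of segment IDs, one set per impression.
    Rate is computed as Jaccard: |both present| / |either present|.
    """
    single = Counter()   # impressions containing each segment
    pair = Counter()     # impressions containing both segments of a pair
    for segs in impressions:
        for s in segs:
            single[s] += 1
        for a, b in combinations(sorted(segs), 2):
            pair[(a, b)] += 1
    flagged = {}
    for (a, b), both in pair.items():
        either = single[a] + single[b] - both
        rate = both / either
        if rate > threshold:
            flagged[(a, b)] = rate
    return flagged
```

Flagged pairs would then be reported with both individual and combined scores, with Level 2 holdout testing as the recommended way to separate them.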
Is there a minimum spend threshold?
We recommend $50K+ spend or 1M+ impressions for statistical significance. Smaller campaigns can work but with wider confidence intervals.
How do you protect our data?
SOC 2 Type II certified. NDAs standard. Pseudonymous processing. Data deleted after analysis—only the receipt persists.
What if I already have an MTA model?
TraceScore complements your attribution. It answers the fee question your MTA was not designed for: which segments earn their cost? Your attribution allocates credit to channels. TraceScore audits segment value.
Can I run this without sharing raw logs?
We need impression-level data to calculate contribution. But we only need segment IDs, timestamps, and conversion flags—no PII required. Data is deleted after analysis.
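For illustration, a minimal PII-free log row carrying just those three fields might look like the following; the column names and delimiter are hypothetical, not a required export format.

```python
import csv
import io

# Illustrative minimal schema: segment IDs, a timestamp, and a conversion
# flag per impression -- no user identifiers or other PII.
SAMPLE = """impression_id,timestamp,segment_ids,converted
imp-001,2024-06-01T12:00:00Z,seg_a|seg_b,0
imp-002,2024-06-01T12:05:00Z,seg_b,1
"""

rows = list(csv.DictReader(io.StringIO(SAMPLE)))
for row in rows:
    # Parse the pipe-delimited segment list and the 0/1 conversion flag.
    row["segments"] = set(row["segment_ids"].split("|"))
    row["converted"] = row["converted"] == "1"
```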
🍊
Get your first analysis in 48 hours
Send us 30 days of logs. We will send you contribution scores.