Four platforms. Four answers. One is real.

Every ad platform reports a different ROAS. None of them agree. Most marketing teams pick the number that sounds right and move on. We are building something that tells you which one to actually trust.

Built for performance marketers and growth leaders who are tired of guessing which number to believe.

lensos — signal reconciliation

$ lensos reconcile --client acme-co --period 2024-Q4

Meta ROAS    4.0×  [DISPUTED]
Google ROAS  3.5×  [PARTIAL]
GA4 ROAS     2.8×  [UNVERIFIED]
MMM iROAS    2.0×  [CONFIRMED]
Ready.

Why the ROAS number you are looking at is probably wrong

Three structural issues corrupt most marketing measurement reports. They exist regardless of which platform you use, how carefully you built your stack, or how experienced your team is.

01

Attribution overlap

Meta claims the conversion. Google claims it too. Your CRM logs one purchase. Every platform counts full credit for the same sale. So your reported ROAS can be two or three times what you actually earned. The number looks great. The underlying reality does not match it.

02

Platform incentives

Every ad platform has a commercial reason to report strong performance. Their attribution windows and models are designed by the same teams that benefit from a high ROAS number. This is not a conspiracy. It is just a conflict of interest that no one talks about plainly enough.

03

Missing incrementality

ROAS tells you how many dollars came in alongside your ads. It does not tell you how many dollars came in because of them. Without knowing what is truly incremental, you cannot tell whether cutting a channel saves you money or costs you sales. That distinction is worth a lot.

Most marketing teams are making significant budget decisions based on numbers that overstate performance by 40 to 80 percent.

That is not a data infrastructure problem. It is a measurement philosophy problem. And that is exactly what we are building toward.

Same campaign. Four answers.

You run one campaign. Every platform gives you a different number. The delta is not noise. It is a structural problem with how each platform measures success for itself.

Meta
7-day click, 1-day view attribution window
4.0×  DISPUTED

Google
Search + Display overlap
3.5×  PARTIAL

GA4
Cross-device gaps
2.8×  UNVERIFIED

MMM
Causal, but 6-week lag
2.0×  CONFIRMED

Three of these numbers are overstating performance. One is grounded in causal evidence. The problem is not having the data. It is knowing which data to trust.

1

You default to the number that confirms what you already believed

Most teams do not average the data. They pick the platform that supports the decision they wanted to make anyway. That is not measurement. It is confirmation bias running on a media budget.

2

Every wrong assumption compounds at scale

If your true ROAS is 2.0× but you believe it is 4.0×, half of the revenue you are crediting to the channel would have arrived anyway, and you are doubling down on spend that does not deserve it. At $20K per month, the error is uncomfortable. At $200K per month, it is a serious problem. It rarely gets caught until the damage is done.

3

No one has a clear answer yet

MMM takes six weeks and a consultant. Incrementality tests require traffic volume and patience. Most teams cycle back to the same place: staring at platform dashboards and hoping they are not being misled. We think there is a better path.

62%

of performance teams report making budget decisions they later discovered were based on significantly overreported attribution data.

We don't pick a winner.
We triangulate.

No single source is right. Every source has a known bias and a known coverage gap. MeasureLens weights each signal by its evidence credibility and reconciles them into one truth.

INPUT SIGNALS

Platform Data  [BIASED]
Fast, granular, biased

Analytics  [INCOMPLETE]
Broader view, incomplete

MMM / Models  [SLOW]
Causal, lags by weeks

Experiments  [SPARSE]
Ground truth, limited coverage

EVIDENCE SCORING (TIER 1–4)

T1  RCT / Geo Holdout
T2  Calibrated MMM
T3  Uncalibrated MMM / MTA
T4  Platform ROAS only

✓ RECONCILED OUTPUT

True iROAS: 2.4×
MCS: 78/100
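
To make the triangulation concrete, here is a minimal sketch in Python of how a tier-weighted, freshness-decayed reconciliation could work. The tier weights, the 42-day half-life, and the reconcile function are illustrative assumptions, not the production LensOS logic.

# Sketch only: hypothetical weights, not LensOS's actual model.
TIER_WEIGHT = {1: 1.0, 2: 0.6, 3: 0.3, 4: 0.1}  # T1 (causal) down to T4 (platform-reported)

def staleness_factor(age_days: float, half_life_days: float = 42.0) -> float:
    """Evidence loses half its weight every half_life_days (assumed decay)."""
    return 0.5 ** (age_days / half_life_days)

def reconcile(signals) -> float:
    """Weighted average of reported ROAS, weighted by evidence tier and freshness.
    signals: list of (name, roas, tier, age_days) tuples."""
    num = den = 0.0
    for _, roas, tier, age_days in signals:
        w = TIER_WEIGHT[tier] * staleness_factor(age_days)
        num += w * roas
        den += w
    return num / den

signals = [
    ("Meta ROAS",   4.0, 4,  1),   # fresh but platform-reported (T4)
    ("Google ROAS", 3.5, 4,  1),
    ("GA4 ROAS",    2.8, 3,  1),
    ("MMM iROAS",   2.0, 2, 42),   # causal (T2) but six weeks stale
]
print(f"Reconciled iROAS: {reconcile(signals):.1f}x")  # ~2.7x with these toy weights

With these toy weights, the fresh but low-tier platform numbers get pulled toward the stale but causal MMM figure, which is exactly the behavior the diagram above describes.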

Every source rated

Each data input receives an evidence tier (T1–T4) and a staleness penalty. Causal evidence always outranks correlation.

Conflicts surfaced

When Meta and MMM disagree by >20%, we flag it, explain the likely cause, and show you the gap. No silent averaging.
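
In code, that flagging rule could look something like this sketch; the 20% threshold comes from the description above, while the function name and the relative-delta definition are assumptions.

def conflict_flag(platform_roas: float, mmm_iroas: float,
                  threshold: float = 0.20) -> str:
    """Flag a channel when platform and MMM numbers disagree by more
    than `threshold`, measured relative to the causal (MMM) figure."""
    delta = abs(platform_roas - mmm_iroas) / mmm_iroas
    if delta <= threshold:
        return "CONFIRMED"
    return f"DISPUTED: platform reads {delta:.0%} away from causal evidence"

print(conflict_flag(4.0, 2.0))  # Meta vs MMM from above -> DISPUTED (100% apart)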

Confidence, not certainty

We give you a Measurement Confidence Score (0–100). Low scores trigger a recommendation to run an experiment before scaling.
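
As one illustration of how a 0–100 score could gate decisions (the component inputs, their weights, and the 60-point threshold are invented for this sketch):

def measurement_confidence(best_tier: int, source_agreement: float,
                           freshness: float) -> int:
    """Hypothetical MCS: stronger evidence tier, tighter agreement across
    sources, and fresher data all push the score toward 100."""
    tier_score = {1: 1.0, 2: 0.75, 3: 0.45, 4: 0.2}[best_tier]
    return round(100 * (0.5 * tier_score + 0.3 * source_agreement + 0.2 * freshness))

mcs = measurement_confidence(best_tier=2, source_agreement=0.8, freshness=0.9)
if mcs < 60:
    print(f"MCS {mcs}: run an experiment before scaling")
else:
    print(f"MCS {mcs}: confident enough to act")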

Not a dashboard.
A reasoning engine.

LensOS is the core intelligence layer of MeasureLens. It does not display your data. It evaluates it. It weighs competing sources of evidence, flags what is in conflict, and produces an answer you can act on. That distinction is the whole point.

LENSOS SYSTEM DESIGN

Signal normalization        IN DEVELOPMENT
Evidence tier scoring       IN DEVELOPMENT
Conflict resolution logic   IN DEVELOPMENT
Confidence computation      IN DEVELOPMENT
Decision synthesis          IN DEVELOPMENT
STEP 01: Bring every signal together

Ingest

The starting point is unifying sources that were never designed to talk to one another. Platform exports, model outputs, and experiment results all use different naming conventions, different time windows, and different definitions of a conversion. The work begins with getting them into a common language so they can actually be compared.

COMPONENTS

Meta · Google Ads · GA4 · MMM outputs · MTA feeds · Experiment results
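
A minimal sketch of what that common language could look like: one normalized record type plus a per-source adapter. The schema fields and the Meta column names below are simplified assumptions for illustration, not a real connector.

from dataclasses import dataclass
from datetime import date

@dataclass
class Signal:
    """Common schema every source is normalized into before comparison."""
    source: str         # e.g. "meta", "ga4", "mmm"
    window: tuple       # (start, end) reporting window
    conversions: int    # under this source's own definition of a conversion
    revenue: float
    spend: float

def from_meta_export(row: dict) -> Signal:
    """Adapter: one row of a Meta export -> common schema (illustrative mapping)."""
    return Signal(
        source="meta",
        window=(date.fromisoformat(row["date_start"]),
                date.fromisoformat(row["date_stop"])),
        conversions=int(row["purchases"]),
        revenue=float(row["purchase_value"]),
        spend=float(row["spend"]),
    )

Once every source lands in the same Signal shape, windows and conversion counts can be compared like for like.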

How we think about the problem

Most attribution tools add more data to an already confusing picture. We are building something different. Something that evaluates the quality of the data you already have, and tells you which parts of it to trust.

01

Not all evidence is equal

Foundation

A geo holdout experiment is more reliable than a platform click model. A well-structured MMM is more trustworthy than GA4 last-click attribution. Getting to a real answer starts with acknowledging that some data deserves more weight than others. Most tools treat every source the same. We do not.

Evidence Tiers · Methodology Assessment · Source Weighting · Causal vs Correlational
02

Conflicts need to be surfaced, not averaged

Principle

When Meta reports 4.0× ROAS and your MMM reports 2.0×, averaging them gives you 3.0×. That is not a reconciliation. That is hiding the problem. The real work is understanding why those two sources disagree and which one has the stronger methodological foundation.

Conflict Detection · Delta Analysis · Source Agreement Map · Dispute Flags
03

Confidence matters as much as the number

Approach

A ROAS figure without a confidence range is precision theater. We are building a Measurement Confidence Score that tells you not just what the answer is, but how certain you should be about it. A confident 2.4× is far more useful than an uncertain 4.0×.

Measurement Confidence Score · Confidence Bands · Uncertainty Range · Recency Weighting
04

The goal is a decision, not a chart

Vision

Marketing leaders do not need more dashboards. They need to know what to do next. Should you scale this channel? Cut that one? Run a holdout before committing? That is what we are building toward. Measurement that ends with a clear recommendation, not another tab to interpret.

Recommended Actions · Scale / Cut / Test · Experiment Design · Decision Rationale

Not a chart.
A decision.

Every channel gets a true iROAS, a confidence score, and a clear action. Scale. Cut. Test. No interpretation required.
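
One way such a rule could be expressed, sketched here with invented thresholds (the real decision logic is still in development):

def recommend(true_iroas: float, mcs: int, breakeven: float = 1.0) -> str:
    """Map (true iROAS, confidence) to one of the actions above."""
    if mcs < 60:
        return "TEST: confidence too low to act; run a holdout first"
    if true_iroas < breakeven:
        return "CUT: evidence does not support current spend"
    return "SCALE" if true_iroas >= 2 * breakeven else "HOLD"

print(recommend(true_iroas=5.1, mcs=84))  # Google Search (Brand) row below -> SCALE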

38%

Avg. platform overreport

2.6×

True blended iROAS

$24K

Budget to reallocate

Channel                        Platform ROAS   True iROAS   MCS   Flag        Recommendation
Meta (Facebook + Instagram)    4.0×            1.9×         62    DISPUTED    Reduce spend by 25%
Google Search (Brand)          6.2×            5.1×         84    CONFIRMED   Scale +15% (strong geo holdout evidence)
Google Search (Non-brand)      3.5×            2.7×         71    PARTIAL     Hold. Run holdout test before scaling.
YouTube / Display              2.1×            0.9×         38    DISPUTED    Cut. Evidence does not support current spend.

What evidence would change this?

Every recommendation includes the experiment brief needed to upgrade confidence. LensOS tells you exactly what data would move the MCS from MODERATE to HIGH.
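
A sketch of how such a brief could be derived from the tier ladder described earlier; the mapping below is illustrative, not the actual LensOS brief generator.

def experiment_brief(channel: str, current_tier: int) -> str:
    """Suggest the evidence upgrade that would raise a channel's tier (and MCS)."""
    upgrades = {
        4: "Run a geo holdout (T1) or calibrate an MMM (T2) for this channel.",
        3: "Calibrate the MMM/MTA against at least one experiment (T2).",
        2: "Run an RCT or geo holdout (T1) to confirm the calibrated MMM.",
    }
    return f"{channel}: {upgrades.get(current_tier, 'Already at T1; maintain cadence.')}"

print(experiment_brief("Google Search (Non-brand)", current_tier=3))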

See a live example

Every existing tool solves half the problem.

Dashboards show you data. MTA models guess at causality. MMM gets you closer, but takes months. None of them reconcile conflicts or tell you what to do.

                                            Our product
                                            MeasureLens   Dashboards   MTA Tools   MMM
Reconciles conflicting sources              ✓                                      Partial
Confidence scoring per channel              ✓
Decision recommendation (scale/cut/test)    ✓
Incrementality-aware                        ✓                          Partial     ✓
Works without a data science team           ✓
Time to first insight                       < 1 hr        Real-time    Days        4–8 weeks
Flags attribution overlap                   ✓                          Partial
Explains source disagreements               ✓

Dashboards

Show data, don't interpret it

Looker, Tableau, and GA4 are great at surfacing numbers. They're not built to tell you which number is right, why sources conflict, or what to do next.

MTA Tools

Fragile models, no causal grounding

Multi-touch attribution redistributes credit across touchpoints — but it's still based on observed correlation. It doesn't measure what would have happened without the ad.

MMM

Powerful but slow and expensive

Marketing Mix Models are the gold standard for causality, but a full engagement takes 6–12 weeks, costs $50K+, and goes stale quickly. Not viable for monthly decisions.

Built for teams who've outgrown dashboards.

You're spending $10K–$500K a month on ads and making decisions without confidence. You know your numbers are wrong; you just don't know by how much.

DTC brand, $50K–$200K/month ad spend

Growth Marketer

Stuck choosing between Meta's ROAS report and the GA4 numbers that don't match. Making budget decisions by gut because nothing agrees.

  • Unified ROAS per channel
  • Clear scale / cut guidance
  • No analytics team needed

Series A–C startup, $200K–$500K/month

VP / Director of Marketing

CFO wants real attribution numbers before approving Q3 budget increase. Can't go to the board with four conflicting data sources.

  • Board-ready confidence scores
  • Source disagreement breakdown
  • Defensible budget rationale

Agency or in-house, $10K–$100K/month

Performance Lead / Media Buyer

Clients ask why Google says 3.5× and Meta says 4.0×. The answer 'attribution window differences' isn't good enough anymore.

  • Per-channel true iROAS
  • Client-ready reports
  • Incrementality test briefs

MeasureLens is probably right for you if you've ever said or done any of these:

You've said 'it depends which platform you look at'

You've averaged ROAS numbers to get a 'blended' view

You cut a channel and couldn't tell if revenue actually dropped

You scaled Meta because the dashboard said 4× but CAC didn't improve

You've wanted an MMM but couldn't justify the cost or wait time

Your data team is too busy to run incrementality tests

Early Access Open

This problem is
worth solving.

We are building MeasureLens for marketing leaders who are tired of staring at conflicting numbers and not knowing which one to trust.

The product is in active development. We are looking for early conversations with performance marketers, heads of growth, and marketing executives who feel this problem firsthand and want to shape what we build.

No sales pitch. No demo environment. Just an honest conversation about the problem.

Designed to work with the data your team already has

Meta Ads · Google Ads · GA4 · MMM Output · MTA Feeds · Incrementality Experiments