Performance 27 April 2026

What Is Marketing Attribution — and Why Is Everyone Getting It Wrong


Marketing attribution assigns credit for a conversion to the marketing touchpoints that preceded it. The goal is to understand which channels and activities drive commercial outcomes — and where budget should go as a result.

Most teams get it wrong not because they chose the wrong model, but because they treat attribution as a measurement of what advertising is causing when it is actually a measurement of what advertising is touching. Gartner found fewer than one in three CMOs are confident their attribution is accurate (Gartner, 2025: https://www.gartner.com/). The confidence problem is not a model problem: the models are doing exactly what they were designed to do. What they were designed to do is simply not what teams think they are measuring.

The Most Common Marketing Attribution Models

Six attribution models are in regular use. Each assigns credit differently and each produces a different picture of which channels appear to be working.

Last-click assigns 100% of credit to the final touchpoint before conversion. Simple, clean, and systematically misleading: it ignores everything that shaped the decision before the final moment.

First-click assigns 100% of credit to the first touchpoint. Equally distorting in the opposite direction: it ignores everything that closed the decision.

Linear distributes equal credit across all tracked touchpoints. More inclusive, but statistically naive. Not all touchpoints contribute equally; assuming they do smooths over the information that matters.

Time-decay gives more credit to touchpoints closer to conversion. In practice, it produces results similar to last-click, since the touchpoints with the heaviest decay-weighting are the same bottom-funnel channels last-click already over-credits.

Position-based assigns 40% each to the first and last touchpoints, with 20% distributed across the middle. Acknowledges that initiation and closing both matter, but the 40/40/20 split is an assumption, not a measurement.

Data-driven uses machine learning to distribute credit based on historical conversion path data. More sophisticated in design and opaque in execution — and typically built by the platforms being measured, which creates structural incentives to favour their own inventory in credit distribution.
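The rule-based models above can all be expressed as simple credit-allocation functions over an ordered touchpoint path. A minimal sketch (channel names and the time-decay half-life are hypothetical; real implementations work over tracked conversion paths):

```python
# Sketch of rule-based attribution: each function maps an ordered list of
# touchpoints to a {touchpoint_index: credit_share} dict summing to 1.0.

def last_click(path):
    return {len(path) - 1: 1.0}

def first_click(path):
    return {0: 1.0}

def linear(path):
    share = 1.0 / len(path)
    return {i: share for i in range(len(path))}

def time_decay(path, days_before_conversion, half_life=7.0):
    # Weight halves for every `half_life` days between touch and conversion.
    weights = [2 ** (-d / half_life) for d in days_before_conversion]
    total = sum(weights)
    return {i: w / total for i, w in enumerate(weights)}

def position_based(path):
    # 40% to first, 40% to last, 20% spread across the middle touchpoints.
    n = len(path)
    if n == 1:
        return {0: 1.0}
    if n == 2:
        return {0: 0.5, 1: 0.5}
    credit = {0: 0.4, n - 1: 0.4}
    middle_share = 0.2 / (n - 2)
    for i in range(1, n - 1):
        credit[i] = middle_share
    return credit

path = ["brand_video", "social_post", "paid_search"]
print(position_based(path))  # {0: 0.4, 2: 0.4, 1: 0.2}
```

Note that every function here encodes an assumption about contribution rather than a measurement of it, which is the point the sections below develop.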

Why Last-Click Attribution Systematically Misleads Budget Decisions

Last-click creates a specific and predictable distortion: it redirects budget toward the channels that appear at the end of the buying journey, regardless of whether those channels caused the journey to happen.

Paid search typically receives the most credit. A buyer who saw a brand video three months ago, engaged with a social post two weeks ago, and then searched for the brand name and clicked a paid search ad is recorded as a paid search conversion. The three months of brand work that preceded the search is invisible to the model.

Over time, last-click drives budget away from brand and upper-funnel channels and toward paid search and retargeting. Conversion volume from those bottom-funnel channels holds initially, because the brand investment generating warm audiences is still producing its effects in the short term. Months later, the warm audience pool thins. Performance deteriorates. The connection to the earlier budget decision is no longer visible in the data. By the time the damage registers in revenue, the attribution model has already moved on to crediting whatever the new mix looks like.

How Multi-Touch Attribution Works — and Where It Falls Short

Multi-touch attribution models distribute credit across multiple touchpoints rather than assigning it all to one. More accurate than last-click in principle. Three persistent limitations prevent it from being accurate in practice.

First, MTA models only credit touchpoints they can observe. Any brand impression served without a click — a video view, a display ad, a podcast mention — is invisible to most MTA implementations. The exposure happened; the model did not track it; it receives no credit. The channels most likely to generate awareness and build brand associations are disproportionately affected by this gap.

Second, the credit distribution in rule-based MTA is still assumed rather than evidenced. Assigning 20% to each of five touchpoints implies equal contribution. It does not measure it. The assumption may be more defensible than 100% to the last touchpoint, but it is still an assumption.

Third, privacy restrictions are progressively reducing trackable signal. With third-party cookie deprecation, iOS privacy changes, and expanding consent requirements, the observable portion of the customer journey is shrinking. MTA models built on increasingly incomplete data are producing increasingly unreliable outputs, even as teams continue to report on them with the same confidence.

What Is Data-Driven Attribution and Is It More Reliable

Data-driven attribution uses machine learning to analyse historical conversion paths and assign probabilistic credit to each touchpoint based on its statistical association with conversion outcomes. It is more accurate than rule-based models when three conditions are met: sufficient conversion volume, tracked data that accurately represents the full customer journey, and regular model updates as channel mix and buyer behaviour evolve.

None of these conditions holds universally. Most businesses have insufficient volume in some channels for the model to produce reliable estimates. Privacy restrictions mean the tracked journey is incomplete, and the incompleteness is systematic rather than random, which biases the model rather than just adding noise. And many businesses using platform-native data-driven attribution are using models built by the platforms being measured — which creates an incentive structure that is at minimum a conflict of interest and at worst a systematic bias in how credit is allocated.
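One way to see what a data-driven model does differently from rule-based credit splitting is a simplified removal effect, the idea behind Markov-chain attribution: a channel's contribution is estimated by how much total conversion volume would disappear if every path containing that channel were removed. A toy sketch with invented path data:

```python
# Toy "removal effect": a channel's credit is proportional to the share of
# conversions lost when every path containing it is removed.
# The (path, conversions) data below is invented for illustration.
paths = [
    (("video", "search"), 60),
    (("social", "search"), 30),
    (("search",), 10),
]

total = sum(conversions for _, conversions in paths)
channels = {ch for path, _ in paths for ch in path}

removal = {}
for ch in channels:
    remaining = sum(c for path, c in paths if ch not in path)
    removal[ch] = (total - remaining) / total  # share of conversions lost

norm = sum(removal.values())
credit = {ch: r / norm for ch, r in removal.items()}
```

In this toy data, "search" appears in every path, so its removal effect is 1.0 and it still collects the largest credit share, which illustrates why path-based models can keep over-crediting well-tracked bottom-funnel channels when upper-funnel impressions are missing from the paths in the first place.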

How a Growth-Stage Brand Should Approach Attribution Without Enterprise Tools

Triangulation. No single model produces the correct answer. A combination of approaches produces a better one.

Use platform attribution for within-channel optimisation. It is good enough for informing bid strategy and creative decisions inside a single channel. Do not use it to make cross-channel allocation decisions; that is where its limitations cause the most damage.

Use revenue data as the control variable. If platform-reported ROAS and conversion volume do not correlate with actual revenue movements over the same period, something in the attribution chain is wrong. This mismatch — healthy platform metrics alongside flat or declining revenue — is one of the clearest signals that attribution has drifted from commercial reality.

For major channel decisions, use a short-term budget pause test. Pausing a channel for four to six weeks and observing the impact on total revenue provides directional evidence of incrementality that no attribution model delivers. Imperfect, yes — pauses affect buying behaviour in ways that make controlled experiments difficult. But directionally, it produces more actionable information than any rule-based model used in isolation.
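The arithmetic behind a pause test is deliberately simple. A sketch with hypothetical figures; a real test should control for seasonality and use matched time windows:

```python
# Directional incrementality from a pause test: compare revenue during the
# pause window to a pre-pause baseline, then compare the observed drop to
# what attribution had credited to the channel. All figures hypothetical.

baseline_weekly_revenue = 100_000   # average weekly revenue before the pause
pause_weekly_revenue = 88_000       # average weekly revenue during the pause
channel_attributed_revenue = 25_000 # weekly revenue the model credited to it

observed_drop = baseline_weekly_revenue - pause_weekly_revenue  # 12,000

# If the channel were fully incremental, the drop would roughly match its
# attributed revenue. The ratio is a crude incrementality estimate.
incrementality = observed_drop / channel_attributed_revenue  # 0.48
```

A ratio well below 1.0, as in this invented example, suggests the model is crediting the channel with conversions that would largely have happened anyway.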

What Good Attribution Looks Like When Perfect Data Does Not Exist

Good attribution is not perfect attribution. It is attribution that produces consistently better budget decisions than the alternative.

The standard to aim for: attribution models that are transparent about their assumptions, regularly calibrated against actual revenue outcomes, and supplemented by periodic incrementality testing for the channels carrying the most strategic weight. Attribution is a planning tool. Its value is directional — helping teams understand approximately where budget is producing outcomes and where it is not.

The mistake to avoid: treating any attribution model's output as ground truth. The moment teams stop interrogating the model and start defending its outputs, attribution has stopped being useful and started justifying whatever the current budget allocation happens to be. At that point, the model is not informing decisions. It is providing rationalisation for them.

The Attribution Problem That Cannot Be Solved with a Better Model

This article opened with the distinction between measuring what advertising is touching and measuring what advertising is causing. Every attribution model discussed here measures touching. Causality requires a different approach: holdout experiments, geo-based lift testing, and media mix modelling that accounts for baseline organic conversion rates.

Most teams do not run these experiments, either because they lack the traffic volume to produce statistically significant results or because the commitment to pausing spend for testing is difficult to justify internally. The result is an industry that reports attribution data with high confidence and low accuracy — which may be worse than reporting with acknowledged uncertainty, because confident but inaccurate signals produce confident but wrong decisions.

The right posture toward attribution is informed scepticism: use the available models, understand what each one cannot see, and treat the outputs as directional inputs to decisions rather than definitive answers. The teams that compound performance over time are not the ones with the most sophisticated attribution setup. They are the ones most honest about what their measurement can and cannot tell them.

If you are questioning whether your current attribution setup is producing reliable signals, or want help building a measurement approach that calibrates platform data against revenue reality, Kaliber works with brands on this. Reach out at kaliber.asia/contact.

Frequently Asked Questions

What is marketing attribution?

Marketing attribution is the process of assigning credit for a conversion to the marketing touchpoints that preceded it. Attribution models range from simple (last-click assigns all credit to the final interaction) to complex (data-driven models use machine learning to distribute credit probabilistically). All models are tools for understanding approximately where marketing is producing outcomes, not instruments for measuring precise causality.

What is the best marketing attribution model?

There is no universally best model. Last-click is operationally simple but systematically over-credits bottom-funnel channels. Data-driven is more sophisticated but requires sufficient data volume and is built by platforms with conflicting incentives. For most businesses, the most reliable approach is triangulation: using platform attribution for within-channel decisions, revenue data as a calibration check, and periodic holdout testing for major channel decisions. No model should be treated as ground truth.

Why does last-click attribution cause problems?

Last-click attributes 100% of conversion credit to the final touchpoint, systematically ignoring all earlier brand interactions that built the awareness and intent the final channel harvested. Over time, this drives budget toward paid search and retargeting and away from the brand activity generating warm audiences. The performance of those bottom-funnel channels holds until the warm audience pool depletes, at which point attribution has already obscured the connection between the earlier budget decision and the current performance decline.

What is incremental attribution and why does it matter?

Incremental attribution attempts to measure which conversions were caused by advertising, rather than which conversions happened to be touched by advertising. The distinction is critical: standard attribution counts all conversions in a channel's attribution window, including those that would have occurred organically. Incremental measurement, derived from holdout experiments or geo-based lift tests, isolates the advertising effect. The gap between attributed and incremental conversions is frequently large, suggesting that a significant portion of attributed conversions would have occurred without the ad.
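The attributed-versus-incremental gap described above falls out of a simple holdout comparison. A sketch with invented numbers:

```python
# Holdout lift: conversions caused by ads = test-group conversions minus what
# the unexposed control group implies would have happened anyway.
# All numbers are invented for illustration.

test_users, test_conversions = 100_000, 2_000        # exposed to ads
control_users, control_conversions = 100_000, 1_400  # held out

baseline_rate = control_conversions / control_users  # 1.4% organic rate
expected_organic = test_users * baseline_rate        # 1,400 conversions
incremental = test_conversions - expected_organic    # 600 caused by ads

attributed = 2_000  # what last-click would credit to the channel
print(f"Attributed: {attributed}, incremental: {incremental:.0f}")
# → Attributed: 2000, incremental: 600
```

In this invented example, 70% of attributed conversions would have occurred without the ad, which is the kind of gap the answer above refers to.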

How do privacy changes affect marketing attribution?

Privacy restrictions, including iOS consent changes, third-party cookie deprecation, and expanding consent requirements across markets, are progressively reducing the trackable signal that multi-touch attribution models depend on. The observable portion of the customer journey is shrinking, and the reduction is systematic rather than random: brand and upper-funnel touchpoints, which often generate impressions without clicks, are disproportionately invisible to tracking. Attribution models built on increasingly incomplete data produce increasingly unreliable outputs, even when confidence in those outputs remains unchanged.
