Sean de Hoon, Head of Enablement at Annalect Netherlands, and Nicolas Arrive, Measurement Lead at Meta, discuss maintaining rigour in measuring advertising effectiveness while moving on from cookies and device IDs, and the importance of experiments.

Cookie deprecation was initially planned for 2022, but in mid-2021 Google announced a delay. 2023 felt far away, and advertisers went about their business, focusing on one-to-one targeting and measurement. Google’s latest announcement of Topics as the alternative to cookies is a stark reminder that marketers need to adapt, and adapt fast. The ad ecosystem is going through a major transformation, driven by government oversight, platform restrictions such as Apple’s ITP and the aforementioned cookie deprecation, all of which means that long-standing advertising mechanisms will change.

Legacy attribution techniques

Legacy attribution solutions can no longer rely on cookies and device IDs to stitch the entire digital consumer journey together. There were gaps before, but those gaps are now widening: cookies are being deprecated, and privacy-focused regulations are empowering consumers to control their own data.

Even before these changes, attribution techniques could not account for complex, non-linear advertising effects. It is difficult to untangle true paid media effects from baseline consumer behaviour. Correlation (a purchase follows ad exposure) does not equal causation (a purchase happens because of ad exposure). How many people targeted by a campaign would have purchased regardless of the ad? Attribution cannot answer this question, but experiments can. Experiments are based on randomised controlled trials and require a control group; the control is often a holdout group that is not exposed to the ads. “Advertisers leave value on the table by underusing experiments to understand the incremental value of marketing”, claimed Julian Runge in HBR in 2020. True. However, not all clients can run experiments on all channels at any point in time.
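To make the contrast between attribution and experimentation concrete, here is a minimal sketch of how incrementality is read out of a holdout experiment. All figures are made up for illustration.

```python
# Illustrative only: reading incremental lift out of a simple holdout test.
# The figures below are hypothetical.

test_users, test_conversions = 100_000, 2_300        # exposed to ads
control_users, control_conversions = 100_000, 2_000  # holdout, no ads

cr_test = test_conversions / test_users
cr_control = control_conversions / control_users

# Conversions the ads actually caused, beyond baseline consumer behaviour.
incremental_conversions = (cr_test - cr_control) * test_users
lift = (cr_test - cr_control) / cr_control

print(f"Incremental conversions: {incremental_conversions:.0f}")
print(f"Relative lift: {lift:.1%}")
# A last-touch attribution view would instead credit all 2,300 test-group
# conversions to the campaign, overstating its true effect.
```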

The future of measurement at Annalect Netherlands is Marketing Mix Modelling (MMM), which examines how a large number of variables impact sales. The goal is to understand which marketing channel or practice drives incremental value. Data is aggregated and the solution is privacy-safe, but it used to be slow and costly. MMM is no longer a technique reserved for large, established teams at CPG and auto brands. Cloud-based technologies, machine learning techniques and open-source code like Meta's Project Robyn make MMM more widely available and faster than before. Still, experimentation cannot be beaten when it comes to showing causality. So how do we incorporate that strength?
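As an illustration of the underlying mechanics (this is not Robyn's actual API), a bare-bones MMM can be sketched as a regression of sales on adstock-transformed media spend plus baseline variables. Column names, decay rates and the input file are hypothetical.

```python
# Bare-bones MMM sketch: regress weekly sales on adstock-transformed media
# spend and a baseline variable. All inputs are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

def adstock(spend, decay=0.5):
    """Carry over a share of last week's advertising effect (geometric adstock)."""
    out = np.zeros_like(spend, dtype=float)
    for t, x in enumerate(spend):
        out[t] = x + (decay * out[t - 1] if t > 0 else 0.0)
    return out

df = pd.read_csv("weekly_marketing_data.csv")  # hypothetical input file
X = pd.DataFrame({
    "tv": adstock(df["tv_spend"].values, decay=0.6),
    "social": adstock(df["social_spend"].values, decay=0.3),
    "search": adstock(df["search_spend"].values, decay=0.1),
    "promo": df["promo_flag"].values,
})
model = LinearRegression().fit(X, df["sales"])

# Channel contribution: coefficient times transformed spend, summed over time.
contributions = {c: model.coef_[i] * X[c].sum() for i, c in enumerate(X.columns)}
print(contributions)
```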

We partnered with Meta and developed state-of-the-art automated geo-testing capabilities. In 2019, we used the Meta Marketing API to build a geo-test solution that switched Meta advertising on and off for McDonald’s. We used the aggregated output to validate our MMM, grounding the modelled contribution of Meta’s advertising in the causal contribution identified in the experiment. More recently, we have used geo-lift experiments as a prior in Bayesian MMM in a Meta-funded analytic research paper. This is considered an advanced form of MMM calibration, as the experimental information is not just used to validate the model but to adjust its results.
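A minimal sketch of that idea, using the PyMC library and hypothetical numbers: the result of a geo-lift experiment, expressed as incremental return per unit of spend, becomes an informative prior on the Meta channel coefficient instead of a flat one.

```python
# Sketch: a geo-lift experiment result as an informative prior in a Bayesian MMM.
# Data, priors and variable names are hypothetical; the structure is the point.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_weeks = 104
meta_spend = rng.gamma(5, 1000, n_weeks)   # hypothetical weekly spend
other = rng.normal(0, 1, n_weeks)          # stand-in for other sales drivers
sales = 50_000 + 1.8 * meta_spend + 2_000 * other + rng.normal(0, 3_000, n_weeks)

# Geo-lift result: incremental sales per unit of spend, with its uncertainty.
geo_lift_roi_mean, geo_lift_roi_sd = 1.6, 0.4

with pm.Model() as mmm:
    intercept = pm.Normal("intercept", mu=50_000, sigma=20_000)
    # The experiment anchors this coefficient rather than leaving it flat.
    beta_meta = pm.Normal("beta_meta", mu=geo_lift_roi_mean, sigma=geo_lift_roi_sd)
    beta_other = pm.Normal("beta_other", mu=0, sigma=5_000)
    sigma = pm.HalfNormal("sigma", sigma=5_000)
    mu = intercept + beta_meta * meta_spend + beta_other * other
    pm.Normal("sales", mu=mu, sigma=sigma, observed=sales)
    idata = pm.sample(1000, tune=1000, chains=2)

print(idata.posterior["beta_meta"].mean().item())
```

The tighter the experimental estimate, the more strongly it pulls the model's posterior for that channel towards the causally measured effect.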

Calibration can also be used for multi-touch attribution, as we recently showed for Renault. We worked with Google’s Ads Data Hub (ADH), which enables customised analysis such as multi-touch attribution while respecting user privacy, but is limited to Google’s walled garden. Renault's digital activity was subject to experiments across Meta and Google. A calibration model was built by taking the model built in ADH and adjusting its results to anchor the business outcome to these experiments: the output of the experiments is used as the ground truth, and we adjusted the legacy model to reflect the contribution of each channel (see here).
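The mechanics can be sketched as follows; channel names and figures are hypothetical, and this is a simplification of the actual calibration model.

```python
# Sketch of the calibration idea: derive per-channel factors from a period in
# which both the attribution model and lift experiments ran, then apply them
# to the ongoing attribution output. All figures are hypothetical.

# Experiment window: model-attributed vs experiment-measured incremental conversions.
attributed_in_test = {"meta": 5_000, "google_search": 8_000, "google_video": 3_000}
incremental_in_test = {"meta": 3_500, "google_search": 7_200, "google_video": 2_100}

factors = {ch: incremental_in_test[ch] / attributed_in_test[ch]
           for ch in attributed_in_test}

# Later reporting period: adjust the legacy model's output with those factors.
attributed_now = {"meta": 6_200, "google_search": 7_500, "google_video": 2_800}
calibrated_now = {ch: attributed_now[ch] * factors[ch] for ch in attributed_now}

for ch, value in calibrated_now.items():
    print(f"{ch}: factor={factors[ch]:.2f}, calibrated conversions={value:.0f}")
```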

Where do we go from here?

MMM is developing into a more agile method that can help marketers on an ongoing basis. Experiments can play two roles in MMM going forward. They can become a yardstick that shows the approximate impact of a certain channel or tactic; this yardstick would be relatively static and refreshed at certain intervals. Experiments could also be used in a more always-on fashion, with new results constantly fed into the modelling and visualisations. This would require infrastructure changes on the part of Meta, Google and other publishers to run and report on experiments programmatically. Our bet is that both roles will be important moving forward, depending on how advertisers choose to run their MMM analyses and experiments.