More and more providers are offering automated marketing mix modelling (MMM) systems that use AI/machine learning as cookie-based attribution declines. Here, 17 experts publish an open letter warning that many such systems are not nuanced enough to provide effective measurement.

Dear friends, 

We evaluation experts care deeply about you getting the best possible response from your advertising. 

We’re in the MMM/econometrics business because we love crunching the numbers and solving a puzzle. But we’re also here because we want to see change in the real world, in the form of growth in your businesses. 

It’s in this spirit that we’re writing to you. To warn you. Because not all analysis that looks solid from the outside is equally good. 

The thing is, your businesses are unique and multi-faceted organisations, operating imperfectly in contexts no other firm has experienced before. And good evaluation, the kind that can genuinely untangle the sales driven by ads, has to account for that complexity properly.

Equally, and we hope you don’t mind us saying this, even though you are brilliant at a lot of things, you aren’t always that good with data. You sometimes record it in messy ways and you don’t always know for sure what each number means or how well it’s measured. 

All this means there is no single approach to evaluation that works in every circumstance. The ways that your world can be different or your data can be mucky or missing are too numerous and curious and complex to handle with “if this then that” code. 

Even AI can’t identify what isn’t in the dataset it’s looking at. And it can’t talk to your team about the time you misprinted the barcodes and then invent a way to use your data to capture the effect. Or find out that you ran a large radio campaign independently and didn’t tell anyone.

Don’t get us wrong. We love a bit of code as much as the next person, and probably more. We all use it to automate data collection and prep, and we automate producing standard outputs too. 

But there are some bits that can’t be automated. Making sure models really reflect your business, getting the nuances of what happened in the past straight, and working with your people to get findings about big expenditures acted on are all things that require real-life human beings.

We’re sorry we didn’t flag this with you enough when last-click attribution was new. It wasn’t because we didn’t care; it was that every time we tried to speak up, we were accused of being dinosaurs or Luddites. We’re gutted about how much you spent serving ads to people who were already on their way to you.

So that’s why we wanted to write to you now. Because there’s danger on the horizon again. 

With cookies disappearing, platform-based, automated versions of MMM are coming to market to solve the problems with attribution. 

We’ve looked at the algorithms and we have to tell you, they’re much simpler than they need to be. With MMM, every time you leave out something that matters, the number you get for the effect of advertising is wrong (statisticians call this omitted variable bias), and there are models out there that don’t even include COVID-19 or price.
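To make that concrete, here is a toy sketch of what omitting price does. Everything below is simulated and every number is invented for illustration; it is not anyone’s production model. The set-up: you tend to cut price when campaigns run, so a model without price credits the discount’s effect to advertising.

```python
# Toy illustration of omitted variable bias in an MMM-style regression.
# All values are invented for this sketch; nothing here is a real model.
import numpy as np

rng = np.random.default_rng(42)
n_weeks = 156  # roughly three years of weekly data

ad_spend = rng.uniform(0, 100, n_weeks)
# Price tends to drop when campaigns run, so it moves with ad spend.
price = 12 - 0.03 * ad_spend + rng.normal(0, 0.5, n_weeks)
# The "true" world: ads add 2 sales per unit spend; price cuts 20 per unit.
sales = 500 + 2.0 * ad_spend - 20.0 * price + rng.normal(0, 10, n_weeks)

def ols(columns, y):
    """Ordinary least squares with an intercept, via numpy's lstsq."""
    X = np.column_stack([np.ones(len(y)), *columns])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

with_price = ols([ad_spend, price], sales)
without_price = ols([ad_spend], sales)

print(f"ad effect with price in the model: {with_price[1]:.2f}")    # near 2.0
print(f"ad effect with price omitted:      {without_price[1]:.2f}")  # ~2.6, inflated
```

In this made-up world the short model overstates the effect of advertising by around 30%, and the same logic applies to any omitted driver: seasonality, the economy, COVID-19.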

We keep getting people who have invested in these platforms coming to us saying that the numbers make no sense, that no-one believes them, that the 9-12 months it took to get set up were wasted.

Now, we’re not that vocal a bunch usually, and we don’t often offer advice unless we’re asked for it. But if you want to know what we suggest you do, it’s shop for MMM like you’re buying a new kitchen. Ask around for recommendations, get at least 3 quotes, and don’t be too trusting. 

Ask these questions and look to see if the person you’re talking to squirms before answering yes:

  • Does the model include factors like price, the economy, seasonality and COVID-19? Will you report on how these things affect our business?
  • Does the model cover at least 2 years of data, preferably 3?
  • Do you measure how upper-funnel ads like TV & YouTube affect outcomes in lower-funnel ads like PPC?
  • Will you share advertising response curves with our media planners? (There’s a sketch of what we mean just after this list.)
  • What would happen if results came back and we didn’t believe them, say because they didn’t line up with something else we knew?
  • Will you be able to explain the model to our Finance people? Will your numbers line up with theirs? 
  • Could our analysts who understand regression look under the bonnet at the model and all the tests and statistical due diligence?
  • Can you demonstrate to them that your models are good at forecasting?
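On response curves: here is a rough, illustrative sketch of the idea, a saturating relationship between spend and sales lift. The Hill-type shape and every parameter value below are assumptions we’ve invented for illustration, not output from any real model.

```python
# Illustrative advertising response curve: lift saturates as spend grows.
# The Hill-type shape and all parameter values are invented for this sketch.

def response_curve(spend, max_lift=1000.0, half_sat=50.0, shape=1.5):
    """Sales lift that rises with spend but flattens out at high spend."""
    return max_lift * spend**shape / (half_sat**shape + spend**shape)

for spend in (10, 25, 50, 100, 200):
    lift = response_curve(spend)
    marginal = response_curve(spend + 1) - lift  # lift from one more unit of spend
    print(f"spend {spend:>3}: lift {lift:7.1f}, next unit adds {marginal:5.2f}")
```

Curves like this are what let your planners see where the next pound of spend works hardest, which is why it matters that a provider will actually share them.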

We’ll be rooting for you. 

Signed: 

Les Binet (Adam & Eve DDB)

Grace Kite (Magic Numbers)

Louise Cook (Holmes and Cook)

Mike Cross (Measure Monks)

Andrew Deykin (D2D)

Matt Andrew (Ekimetrics)

Neil Charles (ITV)

Sarah Stallwood (Magic Numbers)

Sara Jones (Pearl Metrics)

Jamie Gascoigne (Measure Monks)

Joy Talbot (Magic Numbers)

Simeon Duckworth (UCL)

Sally Dickerson (Benchmarketing)

Stuart Heppenstall (D2D)

Dominic Charles (Wavemaker)

Steve Hilton (Measure Monks)

Tim Fisher (Measure Monks)