Moët Hennessy’s André van Loon explores the side of measurement that too often goes ignored: what we don’t know.

In marketing, why do we measure? Are we justifying money spent and strategic choices made, or creating models and guardrails for the next effort? Do we turn to the data for validation, or for foresight? Do we take its conclusions as one-off findings or as predictive facts?

What is the point of the measurement task?

Now, I won’t make you wait until the end of the article for what might be obvious: it depends. The marketing world is not so simple, nor measurement so assured, that an either/or dichotomy can hold sway for long. Sometimes it’s either/or; sometimes it’s both/and.

Not everything that can be justified (good use of media budget; strong audience targeting; solid conversion rates/sales) can necessarily be codified; nor is every predictive model successful without regular review.

Both validation and prediction rely upon established facts. And it is here that I think a much larger question comes in - and one that is often ignored. 

When we measure and evaluate effectiveness, we address ourselves to the data. We witness movements over time, spikes and troughs; we compare media flighting or other planned activity (alongside seasonality, competitor activity, market events etc.) with observable effects.

Whether you turn to a causal, correlational, or any other mapping method is a follow-up question; the first point is that there is at least something to be analysed.

But what about the things we know that we don’t know?

It is my contention that when you analyse the available data, and make an effectiveness case from them, you largely ignore the things that did not happen. Not only do we tend not to analyse, in effectiveness studies, what is not there (in the sense of ‘given’, observable); we mostly don’t even ask ourselves why certain aspects of marketing reality are observable, and others are not. And I believe we could learn an awful lot if we started doing so.

How far does the tide rise?

One of my favourite insights from existing creative effectiveness awards relates to the series of John Lewis Christmas campaigns. 

A vital observation from this work is: ‘when it works well, it all works well.’ More consumers are aware; more of these like, love and/or remember the campaign and the brand; there are larger cross-media effects, driven by emotion and fame; there is more media coverage and social media conversation; there are more searches for the nearest John Lewis store or for online shopping; consumers even search for the music used in the marketing and the songs trend on Spotify and so on.

A rising tide lifting all (effectiveness proof) boats. I buy into that, and I think the John Lewis case studies are required reading.

And yet. 

The John Lewis Christmas campaigns were national, essentially addressed to tens of millions of consumers. They were enormously effective, but for all the consumers who engaged and ‘did’ something (remembering, liking, loving, searching, talking, sharing, shopping, buying etc.), many more did nothing at all.

The point, of course, holds for smaller, more narrowly targeted marketing. Even the best marketing campaigns do not result in universally observable effects.

Understanding inertia

I find this a fascinating area for effectiveness measurement. It is fascinating to try to understand why X% of an audience responded, and Y% did not. Was it indifference? Lack of visibility or attention? Lack of emotional or rational connection? Unknown brand associations? Something else?

There are many efforts in marketing research to understand what it takes to shift consumers - to get them to react in some observable way that can then be measured and evaluated.

But within the specific sphere of effectiveness case studies, just as in effectiveness measurement at agencies or in-house, there is little evidence of a turn from proving what happened to understanding, and then researching, what did not happen.

Success is celebrated; silence is ignored

And so I would point out that, whether you want to justify what happened by established facts, or use such facts to build predictive models, there needs to be a stronger awareness that these facts - successful though they may be - are but a small part of the picture. 

A campaign could have been more successful than before, better than competitors’, and yet a mere shadow of what it could have been. There is naturally more potential and more that could have been done; no campaign targeting all category buyers, for example, will ever actually win over, or even be known by, all of them.

Relevant effectiveness questions tend not to be asked, as long as the campaign or activity exceeded expectations, or as long as the predictive model works. 

How could such questions be addressed? 

The problem, I think, lies in the acceptance of ‘given’ data (observable shifts in behaviour etc.) as all that it takes to start answering campaign effectiveness questions. If you take that as a problem, then solutions could start to be formulated around the ‘silent’ spaces. 

Within the task of campaign effectiveness measurement, here are a few concrete ways to hunt for answers:

  • Qualitative research into indifferent, unresponsive audiences; 
  • Sizing of the missed opportunities (see the sketch after this list); 
  • Evaluation of types of attention (split by audience type, and certainly by media channel/platform); 
  • Digital tracing of what consumers did do instead at the relevant times; 
  • A critical review of brand positioning (are you growing your success amongst tried-and-trusted audiences, but missing a trick with new audiences?).
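
To make the second item concrete, here is a minimal sketch of what ‘sizing the missed opportunity’ could look like. Everything in it is hypothetical - the numbers, the size_silent_space function name, the signal labels - and it assumes only that you have a reached-audience figure and de-duplicated responder counts per observable signal.

    # A minimal sketch, assuming hypothetical campaign numbers: size the 'silent
    # space' - the reached audience that showed no observable response at all.

    def size_silent_space(reached: int, responders_by_signal: dict[str, int]) -> dict:
        """Bound the unresponsive share of a reached audience.

        responders_by_signal holds de-duplicated counts per observable signal
        (awareness lift, engagement, purchase, ...). Because the same person can
        appear under several signals, the largest single signal gives a
        conservative lower bound on total responders, and the capped sum an
        optimistic upper bound.
        """
        lower_responders = max(responders_by_signal.values(), default=0)
        upper_responders = min(sum(responders_by_signal.values()), reached)
        return {
            "silent_people_min": reached - upper_responders,
            "silent_people_max": reached - lower_responders,
            "silent_share_min": round((reached - upper_responders) / reached, 3),
            "silent_share_max": round((reached - lower_responders) / reached, 3),
        }

    # Hypothetical national campaign: 40m people reached, three measured signals.
    print(size_silent_space(
        reached=40_000_000,
        responders_by_signal={
            "aware_lift": 6_000_000,
            "engaged": 1_500_000,
            "purchased": 400_000,
        },
    ))
    # Roughly 80-85% of the reached audience left no observable trace:
    # that is the 'silent space' worth researching further.

Even a crude bounding exercise like this turns ‘the campaign worked’ into a question: what were the other thirty-odd million people doing instead?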

To be clear, I am not talking about these tasks as general efforts to understand marketing, but as specific ways to measure a marketing campaign at hand. What I’m driving at is that even the most successful campaigns leave questions unanswered, in the way we typically talk about them: description of objectives, strategy, creative, media execution, in-flight performance, post-campaign/activity evaluation, results achieved. 

If, instead, we do that as usual and then go on to proactively probe the areas where the campaign/activity did not resonate - and why this was the case - we stand a chance of learning much more than usual.

In short: additional analysis of the ‘silent spaces’ (the parts the marketing apparently did not reach or move) can start to give the established facts the provocative richness of what could have been better - and what could be, next time.

That, at least, is something we should talk about when we talk about measurement.