Too often, effectiveness projects are invalidated when held to be imperfect: when the actual loses out to the ideal state, argues Moët Hennessy's Andre van Loon.

There’s a widespread misconception about what’s needed to make marketing effectiveness work, and work well. Let’s call it the ‘fallacy of perfect proof’. It’s likely you will have witnessed examples of it.

Any time a client or colleague says things like: “one of your datapoints is wrong, so I don’t know what to believe anymore” (all data is invalid); “the data isn’t clean enough, so we can’t carry on” (nothing can be done); or “we don’t have enough historical data to prove what we want” (nothing can be done at all), I would contend that this fallacy is, at least partly, at play.

This first aspect relates to data inputs.

A second aspect relates to interpretation outputs: the delivery of insights and recommendations. You might hear a recipient say: “well, last time we met you said we should do XYZ, and now you say we should do ABC; they can’t both be right” (the interpretation is too changeable); or “if we go by the data, what you recommend isn’t obvious” (the interpretation is too subjective).

I think both aspects can undermine the full potential of marketing effectiveness.

The enemy of the good

The fallacy of perfect proof occurs when there’s a belief that 100% accuracy (of inputs and/or outputs) is both achievable and desirable.

Each actual effectiveness project is weighed up against its ideal state: how it could be, if only we had clean, fully accessible data inputs and consistent interpretation outputs. 

Such a comparison between the actual and the ideal state might not always lead to a project’s termination, but it can create doubts about its validity, or a sense of frustration at its perceived messiness.

So the numbers may be held to tell only partial truths, or the recommendations based on them to be unrealistic.

To give an example of a data input concern: in my past experience, the absence of one Financial Times article featuring the client’s CEO cast doubt on a PR measurement programme consisting of thousands of news pieces. The omission was an error (a bad one), yet it shouldn’t have undermined the entire endeavour. But the client concluded that if one thing was wrong, everything could be wrong. The result was that most subsequent discussions were about how accurate the news monitoring was, rather than the strategic insights we were delivering. (I am not defending errors, but I am questioning the leap from one mistake to doubting everything.)

Partial interpretation

Turning to supposedly partial interpretation outputs, I imagine these are easy for you to call to mind. Suffice it to say that in my experience, complaints tend to arise when budgets or strategies point one way and the interpretation of past effectiveness points another.

My concern is that too often, effectiveness projects are invalidated when held to be imperfect: when the actual loses out to the ideal state.

But isn’t it laudable to aim for perfection? 

It depends on what you mean. It’s good to have clean, accessible data and rigorous interpretation; I think we can all agree on that. But I wouldn’t agree that imperfect data inputs and partial interpretation outputs are necessarily deplorable.

It’s too extreme to treat everything short of perfection as faulty.

If you spend more time talking about what you don’t have than about what can be done with what you do, I would say your focus is misplaced.

Living with imperfection

Marketing effectiveness projects should take account of their imperfections, settle on a workable approach, and then thrive on partial inputs and outputs.

Regarding inputs, it’s one thing to correct errors, quite another to believe that data can ever be fully complete or fully workable. Media spend data may only be available at a monthly level (rather than weekly or daily); consumer insight data may be ready for one market but not another; competitive benchmarking may cover only the last two years, not three or four.

In some cases, this could bring the effectiveness project to a halt, and that could be the right call. But at other times, it will be possible to draw up an approach that says: we can go ahead with 80% clean data (or 90%, or 75%, or x%, depending on your threshold).

Or the discussion can turn to: if we don’t have this data, what else can we look at? Can we fill the gap with something else? What can be established with what we’ve got, even if that is different from the original measurement brief?

A question of interpretation

Regarding interpretation outputs, I give most credence to evidence-based arguments, but I also think these should be allowed to incorporate speculation: “We can’t quite show it with the data as such, but we strongly believe XYZ is a likely future outcome, because that’s where our interpretation has led us.”

An effectiveness professional can’t just say whatever they want or trot out favoured assumptions. Facts and analysis rightly have a central role. Yet a good analyst will go where the proof takes them, and beyond it if they can, not shying away from saying the unexpected.

This view, positioning good interpretation as a mix of evidence and speculation, will not be welcome to everyone. But I don’t think recommendations that come from human analysis of often partial data are inherently wrong, waiting to be perfected if only we had a more mechanistic way of doing things.

I say that because we are marketing to subjective individuals. The effectiveness professional speculates on the basis of an informed judgement of what others might think, feel and do. In short, it takes a human being to anticipate another human being.

Underneath this lies an apparently simple but actually complex decision. Do you decide that your audience can be captured by cross-contextual rules and principles (more data, more precision, less speculative judgement)? Or do you think that the best way to anticipate others is to mix facts with informed guesswork?

No prizes for guessing what I think – but then, I’m just giving you my opinion.