Les Binet and Sarah Carter get a little bit angry about some of the nonsense they hear around them… like the idea that rigour is always good.

We were recently discussing a pitch. Looking back on it, the team were frustrated at where it had gone wrong. The strategy had been agreed, the brief had gone to the creative teams, and everyone thought their idea would work brilliantly.

But at the last minute, someone did some quick calculations. The strategy was to increase frequency of purchase among existing users. But the maths revealed that to meet the sales targets, users would need to double their weekly consumption. It was clear the campaign could never deliver such a huge effect, and it was too late to come up with anything else.
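The sum itself is simple (our figures here are purely illustrative): if a million existing users buy the brand twice a week, that's two million purchases a week; a sales target of four million weekly purchases from those same users means four purchases each – double the frequency. Five minutes with a calculator, done before the brief went out, would have flagged the problem.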

Grrr… While the team were busy honing the nuances of the creative idea, they'd failed to look at the numbers that mattered. We call this 'Misplaced Rigour Syndrome', and it comes in two forms: too much rigour where you don't need it, and not enough where you do.

There are times when we do need rigour – and lots of it. Long before you brief the creatives, it's vitally important to know whether the campaign is capable of working. How many people does it need to reach? What behaviour does it need to change and by how much? And how much profit will that deliver? The IPA databank shows that setting these clear business objectives can quadruple effectiveness. But all too often this due diligence is left to the last minute, as it was in that pitch, by which time it's too late to change course.

Rigour is vital at the back end too, after the campaign has run. Did the campaign work? How big were the effects? Was it profitable? The IPA data shows that clients and agencies who regularly evaluate in hard business terms achieve much better results, because they learn what really works.

Unfortunately, rigorous post-campaign evaluation is too rare. Getting hold of sales data and analysing it properly is too much like hard work (unless there's an Effectiveness Award to be won). Easier to look at a few proxy measures – Facebook likes or YouTube hits – and move on.

The problem with this casual approach is that, without proper evidence-based feedback, clients and agencies can lose their instinct for effectiveness. It is not unknown for a campaign to be lauded as a shining example of cutting-edge thinking, only for the sales data to reveal it was an expensive turkey.

But if there is too little rigour at the start and end of the advertising process, there is generally too much so-called rigour in the middle. Instead of looking at the sales data, we find over-intellectualised analysis of propositions, message rankings and brand hierarchies.

And clients love intermediate measures of all sorts, even if they have little or no correlation with business success, because they offer the false promise of easy decision-making. For instance, we recently sat through a 60-chart quantitative pre-test debrief. Even with two charts of analysis for every second of the ad, we were no wiser as to how the ad would perform in the real world. Good old-fashioned intuition, honed by experience, would have been a much more reliable guide, but that would have required us all to use some judgement.

So what do we recommend? Well, we'd advocate a much freer approach to the creative process, counter-balanced by a more rigorous approach to objective-setting and evaluation. And interestingly, that's what the smarter marketing companies seem to be moving towards, especially in the digital space, where it's often cheaper to experiment.

As Booz & Co. put it: "Be creative and measure what happens. If it works, do more of it. If it doesn't work, go back and be creative again."

Try it. You might be surprised.


This article originally appeared in the September 2012 issue of Admap.