In a recent column, we had a bit of a rant about the way sloppy use of language stops marketing and advertising people from thinking clearly. This month it's the turn of bad numbers; more particularly, the abuse of numbers quoted out of context. For instance, a recent case study made a great deal of the fact that the ad got 1.6 million views on YouTube.
Grrrr... figures like that make our blood boil. It sounds big and impressive, but is it really? Faced with large numbers like this, it's nearly always a good idea to re-express them as percentages. A quick calculation suggests that those 1.6 million views represent a mere 3% of the target audience. And the true percentage is probably lower, because some viewers will be outside the target audience, some will live abroad, and some will view more than once.
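That percentage check is simple enough to sketch in a few lines. The 1.6 million views figure is from the case study; the 50-million target audience is an assumed illustration (roughly the size of a national adult population), not a number from the column.

```python
# Re-express a raw view count as a share of the target audience.
# views is from the case study; target_audience is an assumed illustration.
views = 1_600_000
target_audience = 50_000_000

share = views / target_audience
print(f"Share of target audience: {share:.1%}")  # prints 3.2%
```

And, as the column notes, this is an upper bound: duplicate viewers and people outside the target audience only push the true figure down.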
Another useful sanity check is to compare numbers against costs. Flicking through a review of some digital case studies, we were struck by the fact that, while all of them flaunted big numbers for YouTube views, Facebook friends and Tweets, not one of them talked about costs. Yet a quick calculation can be very revealing. For instance, in one case, it seems the client was paying roughly £120 per exposure.
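A back-of-envelope cost check might look like the sketch below. The £120-per-exposure result is from the column; the spend and exposure figures used to reach it are invented for illustration.

```python
# Divide campaign spend by the number of exposures it bought.
# Both input figures are hypothetical; only the resulting ratio
# matches the case the column describes.
spend_gbp = 600_000
exposures = 5_000

cost_per_exposure = spend_gbp / exposures
print(f"£{cost_per_exposure:.0f} per exposure")  # prints £120 per exposure
```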
Thinking about space and time can help too. Global companies are fond of quoting numbers that sound huge until you put them into their geographical context – Coke's 35 million Facebook fans represent less than 1% of their consumer base, for instance. Similarly, the fact that the Cadbury's Gorilla ad has been viewed more than six million times online over the last five years sounds less impressive when you realise that its first TV exposure reached more people in a few seconds.
Results should always be compared against objectives, but this is another piece of context that is usually missing. IPA research suggests that few campaigns set clear behavioural objectives at all, let alone specify numerical targets.
Historical context is fast disappearing too.
Ironically, in the pre-digital days when sales data arrived on vast scrolls of paper, we seemed more able and more interested in putting current sales in historical context. Now you're lucky to get three years' data. And with rapid personnel changes on both the marketing and agency sides, no one is left who can learn from the past.
Economists like to put numbers in context by thinking about 'opportunity cost'. What would the numbers have looked like if you'd spent the same money in a different way?
This question forces you to find a common currency for evaluation, which can be very useful. For instance, online reach is measured in views while offline reach is measured in GRPs, which makes the online numbers look much bigger and fair comparisons difficult.
Finally, when presented with that big fat number, try to find out how much variation it conceals. How accurate is it? Could it be a statistical blip? As we discussed in a previous column, confidence intervals and statistical significance tests are woefully neglected.
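To see how much wobble a single headline figure can hide, here is a quick 95% confidence interval for a survey proportion, using the standard normal approximation. The recall figure and sample size are invented for illustration.

```python
# 95% confidence interval for a proportion (normal approximation).
# p and n are hypothetical: 30% ad recall measured in a sample of 400.
import math

p = 0.30   # observed proportion
n = 400    # sample size

se = math.sqrt(p * (1 - p) / n)          # standard error
lo, hi = p - 1.96 * se, p + 1.96 * se    # 95% interval
print(f"95% CI: {lo:.1%} to {hi:.1%}")   # prints 25.5% to 34.5%
```

So the "30%" in the headline is really "somewhere between about a quarter and a third" – a difference that could easily swallow a claimed campaign effect.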
And there is a more subtle point. Many of the numbers that matter in marketing are distributed unevenly. In every category, there tends to be a small core of very heavy users and a much bigger 'long tail' of light users. Averages can be very misleading in such circumstances, so it is usually better to go beyond the headline figure and examine how the numbers are distributed across the population.
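A toy illustration of the point: put a small core of heavy users next to a long tail of light users and the average describes almost nobody. All figures here are invented.

```python
# Mean vs median in a skewed population: 5 heavy users buying 100 times
# a year alongside 95 light users buying twice a year (invented figures).
from statistics import mean, median

purchases = [100] * 5 + [2] * 95

print(mean(purchases))    # 6.9 -- inflated by the heavy core
print(median(purchases))  # 2  -- what a typical user actually does
```

The mean suggests a "typical" customer buying about seven times a year; in this population, no such customer exists.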
So, next time someone tries to wow you with that One Big Number, don't be too impressed.
Emotionally, we're all programmed to find big numbers impressive and intimidating, but most numbers are meaningless until they are placed in their proper context. When someone quotes a large number on its own, it's a fair bet that they're bullshitting.