From the response to my last post it would appear that not all of the market research community is convinced of the need to improve pre-testing techniques – evidently perfection already exists.

So before I share some other intriguing papers on new pre-testing approaches with you, I will revisit the IPA dataBank evidence on the need for a revolution in pre-testing techniques.

It’s not true to say that we don’t have the foggiest idea whether campaigns were pre-tested or not: in recent years all case study authors have been asked whether the campaign was pre-tested at all (whether or not they chose to “pad out a weak paper” with pre-test results). The proportion of these campaigns that were pre-tested is very close to the proportion for earlier campaigns, which was derived from whether pre-testing was referred to in the paper. So unless there has been a dramatic decline in pre-testing in recent years, it appears that there aren’t legions of strong, highly profitable case studies that failed to mention their pre-testing results and so might have skewed the findings against pre-testing. Why would they omit persuasive evidence unless the campaign had failed the pre-test? Although we know this has happened, it can only happen rarely, since few campaigns that fail pre-tests get the chance to prove themselves in the marketplace.
 
Nevertheless, let’s examine the 124 recent case studies where we can be certain whether or not the campaign was pre-tested. The verdict on pre-testing is not as harsh as before: we find that non-pre-tested campaigns are around 1.6 times as likely to achieve top-box profit growth as pre-tested campaigns (21% vs 13%). But things are more complex than that, because what the IPA data actually suggests is that, in contrast to pre-testing, tracking is very good for effectiveness.
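For anyone who wants to check the arithmetic, the “around 1.6 times” figure falls straight out of the two reported success rates. A trivial Python sketch (only the published percentages are used, since the underlying counts within the 124 cases aren’t given here):

```python
# Reported top-box profit growth rates from the 124-case subset.
rate_not_pretested = 0.21  # non-pre-tested campaigns
rate_pretested = 0.13      # pre-tested campaigns

relative_likelihood = rate_not_pretested / rate_pretested
print(f"Non-pre-tested campaigns are {relative_likelihood:.2f}x as likely "
      f"to achieve top-box profit growth")  # ~1.62, i.e. "around 1.6 times"
```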
  
[Figure: lf1.png]
 
And so it appears that the majority of pre-tested IPA campaigns that achieved good profit growth did so because they benefited from tracking rather than pre-testing.
 
I’m not suggesting that the IPA data is perfect – there are a number of circumstantial factors that could affect the observed success rate of pre-testing. But even when you take account of these, you still come back to the same stark conclusion: in general, pre-testing appears to reduce your chances of achieving top-box profit growth.

So those with an interest in improving pre-testing might want to read some of Dr Alastair Goode’s papers on implicit vs. explicit communication and how this affects pre-testing. In his thought-provoking 2007 IJMR article, Goode observes that “emotional responses occur completely independently of conscious recall” and that therefore “it is completely possible for an ad to increase a person's emotional response to itself or the brand it contains without them in any way being consciously aware that they have seen the ad.” The implications are not only that recall-based metrics are unreliable assessments of ads intended to alter feelings (the most effective ads of all, according to IPA data), but also that one cannot expect consumers to reliably report their emotional responses to ads in a typical direct questioning survey. Goode argues that what you hear played back in traditional pre-testing is explicit communication (the concepts that people consciously attribute to the ad), whereas implicit communication (what they have retained about the brand but cannot attribute as having originated from the ad) is overlooked. Moreover, implicit communication typically takes a day to ‘sink in’ to people’s brand memories, so no survey technique administered at the time of viewing is likely to gauge this implicit effect.

Unfortunately for ads intended to alter feelings, Goode finds that much of the commercially valuable effect occurs implicitly, and therefore such ads tend to suffer at the hands of conventional pre-testing techniques. Goode’s company (Cogresearch) have developed an ingenious technique for circumventing this problem. They measure people’s brand associations before viewing an ad; then, the next day, they ask respondents essentially two things: what they now associate with the brand as a result of seeing the ad (which, with a bit of mathematical modelling, gives a measure of explicit communication), plus what they used to associate with the brand before seeing the ad. This is where it gets intriguing, because for predominantly implicit, feelings-directed ads there can be big differences between what people said the day before and what they now think they used to believe. They are of course not aware of this shift and so could never report it, but it is real and is accounted for by the implicit communication of the ad. It can reveal hidden strengths of the ad that the explicit measure is unable to capture.
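To make the logic concrete, here is a minimal sketch of the before/after comparison described above. It is not Cogresearch’s actual (proprietary) model: the 0–10 association scores, the function name and the simple subtractions are all illustrative assumptions standing in for their “bit of mathematical modelling”.

```python
def communication_effects(baseline, day_after, recalled_baseline):
    """Split an ad's effect on a brand-association score into explicit
    and implicit components (illustrative only).

    baseline          -- association score measured BEFORE seeing the ad
    day_after         -- score the respondent reports the day AFTER viewing
    recalled_baseline -- what the respondent now THINKS they believed before

    Explicit: the shift the respondent can consciously attribute to the ad
    (their day-after score versus their recalled prior).
    Implicit: the unnoticed rewriting of the prior itself (recalled prior
    versus what they actually said the day before).
    """
    explicit = day_after - recalled_baseline
    implicit = recalled_baseline - baseline
    return explicit, implicit

# Hypothetical respondent: rated the brand's "warmth" 4/10 before viewing;
# a day later they rate it 7/10 but misremember their earlier rating as 6/10.
explicit, implicit = communication_effects(baseline=4, day_after=7,
                                           recalled_baseline=6)
print(f"explicit shift: {explicit}, implicit shift: {implicit}")  # 1 and 2
```

The point of the example is that the implicit component is invisible to the respondent: they could never report the 6-versus-4 discrepancy themselves, because it lives in the gap between what they actually said and what they now believe they said.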

In a revealing parallel with Brainjuicer’s experiment (reported in my last post), Cogresearch are finding that traditional pre-testing techniques tend to favour explicit ads (i.e. ones that do not have a predominantly emotional modus operandi). Given that the IPA data suggests these tend to be less profitable than predominantly implicit, emotional campaigns, it is easy to see why there might be a conflict between traditional pre-testing and top-box profit growth.
 
If you are interested in reading more about implicit testing and the problems of direct questioning (albeit applied to a rather different, social issues context), then read Goode’s recent MRS paper. And finally, again with the future in mind, it is instructive to read how traditional pre-testing assumptions and metrics had to be ditched when it came to developing the Cadbury Gorilla film. Vive la révolution.