The Warc Blog

Further arguments for a pre-testing revolution
 
Left Field
 
From the response to my last post it would appear that not all of the market research community is convinced of the need to improve pre-testing techniques – evidently perfection already exists.

So before I share some other intriguing papers on new pre-testing approaches with you, I will revisit the IPA dataBank evidence on the need for a revolution in pre-testing techniques.

It’s not true to say that we don’t have the foggiest idea whether campaigns were pre-tested or not: in recent years all case study authors have been asked whether the campaign was pre-tested at all (whether or not they chose to “pad out a weak paper” with pre-test results). The proportion of these campaigns that were pre-tested is very close to the proportion for earlier campaigns, derived from whether pre-testing was referred to in the paper. So unless there has been a dramatic decline in pre-testing in recent years, it appears there are no legions of strong, highly profitable case studies that failed to mention their pre-testing results and so might have skewed the findings against pre-testing. Why would they omit persuasive evidence unless the campaign had failed the pre-test? Although we know this has happened, it can only happen rarely, since few campaigns that fail pre-tests get the chance to prove themselves in the marketplace.
 
Nevertheless, let’s examine the 124 recent case studies where we can be certain whether or not the campaign was pre-tested. The judgement on pre-testing is not as harsh as before: we find that non-pre-tested campaigns are around 1.6 times as likely to achieve top-box profit growth as pre-tested campaigns (21% vs. 13%). But things are more complex than that, because what the IPA data actually suggests is that, in contrast to pre-testing, tracking is very good for effectiveness.
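For transparency, the “around 1.6 times” figure is simply the ratio of the two top-box rates quoted above; here is a one-line check in Python, assuming nothing beyond those percentages:

```python
# Ratio of the two top-box success rates quoted in the post;
# no other dataBank figures are assumed.
non_pretested_rate = 0.21  # non-pre-tested campaigns achieving top-box profit growth
pretested_rate = 0.13      # pre-tested campaigns achieving top-box profit growth

print(f"{non_pretested_rate / pretested_rate:.1f}x as likely")  # -> "1.6x as likely"
```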
  
[Figure: IPA dataBank success rates by use of pre-testing and tracking]
 
And so it appears that the majority of pre-tested IPA campaigns that achieved good profit growth did so because they benefited from tracking rather than pre-testing.
 
I’m not suggesting that the IPA data is perfect – there are a number of circumstantial factors that could affect the observed success rate of pre-testing. But even when you take account of these, you still come back to the same stark conclusion: in general, pre-testing appears to reduce your chances of achieving top-box profit growth. So those with an interest in improving pre-testing might want to read some of Dr Alastair Goode’s papers on implicit vs. explicit communication and how this affects pre-testing.

In his thought-provoking 2007 IJMR article, Goode observes that “emotional responses occur completely independently of conscious recall” and that therefore “it is completely possible for an ad to increase a person's emotional response to itself or the brand it contains without them in any way being consciously aware that they have seen the ad.” The implication is twofold: not only are recall-based metrics unreliable assessments of ads intended to alter feelings (the most effective ads of all, according to the IPA data), but consumers cannot be expected to report their emotional responses to ads reliably in a typical direct-questioning survey. Goode argues that what you hear played back in traditional pre-testing is explicit communication (the concepts that people consciously attribute to the ad), whereas implicit communication (what they have retained about the brand but cannot attribute as having originated from the ad) is overlooked. Moreover, implicit communication typically takes a day to ‘sink in’ to people’s brand memories, so no survey technique administered at the time of viewing is likely to be able to gauge this implicit effect.

Unfortunately for ads intended to alter feelings, Goode finds that much of the commercially valuable effect occurs implicitly, and therefore such ads tend to suffer at the hands of conventional pre-testing techniques. Goode’s company (Cogresearch) have developed an ingenious technique for circumventing this problem. They measure people’s brand associations before viewing an ad; then, the next day, they ask respondents essentially two things: what they now associate with the brand as a result of seeing the ad (which, with a bit of mathematical modelling, gives a measure of explicit communication), plus what they used to associate with the brand before seeing the ad. This is where it gets intriguing, because for predominantly implicit, feelings-directed ads there can be big differences between what people said the day before and what they now think they used to believe. They are, of course, not aware of this shift and so could never report it, but it is real and is accounted for by the implicit communication of the ad. It can reveal hidden strengths of the ad that the explicit measure is unable to capture.
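For those who like to see the mechanics, here is a minimal sketch of that before-and-after logic. To be clear, this is my own illustration rather than Cogresearch’s actual model: the attribute names, the endorsement rates and the simple subtraction standing in for their ‘bit of mathematical modelling’ are all hypothetical.

```python
# A minimal sketch of the before/after logic described above -- not
# Cogresearch's actual model. Attribute names and scores are hypothetical.

# Day 0: endorsement rates for brand attributes, measured before exposure.
actual_prior = {"modern": 0.40, "warm": 0.30, "fun": 0.25}

# Day 1, after seeing the ad:
# what respondents now associate with the brand *because of the ad* ...
attributed_to_ad = {"modern": 0.10, "warm": 0.05, "fun": 0.08}
# ... and what they *believe* they associated with the brand beforehand.
recalled_prior = {"modern": 0.42, "warm": 0.45, "fun": 0.26}

for attr in actual_prior:
    explicit = attributed_to_ad[attr]                     # consciously credited to the ad
    implicit = recalled_prior[attr] - actual_prior[attr]  # unnoticed rewriting of the past
    print(f"{attr:>6}: explicit {explicit:+.2f}, implicit {implicit:+.2f}")

# 'warm' shows a large implicit shift (+0.15) that respondents could never
# report directly: they now misremember having always felt that way.
```

The point of the implicit column is that it is invisible to direct questioning: respondents would sincerely deny that any shift had occurred.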

In a revealing parallel with Brainjuicer’s experiment (reported in my last post), Cogresearch are finding that traditional pre-testing techniques tend to favour explicit ads (i.e. ones that do not have a predominantly emotional modus operandi). Given that the IPA data suggests these tend to be less profitable than predominantly implicit, emotional campaigns, it is easy to see why there might be a conflict between traditional pre-testing and top-box profit growth.
 
If you are interested in reading more about implicit testing and the problems of direct questioning (albeit applied differently to a social issues context), then read Goode’s recent MRS paper. And finally, again with the future in mind, it is instructive to read how traditional pre-testing assumptions and metrics had to be ditched when it came to developing the Cadbury Gorilla film. Vive la révolution.



Subjects: Marketing, Data

11 February 2010 15:50
 

There are 2 comments on this blog


Hi Peter; I'd be interested in the proportion of cases your analysis says have conducted pre-testing. A couple of years ago I went through three volumes of the Effectiveness Awards papers and compared them with our Link database. Of course, Link is not the only quantitative pre-test in the UK, but it is by far the largest. I found that we had conducted Link on 7–8 times more of the campaigns than mentioned it in their papers! While a side issue to this, since you are referencing the IPA dataBank, I'll take this opportunity to recommend the analysis of the dataBank conducted by you and Les Binet, as detailed in Marketing in the Era of Accountability. I think you've over-stretched the data on pre-testing, but the rest of the book is a valuable and helpful contribution to an important topic based on good evidence.
Dominic T. 30 November 2010 at 10:16am
I've just had a chance to read Goode's IJMR paper. You are right, it is thought-provoking. It provoked two thoughts in me. The first was that this "ingenious technique" is based on comparing image endorsements among two sets of 17 people. Not surprisingly, he fails to discuss issues of statistical reliability: none of the findings he produces is close to being statistically significant. Neither does he discuss the problems his technique faces when looking at explicit reinforcement of existing images, a common factor for most brand-building advertising. Implicit communication is worth exploring, and there are some interesting techniques being developed. But just because someone throws the magic word "neuroscience" into a paper, it doesn't mean they can ignore statistical fundamentals.
Dominic T. 30 November 2010 at 10:16am
