Les Binet and Sarah Carter get a little bit angry about some of the nonsense they hear around them... like the spurious authority of numbers.

The other day we did something we hadn't done for a very long time. A young client, new to quantitative testing, asked us to go through his pre-testing questionnaire for a proposed new ad. Did it cover the issues he was interested in, he wondered?

It was an illuminating and faintly horrifying exercise. Like most people in this business, we spend a lot of time in advertising research debrief meetings. Slick PowerPoint charts flash up, the numbers looking scientific, authoritative and objective. But peer behind the curtain, and things look much less impressive.

For a start, if this were real science, researchers would be trying hard to replicate real viewing conditions. But our questionnaire research scenario bore no resemblance to real-life viewing. The ad was an animatic, not a real ad, which respondents were to watch on their computers, alone and paying close attention. Some would be invited to film themselves on their webcams – the mechanics of this making the situation even odder, with respondents instructed on how to sit, arrange their hair and achieve acceptable lighting conditions. Real people, meanwhile, watch ads in distracted, 'lean back' mode, wearing onesies, relaxing with their families.

And if this were real science, there would be some attempt to replicate real buying decision-making in the questionnaire. This was an FMCG brand, where a shopper probably spends less than four seconds on the buying decision in-store. But with 60 questions to answer, this research interrogation would take far, far longer, with the respondent bored witless by the end.

And that matters a lot. Psychological research has repeatedly shown that people's choices and answers to questions are heavily dependent on how much time you give them. Ask for a quick answer and you'll get an intuitive, 'System 1' response. Ask them a series of considered questions and you'll get a more rational, 'System 2' response.

A brilliant demonstration of this is the Wilson and Schooler jam experiment, where respondents were instructed to taste a range of fruit jams. Asked to decide quickly which jam they liked, ordinary people were just as good as food critics at picking good jam. But when asked detailed questions about sweetness, colour, and so on, their intuitions crumbled and they made poorer choices.

So, given that our 'test ad' was for a product chosen quickly in an intuitive 'System 1' way, the long and exhaustive 'System 2' questionnaire may well give misleading feedback. Not least because of the nature of the questions asked. Clearly many had been cut-and-pasted from other questionnaires. But our brand was unusual, and many of the questions didn't apply to it at all. Add the fact that the questionnaire writer was obviously not a native English speaker, and the result was a tedious and baffling mess guaranteed to confuse and annoy the most patient of respondents. And that matters too. When people are bored and annoyed by the questionnaire, they transfer those feelings to the brands and ads in question.

So, even our most cursory glance at the questionnaire raised serious doubts about this research. But who considers all of this when presented with the spurious precision and certainty of polished charts of numbers?

And it led us to musing: isn't it odd that clients and agency people often attend qualitative groups to listen to respondents 'first hand', yet we can't remember anyone ever asking to experience quantitative research in action? And why are qualitative discussion guides more likely to be pored over than quant questionnaires? Maybe, to misquote Bismarck, numbers are like sausages – it's better not to see them being made.

This article originally appeared in the May 2013 issue of Admap.