At a time when it’s never been easier to watch behaviour from afar, we should remember that the most meaningful learnings about our customers come from getting up close and asking them, says Sam Salama, a Senior Research Executive at MountainView.

All marketers know how important it is to understand their customers. But how best to do it accurately? Can traditional, survey-based market research really cut it?

The difficult truth is that people can’t always explain or describe their behaviour. This explains the infamous Red Bull focus group fiasco, the displays of ‘irrationality’ documented by behavioural science, and Steve Jobs’ famous comment that “people don’t know what they want until you show it to them.”

But this unpredictable human element shouldn’t encourage marketers to ditch surveys, stop asking customers questions, and rely solely on behavioural data.

Instead, it should be seen as an opportunity for marketers to develop realistic expectations about what survey research can really achieve, and how to conduct it effectively.

Research isn’t a system that can magically convert questions about shoppers’ many disparate habits and choices into perfect mathematical answers.

It is, above all, a useful tool for understanding people. Like all tools, it needs to be handled with care. And although research seems like a purely numerical practice, using it well starts with a sound understanding of human decision making.

Asking questions people can answer

An important truth about consumers is that they spend very little time thinking about brands. When it comes to buying them, they rely on shortcuts to make the decision as quick and easy as possible. This is especially true for frequent, low-involvement purchases, where the shopper autopilot is most active.

And yet a lot of research assumes that consumers care deeply about brands – thoughtfully evaluating every purchase they make. Surveys routinely ask people to assign twenty personality traits to brands of toothpaste. They probe the impact of subtle packaging changes. They examine attitudes to a brand’s environmental policy or corporate values. Given how little consumers think about these brands, asking them for a detailed analysis is unlikely to yield helpful results.

Perhaps more problematic is researchers’ tendency to ask consumers to make predictions: How likely are they to buy a specific product? Will they enjoy advertising if it features a brand mascot? Psychologists know that we are bad at predicting how we will respond even to major life events. So it should be no surprise that our ability to forecast brand-related behaviour often misses the mark.

Proper research accepts these limitations and focuses on what respondents can answer accurately. It is not a perfect science, but it means asking them to recall rather than predict. It means focusing on topics people care about and hold established opinions on. And it means aligning the research context as closely as possible with the real-world decision context.

Here it’s helpful to draw a parallel with general election exit polls. Why do they tend to be so accurate? It’s because voters are asked to fill in a replica ballot paper just after they leave the voting booth. In other words, they are asked to recall past behaviour, in a realistic setting, about a subject they have (mostly) thought about.

Acknowledging exaggerated claims

Reliable research takes into account another human trait: we like to say what sounds good. With little conscious effort, people will give a logical and socially acceptable response to questions even if their behaviour in the real world is radically different.

When the Trinity Mirror media group asked consumers what they wanted in a new newspaper, they said content that was upbeat, optimistic and politically neutral. But when the group launched New Day – a newspaper that offered exactly that – readership was so low that the paper shut down within two months. We tend to present the personality we would like to have, rather than the one we really have.

The solution is not to avoid asking these questions altogether. Clever question-wording techniques aside, the key learning is not to take respondents’ answers too literally. It requires separating what people say from what they mean, and what they would like to be from what they are. When done right, this process can lead to breakthrough insights, as the team behind Persil’s Dirt is Good campaign found out.

The Steve Jobs conundrum

If market research can produce useful results, why did one of the greatest marketers of all time hate it? Actually, it is a big misconception that Steve Jobs rejected market research. When a 2011 lawsuit with Samsung forced Apple to reveal the details of its famously private research policy, the public got to see the truth: the company loves research.

Apple conducts quarterly surveys to understand why consumers buy iPhones and not Androids. It analyses which demographics are most satisfied with its products. It compares attitudes across different countries.

In other words, Steve Jobs was deeply concerned about what his consumers thought. In fact, in the same year as the lawsuit, Apple launched a series of successful innovations that directly addressed the findings of that research.

One key reason consumers bought an iPhone was the ability to transfer music easily across multiple devices. Apple responded by launching iCloud.

So when Steve Jobs said that “people don’t know what they want until you show it to them,” he wasn’t condemning all research but correctly noting its limitations. He realised that, although consumers can express pain points and needs, they are generally bad at imagining and predicting breakthrough innovations. After all, if they could, they wouldn’t just be consumers.

The limits of behavioural data

Apple has access to more behavioural data than most companies on earth. The fact that it still invests time and money in understanding perceptions reveals an important point missed by simplistic readings of market research: behavioural data has its own drawbacks.

For starters, just like claimed data, it can be inaccurate. Just because it’s observed doesn’t mean it’s complete. Even something as seemingly objective as EPOS data is skewed by the fact that not all retailers (Aldi and Lidl among them) are willing to share their figures.

More importantly, behavioural data is largely unable to reveal people’s needs and motivations. If a new soft drink is seeing a decline in sales, panel data can show you the rate of decline, but it can’t tell you why this is happening. Without asking people directly, it is hard to diagnose the problem and find a solution.

This isn’t to say that behavioural data is worthless, but that relying on it exclusively can lead you in the wrong direction. If Tesco’s research team had looked at behaviour and nothing else in 2003, they would have seen that gluten-free products were underperforming in their stores and might well have de-listed them.

It was only through survey research that they were able to see the whole picture: there was a big appetite for gluten-free products, but consumers felt Tesco and other big retailers lacked the range offered by specialist shops. So rather than making multiple trips, consumers preferred to buy all their gluten-free products from those smaller stores. Armed with this insight, Tesco launched their own Free From range before their major competitors.