A recent Forbes magazine article, headlined 'How valuable are heavy social media users anyway?', reported that heavy social media users are less likely to buy online than other social media users. When they do buy, they spend less money too.
Like us, you may have felt this didn't ring true intuitively. Let's look at the analysis in more detail. Plotting the buying data of heavy, medium and light social media users, the researchers showed that higher social media usage correlated with less online buying and lower spend. So they deduced that 'heavy social use doesn't translate to desired behaviours'.
This may or may not be true. But it is a nice example of a common marketing research problem – the myth that a correlation 'proves' something to be true. This can lead to muddled and sometimes dangerous thinking.
We have noticed that quantitative research companies raise statistical significance issues far less often nowadays. Charts used to be peppered with coloured asterisks warning 'small sample size'. But in debriefs now, we often have to query sample size and significance ourselves. And presenters seem surprised, and a bit irritated, to be asked. Planners aren't routinely trained in this sort of thing these days either.
Forbes doesn't mention statistical significance, but a quick analysis suggests the correlations are almost certainly significant at the 95% confidence level researchers typically use. But that raises another issue. Is this 95% level right?
When something is 'significant at 95% confidence', we mean there's only a 5% chance of seeing a result like this purely by chance. Once, this threshold was arguably appropriate. But researchers now have far more data, and far more computing power to analyse it. 'Data mining' can churn out hundreds of instant correlations, so fluke results become much more common.
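The arithmetic behind this is worth seeing. A minimal sketch (our own illustration, not taken from the Forbes analysis) of how the chance of at least one fluke 'finding' grows with the number of independent tests run at the 95% level:

```python
# If each independent test has a 5% chance of a fluke "significant" result,
# the chance of at least one fluke across n tests is 1 - 0.95^n.
def chance_of_fluke(n_tests, alpha=0.05):
    """Probability that at least one of n independent tests comes up
    'significant' purely by chance."""
    return 1 - (1 - alpha) ** n_tests

for n in (1, 10, 100):
    print(f"{n:>3} tests: {chance_of_fluke(n):.0%} chance of at least one fluke")
```

Run a hundred correlations and a handful of 'significant' findings are all but guaranteed, even when there's nothing really there.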
This issue is particularly important when research is fuelled by PR agendas. Ever wondered why papers are full of articles 'proving' health benefits of various 'super foods'? Run enough correlations, and you'll always find one making your product look good. Some will even appear statistically significant. But the results are usually flukes, and are rarely replicated.
Natural sciences researchers are familiar with this problem, and raise the standard of proof when running large numbers of correlations. Particle physicists (data miners par excellence) don't treat a result as statistically significant unless there's under a one in two million chance that it's a fluke (the '5 Sigma' standard). That may be a bit extreme, but those of us in marketing research do need to raise our game. And, as all good planners should know, correlation doesn't equal causation. Nor is causation always what you think it is.
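For those who like the numbers, both thresholds can be computed from the standard normal distribution (a sketch; the exact 'one in N' figure depends on whether one or both tails are counted):

```python
import math

def one_sided_tail(sigma):
    """One-sided probability of a standard normal result beyond `sigma`."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

# The familiar 95% level corresponds to roughly 1.96 sigma (two-sided)...
p_95 = 2 * one_sided_tail(1.96)       # ~0.05
# ...while the particle physicists' 5 Sigma standard is far stricter.
p_5sigma = one_sided_tail(5)          # roughly one in 3.5 million (one-sided)
print(f"95% level: p ~ {p_95:.3f}")
print(f"5 sigma:   1 in {1 / p_5sigma:,.0f}")
```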
Sometimes the arrow of causality points the opposite way to expectation. Research routinely shows that people who are aware of communication from brand X are more likely to buy that brand. This is sometimes used as evidence that communication drives sales, but in fact the causality usually runs the other way – buying brand X makes you more likely to notice its communications. This phenomenon (the so-called Rosser Reeves effect) has been known for decades, yet is still routinely used to 'prove' communication effectiveness (most recently to justify social media use).
And sometimes both factors correlate with a third, hidden factor – the real explanation for what's happening. This may be a problem with the Forbes research. Heavy social media users tend to be younger, and may also have more time on their hands. Such people tend to have less money, and this may explain why they spend less.
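A toy simulation (entirely invented numbers, not the Forbes data) shows how a hidden factor can manufacture exactly this kind of correlation. Here, age drives both social media hours and money to spend, and the two end up strongly negatively correlated with no direct link between them:

```python
import random

random.seed(42)

# Hypothetical population: age drives both variables, which never touch
# each other directly.
ages = [random.uniform(16, 70) for _ in range(2000)]
hours = [(70 - a) / 10 + random.gauss(0, 1) for a in ages]   # younger = heavier use
spend = [a * 2 + random.gauss(0, 10) for a in ages]          # older = more to spend

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(f"usage vs spend: r = {pearson(hours, spend):+.2f}")  # strongly negative
```

Plot usage against spend and you'd see a convincing downward slope – yet neither variable has any effect on the other.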
So, next time you see a statistical correlation offered as 'proof' of something, be sceptical. Our favourite spurious example comes from an economics research paper published this summer. Entitled 'The Male Organ and Economic Growth – Does Size Matter?', it went on to 'prove' that a country's national income is correlated with average penis size. We all know that can't be true. Don't we?