Prediction is difficult, as physicist Niels Bohr once said, especially about the future.

In fact, we don't know if he said it. He may have, but there is no textual reference for it; the line just seems to have become associated with his name. It's not even hearsay. It's what I like to call a 'fauxtation' - a line falsely attributed to someone famously smart or creative.

All of which goes to show it's very difficult to know if someone actually said something, or what they meant, or if they are lying - which is relevant to the most recent failure of market research to predict the outcome of the UK General Election.

The election results came as something of a surprise: the Conservative Party won by a substantial margin, yet every published poll had predicted that Labour and the Conservatives were running neck and neck. The incorrect predictions were so noteworthy that they now have their own Wikipedia entry.

How then are we to understand this? Not just that the polls were wrong, but that they were so incredibly wrong, by such a large margin, and uniformly so? There are various lenses we can peer through to analyse market research: epistemological, sociological, psychological, methodological.

Let's start with a fundamental one. Asking people what they are going to do in the future is not predictive. There are lots of reasons for this, some of which I cover in my book, but the core of the problem is that we are terrible predictors of our own behaviour. Most decisions we make happen below the level of consciousness, and we are influenced by many things, including contextual drivers we are unaware of, the behaviour of other people, and our own false sense of self.

Ask someone whether they will be going to the gym this week rather than the pub, for example, and their answer may not correlate well with their actual behaviour. We like to think that we are rational; that metacognitive error is writ large in this type of polling.

Then there is the now-famous Shy Tory story, which says people were too embarrassed to admit they were going to vote Tory. That's what the market research industry told us as an explanation for the incorrect predictions in 1992. But it raises awkward additional questions. Why should people be embarrassed about being pro-Tory: is that a reflection of the research industry's liberal leanings? If the secret ballot used in exit polls is more accurate, why don't we change our polling methods to reflect that? And if the industry was saying it got it wrong because people lie, why doesn't that shake the industry to its very core, or at least change its approach dramatically?

Individual behaviour does not scale up into group behaviour; groups exhibit emergent properties that cannot be predicted from their individual components. This social lens works in more than one direction. Published polls are themselves signals of group behaviour, and those signals may change individual behaviour through the bandwagon effect: our well-documented tendency to follow the crowd and do what everyone else appears to be doing.

On the other hand, the remarkable and uniformly incorrect consensus of the polls suggests 'herding' is also happening among the polling companies themselves - results that don't fit the consensus get discarded. Indeed, one research company, Survation, claimed its final poll was right, but didn't publish it because it was so different from the consensus.

Market research fancies itself the commercial face of the social sciences, but this claim to rigour falls at the same hurdle I pointed out with the fauxtation - we simply don't know what people actually said. Survey companies rarely release their micro-data, so we don't know exactly how they arrive at their numbers.

Any poll requires judgments about samples, sample sizes, weighting, phrasing and so on. The American Association of Public Opinion Research recently announced a transparency initiative to address these issues, but it has gained little traction.
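To make that concrete, here is a minimal sketch (in Python, with an invented sample and invented weights, purely for illustration) of how two equally defensible weighting judgments can turn the same raw answers into noticeably different headline figures.

```python
# A minimal, hypothetical sketch of how weighting judgments move a headline poll number.
# The sample and the weights below are invented for illustration only.

# Raw responses: (stated_vote, demographic_group) for a tiny hypothetical sample.
responses = [
    ("Con", "older"), ("Con", "older"), ("Con", "older"),
    ("Lab", "younger"), ("Lab", "younger"),
    ("Lab", "older"), ("Con", "younger"),
]

# Two plausible weighting schemes a pollster might choose between:
# A: assume younger respondents are under-sampled, so up-weight them.
# B: assume younger respondents are less likely to turn out, so down-weight them.
schemes = {
    "A (up-weight younger)":   {"older": 1.0, "younger": 1.5},
    "B (down-weight younger)": {"older": 1.0, "younger": 0.6},
}

for name, weights in schemes.items():
    totals = {}
    for vote, group in responses:
        totals[vote] = totals.get(vote, 0.0) + weights[group]
    grand = sum(totals.values())
    shares = {party: round(100 * w / grand, 1) for party, w in totals.items()}
    print(name, shares)

# Same raw answers, different judgment calls, different headline shares:
# scheme A gives roughly Con 53 / Lab 47, scheme B roughly Con 62 / Lab 38.
```

The point of the sketch is not the specific numbers, which are made up, but that without the micro-data and the weighting decisions behind a published poll, we can't tell how much of the headline figure is respondents and how much is judgment.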

So where does this leave us? Back at the beginning. Prediction is very difficult, and polling has repeatedly been shown to be a particularly bad way of doing it, especially around elections. Maybe the question isn't why polling isn't predictive, but why we continue to use it when we know it isn't. If you want to know what someone thinks they think, which can be very interesting, then research away. Just don't confuse opinion with prediction.