As many of you will know, the British Polling Council (BPC) and MRS have launched an inquiry into the performance of the opinion polls in the UK preceding the May general election. A distinguished panel of experts has been appointed, chaired by Patrick Sturgis (U. Southampton and Director of the National Centre for Research Methods). There are two key differences between this inquiry and the one set up by the MRS in 1992. Firstly, the 2015 panel is entirely independent of the polling sector, comprising mainly academics (see the BPC website for details), whereas in 1992 leading pollsters predominated. Secondly, the 1992 inquiry's final report was not published until July 1994 (though an initial view appeared by June 1992), whereas the current panel hopes to publish its report in early March 2016.
Initial open meeting
The BPC/MRS hosted an initial open meeting, run by the National Centre for Research Methods, on the afternoon of June 19th, held appropriately at the Royal Statistical Society in London – and on the very day that a possible Bill in Parliament to limit polling in the run-up to future elections was mooted. The agenda mainly comprised representatives of each of the main polling companies (ICM, Opinium, ComRes, Survation, Ipsos-Mori, YouGov, Populus) presenting their interpretation of the situation and outlining their plans for internal enquiries. All started with a mea culpa statement, and all agreed that being within 'sampling error' (however that is defined or measured, given the way samples are drawn today) was not a good enough excuse for mispredicting the outcome. It was a very sackcloth and ashes affair. In his opening address, John Curtice (U. Strathclyde), the BPC President, stressed the impact the polls had on how the campaign was fought, with the caveat that no detailed analysis of this impact has yet emerged, and that it is in any case outside the remit of the inquiry, which is focussing on methodological issues.
Is Britain 'a nation of liars?'
So are we 'a nation of liars', as Ivor Crewe asked in his 1993 JMRS paper analysing the 1992 situation (JMRS Vol 35 No 4) – my April IJMR Landmark Paper selection? There was little evidence to support a late swing of any significance, based on the results of post-election polls, but do such recall polls suffer from the same methodological problems as the pre-election polls?
Prediction is difficult, as physicist Niels Bohr once said, especially about the future.
In fact, we don't know if he said it. He may have, but it has no textual reference. It just seems to have been associated with his name. It's not even hearsay. It's what I like to call a 'fauxtation' - a line that's falsely attributed to someone famously smart or creative.
All of which goes to show it's very difficult to know if someone actually said something, or what they meant, or if they are lying - which is relevant to the most recent failure of market research to predict the outcome of the UK General Election.
The election results came as something of a surprise: the Conservative Party won by a substantial margin, yet every single published poll had predicted that Labour and the Conservatives were running neck and neck. The incorrect predictions were so noteworthy that they now even have their own Wikipedia listing.
Recently we've been helping some of our clients assess their latest ad campaign. It's a great little campaign, which seems to have helped boost sales and market share, but evaluation is complicated because of the number of media used. The bulk of the budget was spent on traditional media, particularly TV and outdoor, but the remainder was spent on a mix of digital channels, mobile messaging and PR stunts. Working out the contribution of each is a challenge.
At the first meeting, our clients presented a detailed review of each strand. And something immediately struck us as odd. Traditional media, which accounted for almost three-quarters of the budget, were dismissed in about 15 minutes. Then nearly two hours was devoted to the smaller, newer media. In fact, it almost seemed that the less money was spent on a channel, the more attention it got.
One reason was that there was simply more data on the newer, digital channels. Slide after slide was presented, crowded with figures on the number of views, clicks, likes, shares, tweets, followers, comments, and uploads. Dwell times and conversion metrics were analysed in exquisite detail. But for TV, only one number was presented: the cost. This is a clear example of the data tail wagging the evaluation dog. Rather than focusing on what was important (i.e. the media where most money was at stake), we found ourselves focusing on what was easy to measure.
GreenBook provided a 'Sneak Peek' of the findings from their latest GRIT (Research Industry Trends) survey in a webinar on May 14th, with a panel of research sector leaders to discuss the key points.
According to the survey, the biggest challenge is around technology. At the heart of the findings, and of the discussion, was the decades-old dilemma of what the market research sector should be. Our heritage, and much of our skill-set, expertise and experience, are vested in methodologies for collecting primary data to provide fresh insights into consumer, and citizen, behaviour – not simply to identify the 'who', 'what', 'where' and 'when', but, importantly, the 'why'.
However, our clients seem to be busy with analytics and data integration in the 'new' world of big data, questioning what we might bring to the party. This is, of course, NOT a new finding – it emerged in the 1980s as database marketing enabled clients to undertake their own 'research' by either recruiting analysts, or turning to the new breed of marketing analytical companies that were getting off the ground.
No issue divides the creative community quite like the contribution of data to the creative process. The term 'Big Data' is apt to foment fits of apoplexy in some, who view data as the enemy at the gates, a dagger to the heart of creativity. Others see data as a panacea for all marketing's ills, in a Holy Grail quest to form one-to-one relationships with customers, eliminating all marketing wastage.
Somewhere between these extremes there is a consensus view that the mass of data now being generated from consumer activity can be a positive if channelled appropriately. Data can assist the creative process, if it isn't allowed to suppress human instinct and ingenuity. It can help develop the big idea, or the little idea, as long as it doesn't frustrate the advent of a 'eureka light bulb moment'. Data can finesse the media strategy, as long as the human skill in media selection is not overridden by the attraction of the algorithmic efficiencies inherent in programmatic media buying.
But there is a tension between data and creativity. This tension is identified in the entries to this year's Admap Prize, which posed the question 'Does Big Data Inspire or Hinder Creative Thinking?' I think the question gets to the heart of the debate and anxiety around data.
Some years ago, we met a client who was wildly excited about large customer data sets. "It's the granularity that's so amazing," he enthused. "For instance, people who shop in petrol stations on a Thursday…" and so it went on. Eventually, we asked a simple question: what was happening to market share? He seemed slightly annoyed. Market share wasn't relevant for a complex business like his, he said. He wasn't selling baked beans!
So we analysed his data in a different way, not drilling down into the detail but aggregating up to find the trends. And we quickly found patterns that his data-mining techniques had missed. We identified six key measures of market share, and all were in long-term decline.
It is commonly assumed that the more data you have, the better. But in our experience, the more granular the data, the harder it is to see the wood for the trees. Digital data is often daily or hourly, which makes it easier to measure short-term marketing effects. But it makes it harder to measure long-term effects, which get lost in the noise. Similarly, if you analyse sales by store and SKU, the effects of promotions seem huge. But brand-level data shows they are much smaller once cannibalisation and store-switching are taken into account.
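The wood-for-the-trees point can be illustrated with a toy simulation (the numbers are entirely hypothetical, not drawn from any client data): daily sales with a slow downward trend buried in day-to-day noise. At daily granularity the series looks like random jitter; aggregating up, as the post suggests, makes the long-term decline obvious.

```python
import random

random.seed(42)

# Hypothetical series: three years of daily sales with a slow decline
# (-0.01 units per day) swamped by day-to-day noise (sd = 5).
daily = [100 - 0.01 * t + random.gauss(0, 5) for t in range(3 * 365)]

# Day-to-day, adjacent observations routinely move several units in either
# direction, dwarfing the trend. Aggregating to yearly means averages the
# noise away and exposes the decline.
yearly = [sum(daily[y * 365:(y + 1) * 365]) / 365 for y in range(3)]
print([round(m, 1) for m in yearly])  # steadily falling yearly averages
```

The same logic applies to the store/SKU point: the finer the slice, the larger the local swings look relative to the underlying brand-level movement.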
This post is by David T. Scott, CMO of Gigya.
As a marketer, nothing is more rewarding or lucrative than knowing exactly who your customers are, and being able to provide them with what they want, when they want it, and how they want it. As a customer, nothing can be more frustrating than receiving marketing communications from brands that disregard all of this.
Achieving a long-lasting business-to-customer relationship requires a significant amount of data-driven intelligence, as well as the willingness to embrace new advances in marketing and data management technologies. According to Teradata, just 18 per cent of marketers say they have a single, integrated view of customer actions.
Some businesses are able to thrive by understanding their customers on a granular level, while others struggle to paint a picture beyond simple demographic data. However, two things are abundantly clear. Firstly, the more brands learn about their customers' identities, the more effective they are at marketing to them. Secondly, irrelevant marketing communications are a waste of both time and money at best. At worst, these irrelevant messages can even cause offence. In order to best understand customers and avoid such instances, organisations must break through the identity barrier and market in a more personalised fashion.
This post is by Helen Rose, head of the7stars' Lightbox.
The Sun's 'Well Hung' splash on May 7 proclaimed that Britain would wake up to a hung parliament. According to the polls, the UK was gearing up for the tightest election in decades. By May 8, the Conservatives had won a majority.
Pollsters are now facing a "post-mortem", launched by the British Polling Council, to determine why their predictions – which vastly underestimated the Conservatives' vote share while simultaneously overestimating Labour's – fell so short of the mark.
A pre-election study of over 1,000 18-24-year-olds by Lightbox, the7stars' research and insight division, produced similar results to the early polls, with Labour the clear frontrunner, taking 30% of the millennial vote. The study also found that 80% said they planned to vote, far more than the 66.1% who actually turned out on Election Day. In short, the polls across the board didn't come close to reflecting the actual results. So what went wrong?
The UK polling industry is currently tearing itself apart over its failure to predict last week’s general election result. Basically, the (mainly online) polls showed both main parties – the Conservatives, led by David Cameron, and Labour, led by Ed Miliband – polling at around 34%, yet it was Cameron who won by a margin (37% to 31%) too great to be explained by statistical error. Plenty of theories have already been advanced, including differential turnout and a ‘late swing’ (a convenient myth in my view). Instead I want to focus on an issue that has been a hot topic in the commercial MR world for at least a decade now: are we asking the right questions?
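The "too great to be explained by statistical error" claim can be made concrete. A minimal sketch, assuming a typical poll sample of around 1,000 respondents (a figure not stated above) and simple random sampling, of the 95% margin of error on a reported 34% share:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A party reported at 34% in a sample of 1,000
moe = margin_of_error(0.34, 1000)
print(f"±{moe * 100:.1f} percentage points")  # roughly ±2.9 points
```

On these assumptions each party's true share could plausibly sit within about three points of its reported figure, but a six-point gap (37% vs 31%) when both parties were shown at 34% lies well outside that band – and the error was replicated across many polls, which pure sampling noise would not do. Real polls use quota or online panel samples rather than simple random samples, so this formula is only a rough benchmark.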
Mark Earls (author of Herd and most recently, Copy Copy Copy) once challenged the market research industry to ‘stop asking silly questions of unreliable witnesses…or at least stop listening to the answers’. Ouch! I thought this harsh because some of us in MR twigged some time ago that people do not always answer the question we think we’re asking them.
People don’t usually ‘lie’ in surveys (why should they?), but often they don’t know their own minds, and sometimes they’re really answering a different question to the one we’re asking. Thus some may interpret a purchase intent question as a kind of ‘brand liking’ scale – I’ll say I’ll buy it because I like it, but I don’t really know if I will. Often we think we’re measuring behaviour when what we’re really measuring is attitude, or a vague disposition.
Are the traditional tools of market research – surveys with explicit, direct questions – still up to the job of measuring brands in the new era? The explosion of new understanding about how the mind works could not have been foreseen by the founders of market research, back in the 50s, but modern practitioners have less excuse for still using more or less the same approaches. Traditional (System 2) methods still dominate: researchers still ask direct questions (and people still answer them), but any marketer or MR professional with even a smattering of knowledge of recent developments in mind science would surely ask: Is that all there is?