Recently we've been helping some of our clients assess their latest ad campaign. It's a great little campaign, which seems to have helped boost sales and market share, but evaluation is complicated because of the number of media used. The bulk of the budget was spent on traditional media, particularly TV and outdoor, but the remainder was spent on a mix of digital channels, mobile messaging and PR stunts. Working out the contribution of each is a challenge.
At the first meeting, our clients presented a detailed review of each strand. And something immediately struck us as odd. Traditional media, which accounted for almost three-quarters of the budget, were dismissed in about 15 minutes. Then nearly two hours were devoted to the smaller, newer media. In fact, it almost seemed that the less money was spent on a channel, the more attention it got.
One reason was that there was simply more data on the newer, digital channels. Slide after slide was presented, crowded with figures on the number of views, clicks, likes, shares, tweets, followers, comments, and uploads. Dwell times and conversion metrics were analysed in exquisite detail. But for TV, only one number was presented: the cost. This is a clear example of the data tail wagging the evaluation dog. Rather than focusing on what was important (i.e. the media where most money was at stake), we found ourselves focusing on what was easy to measure.
GreenBook provided a 'Sneak Peek' of the findings from their latest GRIT (GreenBook Research Industry Trends) survey in a webinar on May 14th, with a panel of research sector leaders to discuss the key points.
According to the survey, the biggest challenge is around technology. At the heart of the findings, and of the discussion, was the decades-old dilemma of what the market research sector should be. Our heritage, and much of our skill-set, expertise and experience, are vested in methodologies for collecting primary data to provide fresh insights into consumer, and citizen, behaviour. Not simply to identify the 'who', 'what', 'where' and 'when', but importantly the 'why'.
However, our clients seem to be busy with analytics and data integration in the 'new' world of big data, questioning what we might bring to the party. This is, of course, NOT a new finding – it emerged in the 1980s as database marketing enabled clients to undertake their own 'research' by either recruiting analysts, or turning to the new breed of marketing analytical companies that were getting off the ground.
No issue divides the creative community quite like the contribution of data to the creative process. The term 'Big Data' is apt to foment fits of apoplexy in some, who view data as the enemy at the gates, a dagger to the heart of creativity. Others see data as a panacea for all marketing's ills, in a Holy Grail quest to form one-to-one relationships with customers, eliminating all marketing wastage.
Somewhere between these extremes there is a consensus view that the mass of data now being generated from consumer activity can be a positive if channelled appropriately. Data can assist the creative process, if it isn't allowed to suppress human instinct and ingenuity. It can help develop the big idea, or the little idea, as long as it doesn't frustrate the advent of a 'eureka light bulb moment'. Data can finesse the media strategy, as long as the human skill in media selection is not overridden by the attraction of the algorithmic efficiencies inherent in programmatic media buying.
But there is a tension between data and creativity. This tension is identified in the entries to this year's Admap Prize, which posed the question 'Does Big Data Inspire or Hinder Creative Thinking?' I think the question gets to the heart of the debate and anxiety around data.
Some years ago, we met a client who was wildly excited about large customer data sets. "It's the granularity that's so amazing," he enthused. "For instance, people who shop in petrol stations on a Thursday…" and so it went on. Eventually, we asked a simple question: what was happening to market share? He seemed slightly annoyed. Market share wasn't relevant for a complex business like his, he said. He wasn't selling baked beans!
So we analysed his data in a different way, not drilling down into the detail but aggregating up to find the trends. And we quickly found patterns that his data-mining techniques had missed. We identified six key measures of market share, and all were in long-term decline.
It is commonly assumed that the more data you have, the better. But in our experience, the more granular the data, the harder it is to see the wood for the trees. Digital data is often daily or hourly, which makes it easier to measure short-term marketing effects. But it makes it harder to measure long-term effects, which get lost in the noise. Similarly, if you analyse sales by store and SKU, the effects of promotions seem huge. But brand-level data shows they are much smaller once cannibalisation and store-switching are taken into account.
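The aggregation point can be illustrated with a small sketch. The figures below are entirely hypothetical (a made-up daily sales series with an assumed slow decline, weekly promotional spikes and random noise), but they show how day-level data is dominated by short-term promotional swings, while aggregating to yearly totals makes the long-term trend obvious.

```python
import random

random.seed(0)

# Hypothetical example: two years of daily sales, with a slow brand
# decline buried under large weekly promotional spikes and noise.
days = 730
daily = []
for t in range(days):
    base = 1000 - 0.2 * t             # long-term decline in baseline sales
    promo = 300 if t % 7 == 0 else 0  # short-term promotional spike
    noise = random.gauss(0, 50)       # day-to-day random variation
    daily.append(base + promo + noise)

# Viewed daily, the promotional spikes dominate the picture...
print("daily swing:", round(max(daily) - min(daily)))

# ...but aggregating up to yearly totals reveals the decline.
year1 = sum(daily[:365])
year2 = sum(daily[365:])
print("year 1:", round(year1), " year 2:", round(year2))
```

Running this, year two comes out well below year one, even though no single day makes the decline visible against the promotional noise.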
This post is by David T. Scott, CMO of Gigya.
As a marketer, nothing is more rewarding or lucrative than knowing exactly who your customers are, and being able to provide them with what they want, when they want it, and how they want it. As a customer, nothing can be more frustrating than receiving marketing communications from brands that disregard all of this.
Achieving a long-lasting business-to-customer relationship requires a significant amount of data-driven intelligence, as well as the willingness to embrace new advances in marketing and data management technologies. According to Teradata, just 18 per cent of marketers say they have a single, integrated view of customer actions.
Some businesses are able to thrive by understanding their customers on a granular level, while others struggle to paint a picture beyond simple demographic data. However, two things are abundantly clear. Firstly, the more brands learn about their customers' identities, the more effective they are at marketing to them. Secondly, irrelevant marketing communications are a waste of both time and money at best. At worst, these irrelevant messages can even cause offence. In order to best understand customers and avoid such instances, organisations must break through the identity barrier and market in a more personalised fashion.
This post is by Helen Rose, head of the7stars' Lightbox.
The Sun's 'Well Hung' splash on May 7 left no doubt: Britain was heading for a hung parliament. According to the polls, the UK was gearing up for the tightest election in decades. Yet by the morning of May 8, the Conservatives had won a majority.
Pollsters are now facing a "post-mortem", launched by the British Polling Council, to determine why their predictions, which vastly underestimated the Conservatives' vote share while simultaneously overestimating Labour's, fell so short of the mark.
A pre-election study by the7stars' research and insight division, Lightbox, of over 1,000 18-24-year-olds revealed similar results to the early polls, with Labour coming out as the clear frontrunner, taking 30% of the millennial vote. The study also found that 80% said they planned to vote, far more than the 66.1% who actually turned up on Election Day. In short, the polls across the board didn't come close to reflecting the actual results. So what went wrong?
The UK polling industry is currently tearing itself apart over its failure to predict last week's general election result. Basically, the (mainly online) polls showed both main parties – the Conservatives, led by David Cameron, and Labour, led by Ed Miliband – polling at around 34%, yet it was Cameron who won by a margin (37% to 31%) too great to be explained by statistical error. There have already been plenty of theories advanced, including differential turnout figures, and 'late swings' (a convenient myth in my view). Instead I want to focus on an issue that has been a hot topic in the commercial MR world for at least a decade now: Are we asking the right questions?
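The "too great to be explained by statistical error" point is easy to check. Assuming a typical poll sample of around 1,000 respondents (an assumption for illustration; actual sample sizes varied), the standard 95% margin of error on a 34% share is under three percentage points, so sampling error alone cannot turn a predicted dead heat into a six-point gap:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of n.

    Assumes simple random sampling; real polls use quotas and weighting,
    which this deliberately ignores.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Both parties polling at ~34% on an assumed n=1,000 sample:
moe = margin_of_error(0.34, 1000)
print(f"+/- {moe * 100:.1f} points")  # roughly +/- 2.9 points
```

Even allowing both parties' figures to err by the full margin in opposite directions, the polls' dead heat is barely compatible with the actual 37%–31% result, which is why the explanation is being sought in methodology (who answers, and what they are really answering) rather than in sampling noise.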
Mark Earls (author of Herd and most recently, Copy Copy Copy) once challenged the market research industry to ‘stop asking silly questions of unreliable witnesses…or at least stop listening to the answers’. Ouch! I thought this harsh because some of us in MR twigged some time ago that people do not always answer the question we think we’re asking them.
People don’t usually ‘lie’ in surveys (why should they?), but often they don’t know their own minds, and sometimes they’re really answering a different question to the one we’re asking. Thus some may interpret a purchase intent question as a kind of ‘brand liking’ scale – I’ll say I’ll buy it because I like it, but I don’t really know if I will. Often we think we’re measuring behaviour when what we’re really measuring is attitude, or a vague disposition.
Are the traditional tools of market research – surveys with explicit, direct questions – still up to the job of measuring brands in the new era? The explosion of new understanding about how the mind works could not have been foreseen by the founders of market research, back in the 50s, but modern practitioners have less excuse for still using more or less the same approaches. Traditional (System 2) methods still dominate: researchers still ask direct questions (and people still answer them), but any marketer or MR professional with even a smattering of knowledge of recent developments in mind science would surely ask: Is that all there is?
This post is by Angela Canin, Senior Manager Development and Editor Research World at ESOMAR.
In researching this theme for the ESOMAR Summer Academy Seminar (1-4 June in Amsterdam) it's become apparent just how complex audiences have become. The implications for all stakeholders are immense, and how, when, where and what to communicate has shifted completely in under a decade.
According to Ansgar Hoelscher, VP marketing intelligence & innovation at Beiersdorf "The old broadcasting paradigm is over. We have to establish a one-to-one connection with consumers and engage in dialogue. That means having something interesting and relevant for the other person – and that's not always the product itself. Relevant content is the name of the game. Relevant means interesting, exciting and useful for the consumer. It's the only way to have good one-to-one dialogue."
In the latest issue of IJMR, we are publishing three papers on the theme of measurement formats. The first is a comprehensive literature review, by Callegaro et al, that in addition to summarising 'best practice' in the search for 'truth' in data collection, also identifies gaps in current published knowledge in this field.
In particular, the authors discuss in detail the impact of using forced-choice versus check-all formats. One major gap identified in the paper is that most research to date in this field covers research undertaken in English-speaking countries, with limited cultural range.
However, our second paper on this theme, by Revilla, starts to address this gap through research undertaken in Spanish-speaking countries, comparing forced-choice and check-all methods in the search for 'truth'. The final paper, by Rossiter and Dolnicar, explores the theme from a brand-image measurement perspective, arguing the case for applying Level-free Forced-Choice Binary measures when undertaking research in that field.