This post is by Jon Buss, Managing Director EMEA at Criteo.
Anyone working in the marketing industry knows all too well the necessity of proving the worth of corporate communications to those at the top. With competition increasing across all market sectors, businesses are judging every activity against the bottom line, and qualitative measures of impact are no longer enough for the c-suite.
Attribution modelling appears at first to be a simple solution to the problem: a method of measuring the financial impact of communications against business objectives such as revenue, profit, customer retention and new business. However, measuring the effects of advertising, marketing and corporate messaging on the bottom line is far from simple, and requires multiple tools and techniques to establish a quantitative picture.
Communications have traditionally been measured in non-financial terms, using variables such as the business's share of voice within the industry, visits to the corporate website, click-through rates and impressions. While these are legitimate parts of the marketer's toolbox, their importance rarely translates to the c-suite, where executives speak in terms of financial return on investment (ROI). Attribution models therefore give marketers a tool for justifying their activities and budgets in terms that the organisation's decision makers can clearly understand and appreciate.
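To make the idea concrete, here is a minimal sketch of two of the simplest attribution models. The channel names, journey and revenue figure are invented for illustration; real attribution tools use far richer models than these.

```python
def last_touch(journey, revenue):
    """Assign all revenue to the final touchpoint before conversion."""
    return {journey[-1]: revenue}

def linear(journey, revenue):
    """Split revenue evenly across every touchpoint in the journey."""
    share = revenue / len(journey)
    credit = {}
    for channel in journey:
        credit[channel] = credit.get(channel, 0.0) + share
    return credit

# A hypothetical customer journey ending in a £100 sale.
journey = ["display", "email", "search", "search"]
print(last_touch(journey, 100.0))  # search gets all the credit
print(linear(journey, 100.0))      # credit is spread across channels
```

The point of the comparison is that the same journey yields very different channel valuations depending on the model chosen, which is why attribution requires multiple tools and techniques rather than a single number.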
One major accusation levelled at the pollsters in their failure to correctly predict the likely outcome of the UK general election back in May was that their incorrect forecasts influenced both the intentions of voters, and the strategies of the political parties.
Recently, I've seen two further examples of how we have to think carefully about the consequences, intended or unintended, when conducting research.
The first echoes the issues surrounding political opinion polls. As you probably know, the heat is already being turned up under the 2016 US presidential election, with the first round of TV debates among prospective Republican contenders launching their primary campaigns and producing plenty of contentious statements and debate. The example is a blog post by John Dick (CEO of CivicScience, Pittsburgh) warning pollsters of the danger that findings based on inadequate samples can achieve a disproportionately high 'audience reach' through media coverage. In support of his case, Dick cites two recent US national surveys of Republicans' political attitudes based on samples of 252 (Wall Street Journal) and 423 (Monmouth University).
This post is by Alex Kuhnel, Chief Operating Officer at Kantar Media TGI.
There has been much hand-wringing recently in the digital advertising industry over the threat posed by ad blocking, as new software is launched promising to block ads on mobiles. Advertisers and trading desks worry about how much take-up it will see, and how they can fight back.
At the same time, a debate has been going on about whether the content of programmatic ads is up to scratch when compared to the quality of other types of advertising.
In fact, both worries tap into a deeper truth about programmatic: cookie-based advertising's promise of targeting a browser wherever it goes online disregards the all-important match between an ad and the environment in which it appears. The weaker the connection between advertising and context, the less receptive the consumer is likely to be.
This post is by Karl Weaver, CEO of Data2Decisions.
Change is a good thing. It forces us to think differently and re-examine the norms we take for granted. For the creative industry, data and technology have been an explosive catalyst for change, forcing the uncomfortable debate about whether data and creativity can work together to produce not only more effective but also more emotionally engaging creative work. There were early distractions as the data ‘geeks’ and creatives were pitted against each other, but thankfully the debate about whether data helps or hinders creativity is nearing completion. The two worlds have well and truly collided and we are finally ‘doing’ the collaboration we’ve been talking about for so long. The results so far have been very promising.
Take artificial intelligence, for example: one of the most exciting, if not frightening, collisions of data, creativity and technology we’ve seen yet. The technology has advanced in leaps and bounds over the past decade, with investors pouring millions into robotics companies now bringing interactive and emotionally intelligent robots to consumers en masse. Earlier this year, Pepper, a humanoid robot with the emotional capacity to understand and communicate with humans, went on sale in Japan. Its creator, Aldebaran Robotics, sold out 1,000 units, priced at £1,107 each, in less than a minute. The demand is real and the possibilities endless.
This post is by Chris Pinner, sponsorship analyst at Synergy Sponsorship.
Closing the Telegraph's Business of Sport article on 'The importance of social media in sport', Synergy CEO Tim Crow says rightsholders "need to focus less on selling price and impressions and much more on delivering engagement and value".
He's right – value metrics are the future. And with more words set to be published on Twitter in the next two years than in all books ever printed, the cost of getting social media measurement wrong – by relying on vanity metrics such as "likes" and "clicks" – is set to skyrocket. This blog aims to provide a quick guide to moving sponsorship towards better social media measurement.
The majority of data points available in off-the-shelf analytics packages are what Eric Ries, author of The Lean Startup, calls vanity metrics: they might make you feel good, but they don't offer clear guidance on what actions to take. Put another way, they do not help make decisions about how to drive value. Since around 80% of companies use vanity metrics, it's clear that sponsorship must move from vanity to value in social media ASAP.
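A minimal sketch of why vanity metrics mislead: two hypothetical posts with identical "likes" can differ enormously in the value they actually generate. All names and numbers below are invented for illustration.

```python
# Two posts with the same vanity metric (likes) but very
# different downstream value (revenue attributed to each post).
posts = [
    {"name": "post_a", "likes": 5000, "revenue": 40.0},
    {"name": "post_b", "likes": 5000, "revenue": 900.0},
]

for p in posts:
    vanity = p["likes"]                # looks impressive, drives no decision
    value = p["revenue"] / p["likes"]  # revenue per engagement: actionable
    print(p["name"], vanity, value)
```

Ranked by likes, the two posts are indistinguishable; ranked by revenue per engagement, one is over twenty times more valuable, which is the kind of signal a sponsorship decision can actually be based on.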
As many of you will know, the British Polling Council (BPC) and MRS have launched an inquiry into the performance of the opinion polls in the UK preceding the May general election. A distinguished panel of experts has been appointed, chaired by Patrick Sturgis (U. Southampton and Director of the National Centre for Research Methods). There are two key differences between this inquiry and the one set up by MRS in 1992. Firstly, the 2015 panel is totally independent of the polling sector, comprising mainly academics (see the BPC website for details), whereas in 1992 leading pollsters predominated. Secondly, the 1992 inquiry's final report was not published until July 1994 (with an initial view by June 1992), whereas the latest panel hopes to publish its report in early March 2016.
Initial open meeting
The BPC/MRS hosted an initial open meeting, run by the National Centre for Research Methods, on the afternoon of June 19th, held appropriately at the Royal Statistical Society in London, and on the day that the possibility of a Bill in Parliament to limit polling in the run-up to future elections was mooted. The agenda mainly comprised representatives of each of the main polling companies (ICM, Opinium, ComRes, Survation, Ipsos-Mori, YouGov, Populus) presenting their interpretation of the situation and outlining their plans for internal enquiries. All started with a mea culpa, and all agreed that being within 'sampling error' (however that is defined or measured, given the way samples are drawn today) was not a good enough excuse for mispredicting the outcome. It was a very sackcloth-and-ashes affair. In his opening address, John Curtice (U. Strathclyde), the BPC President, stressed the impact the polls had on how the campaign was fought, with the caveat that no detailed analysis of this impact has yet emerged, and that it is in any case outside the remit of the inquiry, which is focusing on methodological issues.
Is Britain 'a nation of liars?'
So are we 'a nation of liars', as posed by Ivor Crewe in his 1993 JMRS paper analysing the 1992 situation and my April IJMR Landmark Paper selection (JMRS Vol 35 No 4)? There was little current evidence to support a late swing of any significance, based on the results of post-election polls, but do the recall polls suffer from the same methodological problems as the pre-election polls?
Prediction is difficult, as physicist Niels Bohr once said, especially about the future.
In fact, we don't know if he said it. He may have, but it has no textual reference. It just seems to have been associated with his name. It's not even hearsay. It's what I like to call a 'fauxtation' – a line that's falsely attributed to someone famously smart or creative.
All of which goes to show it's very difficult to know if someone actually said something, or what they meant, or if they are lying – which is relevant to the most recent failure of market research to predict the outcome of the UK General Election.
The election results came as something of a surprise: the Conservative Party won by a substantial margin, yet every single published poll had predicted that the Conservatives and Labour were running neck and neck. The incorrect predictions were so noteworthy that they now even have their own Wikipedia entry.
Recently we've been helping some of our clients assess their latest ad campaign. It's a great little campaign, which seems to have helped boost sales and market share, but evaluation is complicated because of the number of media used. The bulk of the budget was spent on traditional media, particularly TV and outdoor, but the remainder was spent on a mix of digital channels, mobile messaging and PR stunts. Working out the contribution of each is a challenge.
At the first meeting, our clients presented a detailed review of each strand. And something immediately struck us as odd. Traditional media, which accounted for almost three-quarters of the budget, were dismissed in about 15 minutes. Then nearly two hours was devoted to the smaller, newer media. In fact, it almost seemed that the less money was spent on a channel, the more attention it got.
One reason was that there was simply more data on the newer, digital channels. Slide after slide was presented, crowded with figures on the number of views, clicks, likes, shares, tweets, followers, comments, and uploads. Dwell times and conversion metrics were analysed in exquisite detail. But for TV, only one number was presented: the cost. This is a clear example of the data tail wagging the evaluation dog. Rather than focusing on what was important (i.e. the media where most money was at stake), we found ourselves focusing on what was easy to measure.
GreenBook provided a 'Sneak Peek' of the findings from their latest GRIT (Research Industry Trends) survey in a webinar on May 14th, with a panel of research sector leaders to discuss the key points.
According to the survey, the biggest challenge is around technology. At the heart of the findings, and of the discussion, was the decades-old dilemma of what the market research sector should be. Our heritage, and much of our skill-set, expertise and experience, is vested in methodologies for collecting primary data to provide fresh insights into consumer, and citizen, behaviour. Not simply to identify the 'who', 'what', 'where' and 'when', but importantly the 'why'.
However, our clients seem to be busy with analytics and data integration in the 'new' world of big data, questioning what we might bring to the party. This is, of course, NOT a new finding – it emerged in the 1980s as database marketing enabled clients to undertake their own 'research' by either recruiting analysts, or turning to the new breed of marketing analytical companies that were getting off the ground.
No issue divides the creative community quite like the contribution of data to the creative process. The term 'Big Data' is apt to foment fits of apoplexy in some, who view data as the enemy at the gates, a dagger to the heart of creativity. Others see data as a panacea for all marketing's ills, in a Holy Grail quest to form one-to-one relationships with customers, eliminating all marketing wastage.
Somewhere between these extremes there is a consensus view that the mass of data now being generated from consumer activity can be a positive if channelled appropriately. Data can assist the creative process, if it isn't allowed to suppress human instinct and ingenuity. It can help develop the big idea, or the little idea, as long as it doesn't frustrate the advent of a 'eureka light bulb moment'. Data can finesse the media strategy, as long as the human skill in media selection is not overridden by the attraction of the algorithmic efficiencies inherent in programmatic media buying.
But there is a tension between data and creativity. This tension is identified in the entries to this year's Admap Prize, which posed the question 'Does Big Data Inspire or Hinder Creative Thinking?' I think the question gets to the heart of the debate and anxiety around data.