This post is by Alex Kuhnel, Chief Operating Officer at Kantar Media TGI.
There has been much hand-wringing recently in the digital advertising industry over the threat posed by ad blocking, as new software is launched promising to block ads on mobiles. Advertisers and trading desks worry about how much take-up this will see and how they can fight back.
At the same time, a debate has been going on about whether the content of programmatic ads is up to scratch when compared to the quality of other types of advertising.
In fact, both worries tap into a deeper truth about programmatic: cookie-based advertising's promise of targeting a browser wherever it goes online disregards the all-important match between an ad and the environment in which it appears. The weaker the connection between advertising and context, the less receptive the consumer is likely to be.
This post is by Karl Weaver, CEO of Data2Decisions.
Change is a good thing. It forces us to think differently and re-establish the norms we take for granted. For the creative industry, data and technology have been an explosive catalyst for change, forcing the uncomfortable debate about whether data and creativity can work together to produce not only more effective, but more emotionally engaging creative work. There were early distractions as the data ‘geeks’ and creatives were pitted against each other, but thankfully the debate about whether data helps or hinders creativity is nearing completion. The two worlds have well and truly collided and we are finally ‘doing’ the collaboration we’ve been talking about for so long. The results so far have been very promising.
Take artificial intelligence, for example: one of the most exciting, if not frightening, collisions of data, creativity and technology we’ve seen yet. The technology has advanced in leaps and bounds over the past decade, with investors pouring millions into robotics companies now bringing interactive and emotionally intelligent robots to consumers en masse. Earlier this year, Pepper, a humanoid robot with the emotional capacity to understand and communicate with humans, went on sale in Japan. Its creator, Aldebaran Robotics, sold out 1,000 units priced at £1,107 each in less than a minute. The demand is real and the possibilities endless.
This post is by Chris Pinner, sponsorship analyst at Synergy Sponsorship.
Closing the Telegraph's Business of Sport article on 'The importance of social media in sport', Synergy CEO Tim Crow says rightsholders "need to focus less on selling price and impressions and much more on delivering engagement and value".
He's right – value metrics are the future. And with more words set to be published on Twitter in the next two years than in all books ever printed, the cost of getting social media measurement wrong – by using vanity metrics such as "likes" and "clicks" – is set to skyrocket. This blog aims to provide a quick guide to moving sponsorship towards better social media measurement.
The majority of data points available in off-the-shelf analytics packages are what Eric Ries, author of The Lean Startup, calls Vanity Metrics – they might make you feel good, but they don't offer clear guidance on what actions to take. Put another way, they do not help make decisions on how to drive value. Since around 80% of companies use vanity metrics, it's clear that sponsorship must move from vanity to value in social media ASAP.
As many of you will know, the British Polling Council (BPC) and MRS have launched an inquiry into the performance of the opinion polls in the UK preceding the May general election. A distinguished panel of experts has been appointed, chaired by Patrick Sturgis (U. Southampton and Director of the National Centre for Research Methods). There are two key differences between this inquiry and the one set up by MRS in 1992. Firstly, the 2015 panel is totally independent of the polling sector, comprising mainly academics (see the BPC website for details), whereas in 1992 leading pollsters predominated. Secondly, the 1992 final report was not published until July 1994 (with an initial view by June 1992), whereas the latest panel hopes to publish its report in early March 2016.
Initial open meeting
The BPC/MRS hosted an initial open meeting, run by the National Centre for Research Methods, on the afternoon of June 19th, held appropriately at the Royal Statistical Society in London, and on the very day that a possible Bill in Parliament to limit polling in the run-up to future elections was mooted. The agenda mainly comprised representatives of each of the main polling companies (ICM, Opinium, ComRes, Survation, Ipsos-Mori, YouGov, Populus) presenting their interpretation of the situation and outlining their plans for internal enquiries. All started with a mea culpa statement, and all agreed that being within 'sampling error' (however that is defined or measured, given the way samples are drawn today) was not a good enough excuse for mispredicting the outcome. It was a very sackcloth and ashes affair. John Curtice (U. Strathclyde), the BPC President, in his opening address stressed the impact the polls had on how the campaign was fought, with the caveat that no detailed analysis of this impact has yet emerged, and that it is in any case outside the remit of the inquiry, which is focussing on methodological issues.
Is Britain 'a nation of liars?'
So are we 'a nation of liars?', as posed by Ivor Crewe in his 1993 JMRS paper analysing the 1992 situation and my April IJMR Landmark Paper selection (JMRS Vol 35 No 4; IJMR Landmark Paper). There was little current evidence to support a late swing of any significance, based on the results of post-election polls, but do the recall polls suffer from the same methodological problems as the pre-election polls?
Prediction is difficult, as physicist Niels Bohr once said, especially about the future.
In fact, we don't know if he said it. He may have, but it has no textual reference. It just seems to have been associated with his name. It's not even hearsay. It's what I like to call a 'fauxtation' - a line that's falsely attributed to someone famously smart or creative.
All of which goes to show it's very difficult to know if someone actually said something, or what they meant, or if they are lying - which is relevant to the most recent failure of market research to predict the outcome of the UK General Election.
The election results came as something of a surprise: the Conservative Party won by a substantial margin, yet every single published poll had predicted that Labour and the Conservatives were running neck and neck. The incorrect predictions were so noteworthy that they now even have their own Wikipedia listing.
Recently we've been helping some of our clients assess their latest ad campaign. It's a great little campaign, which seems to have helped boost sales and market share, but evaluation is complicated because of the number of media used. The bulk of the budget was spent on traditional media, particularly TV and outdoor, but the remainder was spent on a mix of digital channels, mobile messaging and PR stunts. Working out the contribution of each is a challenge.
At the first meeting, our clients presented a detailed review of each strand. And something immediately struck us as odd. Traditional media, which accounted for almost three-quarters of the budget, were dismissed in about 15 minutes. Then nearly two hours was devoted to the smaller, newer media. In fact, it almost seemed that the less money was spent on a channel, the more attention it got.
One reason was that there was simply more data on the newer, digital channels. Slide after slide was presented, crowded with figures on the number of views, clicks, likes, shares, tweets, followers, comments, and uploads. Dwell times and conversion metrics were analysed in exquisite detail. But for TV, only one number was presented: the cost. This is a clear example of the data tail wagging the evaluation dog. Rather than focusing on what was important (i.e. the media where most money was at stake), we found ourselves focusing on what was easy to measure.
GreenBook provided a 'Sneak Peek' of the findings from their latest GRIT (Research Industry Trends) survey in a webinar on May 14th, with a panel of research sector leaders to discuss the key points.
According to the survey, the biggest challenge is around technology. At the heart of the findings, and of the discussion, was the decades-old dilemma of what the market research sector should be. Our heritage, and much of our skill-set, expertise and experience, are vested in methodologies for collecting primary data to provide fresh insights into consumer, and citizen, behaviour. Not simply to identify the 'who', 'what', 'where' and 'when', but importantly the 'why'.
However, our clients seem to be busy with analytics and data integration in the 'new' world of big data, questioning what we might bring to the party. This is, of course, NOT a new finding – it emerged in the 1980s as database marketing enabled clients to undertake their own 'research' by either recruiting analysts, or turning to the new breed of marketing analytical companies that were getting off the ground.
No issue divides the creative community quite like the contribution of data to the creative process. The term 'Big Data' is apt to foment fits of apoplexy in some, who view data as the enemy at the gates, a dagger to the heart of creativity. Others see data as a panacea for all marketing's ills, in a Holy Grail quest to form one-to-one relationships with customers, eliminating all marketing wastage.
Somewhere between these extremes there is a consensus view that the mass of data now being generated from consumer activity can be a positive if channelled appropriately. Data can assist the creative process, if it isn't allowed to suppress human instinct and ingenuity. It can help develop the big idea, or the little idea, as long as it doesn't frustrate the advent of a 'eureka light bulb moment'. Data can finesse the media strategy, as long as the human skill in media selection is not overridden by the attraction of the algorithmic efficiencies inherent in programmatic media buying.
But there is a tension between data and creativity. This tension is identified in the entries to this year's Admap Prize, which posed the question 'Does Big Data Inspire or Hinder Creative Thinking?' I think the question gets to the heart of the debate and anxiety around data.
Some years ago, we met a client who was wildly excited about large customer data sets. "It's the granularity that's so amazing," he enthused. "For instance, people who shop in petrol stations on a Thursday…" and so it went on. Eventually, we asked a simple question: what was happening to market share? He seemed slightly annoyed. Market share wasn't relevant for a complex business like his, he said. He wasn't selling baked beans!
So we analysed his data in a different way, not drilling down into the detail but aggregating up to find the trends. And we quickly found patterns that his data-mining techniques had missed. We identified six key measures of market share, and all were in long-term decline.
It is commonly assumed that the more data you have, the better. But in our experience, the more granular the data, the harder it is to see the wood for the trees. Digital data is often daily or hourly, which makes it easier to measure short-term marketing effects. But it makes it harder to measure long-term effects, which get lost in the noise. Similarly, if you analyse sales by store and SKU, the effects of promotions seem huge. But brand-level data shows they are much smaller once cannibalisation and store-switching are taken into account.
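The cannibalisation point above can be illustrated with a toy calculation. This is a hypothetical sketch with made-up numbers, not the authors' actual analysis: a promotion on one SKU appears to add 50 units a week at SKU level, but 40 of those units are simply switched from a sister SKU, so the brand as a whole gains only 10.

```python
# Hypothetical illustration: promotion uplift looks large at SKU level
# but much smaller at brand level once cannibalisation is counted.

# Weekly unit sales for two SKUs of the same brand (synthetic data).
# In promo weeks, SKU A gains 50 units, but 40 of them are switched
# away from SKU B, so the brand gains only 10.
base_a, base_b = 100, 100
promo_weeks = [False, False, True, True]

sku_a = [base_a + (50 if promo else 0) for promo in promo_weeks]
sku_b = [base_b - (40 if promo else 0) for promo in promo_weeks]

def uplift(series, promo_flags):
    """Average promo-week sales minus average non-promo-week sales."""
    promo = [s for s, p in zip(series, promo_flags) if p]
    nonpromo = [s for s, p in zip(series, promo_flags) if not p]
    return sum(promo) / len(promo) - sum(nonpromo) / len(nonpromo)

sku_uplift = uplift(sku_a, promo_weeks)            # SKU view: +50 per week

brand = [a + b for a, b in zip(sku_a, sku_b)]      # aggregate to brand level
brand_uplift = uplift(brand, promo_weeks)          # brand view: only +10

print(sku_uplift, brand_uplift)
```

The same logic applies to store-switching: aggregating across stores, rather than drilling down into each one, is what reveals the true net effect.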
This post is by David T. Scott, CMO of Gigya.
As a marketer, nothing is more rewarding or lucrative than knowing exactly who your customers are, and being able to provide them with what they want, when they want it, and how they want it. As a customer, nothing can be more frustrating than receiving marketing communications from brands that disregard all of this.
Achieving a long-lasting business-to-customer relationship requires a significant amount of data-driven intelligence, as well as the willingness to embrace new advances in marketing and data management technologies. According to Teradata, just 18 per cent of marketers say they have a single, integrated view of customer actions.
Some businesses are able to thrive by understanding their customers on a granular level, while others struggle to paint a picture beyond simple demographic data. However, two things are abundantly clear. Firstly, the more brands learn about their customers' identities, the more effective they are at marketing to them. Secondly, irrelevant marketing communications are a waste of both time and money at best. At worst, these irrelevant messages can even cause offence. In order to best understand customers and avoid such instances, organisations must break through the identity barrier and market in a more personalised fashion.