In the pursuit of greater personalisation, we're crossing the boundary between clever and creepy. We have reached advertising's 'uncanny valley', the point at which technology becomes so human-like that it makes people uneasy, argues UM's Michael Hanbury-Williams.

The recent Facebook and Cambridge Analytica scandal is a prime example of this, with data harvested from users without their knowledge or permission, but it is a symptom of a wider problem. Witness Alexa's cackle and the ongoing paranoia that the likes of Amazon and Google are listening in on our conversations; there's an entire Reddit thread dedicated to just this. We've reached the 'uncanny valley', and we're starting to make people uncomfortable, particularly the large swathes of the general public who don't know how the advertising industry uses their data.

From personal experience I can attest to this. I'd been talking to my girlfriend about dogs. I've never searched for dogs online, or mentioned them anywhere online, but by a strange coincidence I was served an ad for dog food. However, I work in digital, so I know how this will have happened. My girlfriend could have searched for it on her phone while using my Wi-Fi, or I could have walked past a popular dog-walking park while my phone was passing back my location history through third-party app SDKs, and a dog food company used that data to target me on the off-chance I own a dog.

This is unsettling even for me, and I have a fairly good idea about how this data is collected and used. For the average consumer who doesn’t have this knowledge, this is more than just unsettling. It’s creepy.

The challenge for brands, then, is knowing where to draw the line when using data. How far is too far? Unhelpfully, the line is blurry and it shifts from brand to brand. Timing, location, the news agenda and many other variables also play a part. One misplaced ad, or one incorrect use of data, can shut down a potential sale and damage brand equity, should the intended target bite back through social channels.

Consumers 'get' personalised advertising, but only on a basic level. Most understand that advertisers and brands can track what we search for online, what we buy and where we buy it, and that this lies on the right side of the line. Using personal data such as political beliefs, health or sexual orientation most certainly does not, even where users have given ostensible consent and shared that data with brands.

Data that might be considered 'personal' can make customers uncomfortable. Using demographic signals such as gender to split out creative messaging in online ads is widespread throughout the industry, particularly in the programmatic world, but even this practice has its pitfalls. One brand that found this out the hard way is Urban Outfitters. What seemed like an innocuous and smart use of data, tracking a user's gender and adapting its homepage to reflect it, led to a backlash against the brand for overstepping the mark and using data inappropriately.

A great way to get audiences on side is to make personalised ads entertaining or useful. Spotify's billboard campaign "Thanks 2016, it's been weird", which ran at the end of 2016, collated some of the funniest playlists and listening facts from users and was very well received by the public. So well, in fact, that the brand ran a similar campaign a year later, "2018 goals", which suggested new year's resolutions based on listening behaviour.

However, while this may look like a smart template to follow, you can't always predict how it will be received by your audience. Netflix took a similar approach in a tweet at the end of last year: "To the 53 people who've watched A Christmas Prince every day for the past 18 days: Who hurt you?" While some found it hilariously entertaining, the company received its fair share of backlash from the Twittersphere. Beyond the 'shaming' aspect, many felt it was disturbing that the platform kept such close tabs on its audience. Even when brands are careful and stick to 'uncontroversial' topics, they can still get it badly wrong.

Getting personalised advertising right is a real art. Concern over the misuse of data by platforms and advertisers is now one of the industry's most pressing problems, and it has reinforced the need for new measures to police the use of data both internally and externally. With GDPR coming into force in May, it is more important than ever that brands are sensitive to customers' privacy.

Self-regulation is in digital businesses' own interests, and Google has recently started offering Chrome users a way to report an ad for "knowing too much". It also offers the means to block individual advertisers and specific ads on Google searches, YouTube, Gmail and independent sites. Amazon is taking a different approach by letting brands tap into its recommendation engine technology: by opening up its graph database service, Neptune, Amazon is helping its customers better understand consumer behaviour and get personalisation right in the first place.

Clearly, the tension between what we want to do as advertisers (use data in smarter ways to drive business growth) and what our audience and prospective customers want the advertising they see to be (relevant, timely and, where possible, entertaining) is not going away any time soon. We need to show that we have our customers' best interests at heart with the advertising we create. We can't be blinkered by data and we can't creep people out. We need to keep the human element of what we do intact, because by reaching advertising's uncanny valley we run the risk of making ads that alienate the very people we're trying to persuade. We risk losing those people for good.