The conversation around data privacy has changed as a result of revelations surrounding a British data analytics company’s alleged misuse of Facebook users’ personal data for political targeting. WARC’s Lena Roland argues that a debate over data ethics strategy is long overdue for businesses.
The recent revelations that Cambridge Analytica, a UK data analytics firm, harvested millions of Facebook user profiles – without authorisation – to target and influence US voters in the 2016 Presidential election mark a turning point in the data privacy debate and put the need for a data-ethics strategy firmly on the agenda.
In the connected world, data is an integral part of people's lives, and digital data is increasingly a proxy for an individual's identity. The scandal puts the spotlight on how the misuse of personal data can significantly influence vast aspects of people's lives, from siloed newsfeeds to, arguably, who they elect to run their country.
The controversy also illuminates four stark realities around personal data:
- People have scant control over their data
- Facebook – and other tech giants – have too much power and too little responsibility
- Facebook has continually neglected to safeguard its users’ data and privacy
- Firms such as Cambridge Analytica use data in ways that are ethically questionable, and possibly illegal
The fact is, most people do not know how, where, why, when, by whom and for what purposes their personal data is being used – or more aptly, misused.
While Mark Zuckerberg, founder of Facebook, insists he cares about users’ privacy, the fact is, he has spent the last 14 years apologising for his company's ‘data use’ blunders.
If a service is free, you are the product. In other words, data about you is being sold to advertisers. This is the dominant business model that underpins 'free' services such as Facebook and Google. However, this business model is now under intense scrutiny and seen as ethically questionable, with some calling it “surveillance marketing”. And this should be of grave concern for brands, many of which have spent years – and billions of dollars – building their brand equity, reputation and consumer trust.
Moreover, the marketing industry has come to borrow a hostile language: tracking, monitoring, profiling, targeting, scraping – this is the discourse of surveillance. It is also the discourse that marketers use every day.
As individuals wake up to the fact their data is valuable, they will start to ask more questions – and demand answers – about how it can be protected.
The empowered consumer
As consumers wake up, and wise-up, it is time for the marketing industry to wake up too.
Marketers have ongoing issues around brand safety, invalid/bot traffic, and ineffective targeting. The latter issue prompted what author Doc Searls called "the biggest boycott in human history” in the form of ad blockers.
Ad blocking, software that enables web users to block, filter or remove advertising content on a webpage, is on the rise. According to WARC Data, there were 616m devices blocking ads in January 2017, up 25.5% year on year. Consumers are sending a clear message: stop “creepy” online ads that “stalk” them around the internet, stop serving irrelevant ads for irrelevant products, stop retargeting for products they may already have bought, and stop being intrusive. These tactics are ineffective – and annoying.
Being served irrelevant and annoying advertising messages is one thing. Being associated with psychographically targeted messaging that plays on people's hopes and fears in ways that might influence how they think and who they vote for takes it to another level. And the covert nature of how this is done, the lack of transparency, makes it very murky indeed.
With the aim of taming the digital Wild West, later this week the General Data Protection Regulation will come into force, providing a framework for more ethical data processing practices. The changes required of businesses ahead of its implementation provide marketers with the opportunity to define their own data ethics strategy.