Cambridge Analytica has shaped people’s views of psychological targeting. Professor Sandra Matz argues for an ethical approach that can restore its reputation.

My world turned upside down in December 2016. Suddenly, something that only a handful of academics and marketing futurists had cared about became front-page news the world over. Something that helped plunge an already headline-stricken Facebook into further crisis. And something that brought the nascent field of research I work in to mainstream attention.

Suddenly, we all wanted to know about psychological targeting – but for all the wrong reasons. I speak, of course, of the Cambridge Analytica scandal.

The company had allegedly created detailed psychological profiles of millions of US citizens during the 2016 presidential election. It had purportedly used those profiles to target voters with highly personalised Facebook ads essentially designed to discourage them from going to the polls and voting for Hillary Clinton.

For all the fear and fury the media tempest brought, it also brought psychological profiling out of the shadows. The technology has huge potential to shape our lives for the better; let’s take the opportunity to debate how and why we should use it.

And we should debate – psychological targeting is here to stay, whether we like it or not, so it is critical that it is not left only in the hands of bad actors prepared to use these powerful techniques covertly, for nefarious purposes. We don’t know for sure, and may never know, whether this is indeed what Cambridge Analytica did, but we do know that it could have. The likelihood that someone else will use psychological targeting in an illegal or immoral way is high. It’s incumbent on us all to determine the road we take next.

Psychological targeting: a powerful new tool for a data-fuelled age

I often get asked whether psychological targeting is just dirty manipulation that unfairly changes the rules of engagement. Of course, in some contexts it most likely is (Cambridge Analytica), but it doesn’t have to be that way. It depends on how we use it. Just keep in mind that all communication in our interpersonal relationships is somewhat tailored – and you probably tailor yours naturally. You don’t talk to a child the same way you would to your boss or your partner. You’d speak to your outgoing friends in a different manner than to your shyer friends.

To some extent, psychological targeting merely replicates this in the online world, at scale. Where once you were just a number – we had a sense of what you did, but not why – brands, companies and political parties can now have that deeper, more personal relationship with you, for the better.
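To make that idea of tailoring at scale concrete, here is a minimal, purely hypothetical sketch in Python: given a personality score that has already been predicted elsewhere, a campaign simply serves the message variant matched to it. The score, the threshold and the copy lines are my illustrative assumptions, not any real system’s.

```python
# Hypothetical sketch: the core mechanic of psychological targeting is
# matching a message variant to a predicted trait score.
AD_COPY = {
    "extravert": "Bring your friends along - the fun starts here!",
    "introvert": "Your quiet moment of calm awaits.",
}

def pick_variant(predicted_extraversion: float, threshold: float = 0.5) -> str:
    """Return the ad copy matched to a predicted extraversion score in [0, 1]."""
    key = "extravert" if predicted_extraversion >= threshold else "introvert"
    return AD_COPY[key]

print(pick_variant(0.8))  # outgoing profile -> sociable message
print(pick_variant(0.2))  # reserved profile -> calm message
```

The tailoring itself is trivial; what makes the technique powerful, and contentious, is the prediction step that feeds it.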

Forget Cambridge Analytica for the time being, and imagine instead a politician such as Barack Obama using that technology to try to truly understand his voters. In the last US election, some 40% of people stayed at home because they were completely disengaged from politics. Politicians are routinely accused of not caring about their constituents, of not listening or speaking their language. Consider psychological targeting as a tool for politicians to engage with you and understand what you care about, and it becomes apparent that it could be a huge opportunity to bring people back into politics – because if almost half the population isn’t voting, then we’re not really making the most of democracy.

Perhaps its use in politics is a step too far, too soon. Perhaps we collectively decide we don’t want psychological targeting anywhere near the democratic process. Fine – but that should be an informed discussion and decision. The alternative is a knee-jerk reaction – banning its commercial use, for instance – whilst those bad actors continue to exploit it maliciously in the shadows.

The blurring lines between public and private

With the democratisation of the internet, everyone has the ability to both create and broadcast content. But once a post or picture is publicly shared, it becomes almost impossible to make it private again. Even if the original post has been deleted, it has left a digital record that is difficult, if not impossible, to remove – it may, for instance, have been publicly shared by others, as many a celebrity or politician has found to their cost.

More problematic still is that data knowingly made public by an individual in one specific context can be transferred to another context and used to infer information that the owner never intended to reveal. For example, users do not expect their social media profiles to be used to infer their political or sexual orientation. A user might be willing to share their data in order to receive personalised advertising for their favourite sporting events, for instance, but might be opposed to the same data being used in the context of political campaigns.
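To show how little machinery that kind of cross-context inference requires, here is a sketch in Python using synthetic data and a plain logistic regression. Everything in it is an assumption made for illustration, though published research (for example Kosinski, Stillwell and Graepel’s 2013 PNAS study) demonstrated the same effect on real Facebook likes.

```python
# Synthetic illustration: a simple classifier trained on public "likes"
# recovers an attribute the users never explicitly stated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n_users, n_pages = 1_000, 50

# 0/1 matrix: did user u like page p?
likes = rng.integers(0, 2, size=(n_users, n_pages))

# Hidden attribute, loosely correlated with the first five pages.
signal = likes[:, :5].sum(axis=1) + rng.normal(0, 1, n_users)
attribute = (signal > 2.5).astype(int)

# Train on 800 users, evaluate on the remaining 200.
model = LogisticRegression(max_iter=1000).fit(likes[:800], attribute[:800])
print(f"Held-out accuracy from likes alone: {model.score(likes[800:], attribute[800:]):.0%}")
```

Nothing in the likes themselves announces what is being inferred from them – which is precisely why users cannot anticipate it.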

The privacy paradox and privacy by design

One potential solution to the privacy paradox – the well-documented gap between how much people say they value their privacy and how freely they actually share their data – is privacy by design: the integration of proactive privacy protection into the design, development and application of data systems and technologies. Policy makers should also consider regulation that directly addresses psychological profiling, perhaps, as suggested above, restricting its use in certain contexts such as political advertising.

In an ideal world, regulation would guarantee that – whatever the context – certain professional, legal and ethical standards are upheld, allowing users to navigate today’s complex privacy landscape more easily and with a higher level of trust.

In this world, approaches to privacy protection could be accompanied by a principle that is focused on opportunities rather than challenges: disclosure by choice – letting users actively decide what they reveal, to whom, and for which purposes.
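What disclosure by choice might look like in software is easy to sketch. The snippet below is a deliberately simplified, hypothetical design – a per-purpose consent ledger consulted before any use of a person’s data – rather than a reference implementation.

```python
# Hypothetical "disclosure by choice" sketch: every data use must name a
# purpose, and that purpose must have been explicitly granted by the user.
from dataclasses import dataclass, field

@dataclass
class ConsentLedger:
    granted: set = field(default_factory=set)  # purposes the user opted into

    def grant(self, purpose: str) -> None:
        self.granted.add(purpose)

    def allows(self, purpose: str) -> bool:
        return purpose in self.granted

ledger = ConsentLedger()
ledger.grant("sports_ads")  # the user chose to disclose for this purpose only

print(ledger.allows("sports_ads"))     # True  - personalised sports offers OK
print(ledger.allows("political_ads"))  # False - never opted in, so use is denied
```

The design choice that matters is the default: nothing is allowed until the user has actively said yes to that specific purpose.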

When implemented in an ethical way that puts the user first, predictive technologies such as psychological targeting have the potential to improve people’s lives.

Sandra Matz is a Professor at Columbia Business School and the co-author of Fast Forward Files Volume 2: Changing Perspective: Why everything will be different for generation next, available now on Amazon.