Eaon Pritchard argues that the big opportunity in the application of AIs and Machine Learning to communications is not just about smarter targeting and building a better mousetrap, but about developing a better, more quantified understanding of consumer behavior, unhampered by our own cognitive biases.

Artificial Intelligence and Machine Learning are two buzz phrases being used right now - often interchangeably - but they are not quite the same thing. For our purposes as advertisers, it’s enough to know that one is effectively an application of the other.

Machine Learning is a particular application of AI, based around the idea that - given access to enough data - machines can learn for themselves. Put simply, a machine learning system is fueled by algorithms, and as those algorithms are exposed to new data they teach themselves and grow.
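To make that loop a little more concrete, here is a minimal, hypothetical sketch (Python with scikit-learn, and synthetic data standing in for real consumer signals) of an incremental model whose accuracy improves as it is fed successive batches of new data.

```python
# A toy illustration of 'learning from exposure to new data': an incremental
# classifier is fed successive batches and its accuracy on held-out data
# improves as it sees more examples. Synthetic data only - a sketch, not a
# production pipeline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = SGDClassifier(random_state=0)   # an incremental (online) learner
classes = np.unique(y_train)

# Feed the model one batch of 'new data' at a time; accuracy on the
# held-out set tends to climb as more examples are seen.
for i, (Xb, yb) in enumerate(zip(np.array_split(X_train, 10),
                                 np.array_split(y_train, 10)), start=1):
    model.partial_fit(Xb, yb, classes=classes)
    print(f"after batch {i}: held-out accuracy = {model.score(X_test, y_test):.3f}")
```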

Basic Machine Learning applications can already read and interpret text, making inferences about the tone of what they are reading. All programmatic ad trading is applied AI. Chuck in other applications like self-driving cars, Siri and rudimentary speech recognition, and a lot of this kind of applied AI is all around us, now. But these examples are what the boffins would label ‘narrow’ AI.
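To give a flavour of the text-reading example, here is a tiny, hypothetical sketch of tone inference: a naive Bayes classifier trained on a handful of invented, hand-labelled snippets that then guesses the tone of unseen copy.

```python
# A toy 'tone of text' classifier: count words, fit naive Bayes, and infer
# whether new copy reads as positive or negative. The training snippets are
# invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "love this brand, great service and a brilliant product",
    "fantastic experience, would happily recommend",
    "terrible support, the product broke within a week",
    "awful, overpriced and a complete waste of money",
]
tones = ["positive", "positive", "negative", "negative"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, tones)

print(model.predict(["brilliant service, love it"]))       # -> ['positive']
print(model.predict(["broke after a week, total waste"]))  # -> ['negative']
```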

AIs fed with PII (Personally Identifiable Information) are going to be far more useful in accurately sizing markets, uncovering the real sales and behavioural data, and performing the kind of covert behavioural observation that allows us to group bigger sets of consumers together through shared insights.

As anyone with even a basic understanding of simple network theory will tell you, the value of a network increases as it grows bigger. Machine learning applied to personal information is described nicely for the lay person (or advertising practitioner) in Kevin Kelly’s 2017 book ‘The Inevitable’, in the chapter on ‘Cognifying’ (one of the twelve tech forces he predicts will be the most important over the next couple of decades).

‘The more people who use an AI, the smarter it gets. The smarter it gets, the more people who use it, the more people who use it, the smarter it gets. And so on’

Kelly tells of a moment in 2002 when this became clear to him. While making conversation with assorted engineers at a private party at Google HQ, he came to the realisation that we had been looking at our Silicon Valley overlords’ ultimate goals the wrong way round. Google was not really interested in applying AIs to make its core products like search better; it was OUR usage of search that was feeding Google’s AIs. Google was fundamentally an AI company.

Our usage feeds the AI. The more we use it the smarter it gets, and so on.

Today, smartphone data is obviously the key - about 90% of these devices are uniquely identifiable to an individual - so we can know almost the exact composition of a total audience, as well as where and when media is used. It’s also worth noting that the full-tilt expansion of personal media means the next decade promises new technologies with capabilities far beyond those of our smartphones.

The mainstreaming of machine learning capabilities will provide agencies with better building blocks for smarter campaigns, and constitutes something of a leap in marketing intelligence. But, as we’ve noted before, simply turbo-boosting the targeting and delivery of ads is not where the real potential for AI applications in communications lies. Even with the added benefit of population-level behavioural data and insights, we are still working with ‘narrow’ AIs.

Things start to get much more interesting when we can map human psychology onto the data.

We live in a modern world of complex social networks. We interact with hundreds of people each day, in both physical and virtual environments. Success in this environment means being well adapted to interacting with, and working alongside, other people.

And getting what you want from others.

Each of us has things that annoy us and things that make us happy. We have become very good at remembering other people’s preferences, and they, ours.

But we are limited by our cognitive capacity, and our own cognitive biases. It takes a huge amount of cognitive effort to remember other people’s preferences, but the pay-offs are there when we get it right.

This skill evolved long ago in our ancestral past - one of many adaptations that shaped our minds into the way they are because they enabled our stone-age ancestors to succeed at their (and our) principal concerns: surviving, reproducing, forming mutually beneficial alliances and looking after families.

When the anthropologist Robin Dunbar was trying to solve the problem of why primates (including humans) and other social species devote so much time and effort to this kind of ‘grooming’ behavior, he happened upon his eponymous number.

Dunbar’s number (around 150) describes a theoretical limit to the number of people with whom any individual is able to sustain a stable, meaningful social relationship.

150 is a best-case number, and even in the age of digital social networks the number of friends with whom you keep in touch, and groom, is likely to be significantly lower than Dunbar’s number.

But for brands, companies and institutions - for whom the Holy Grail is to sustain stable relationships with, keep in touch with, and groom literally millions of consumers - these are the areas where the really big opportunities lie for harnessing the tsunami of personally identifiable data and the power of machine learning and other AI applications.

The ability to manage relationships with, and remember the (often implicit and unarticulated) preferences of, millions of individuals with the same intimacy that tight-knit groups of humans bring to their own relationships is the bridge that finally connects the technology, the data and the creativity.
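As a thumbnail of what ‘remembering implicit preferences at scale’ might look like mechanically, here is a hypothetical sketch: weighted tallies of behavioural signals rolled up per individual, so a system can surface what each person seems to care about without them ever having articulated it. The signal names, weights and identifiers are all invented for illustration.

```python
# A hypothetical sketch of implicit preference memory at scale: behavioural
# signals are tallied per person, weighted by how strong a signal each action
# is, so the strongest inferred interests can be surfaced later.
from collections import defaultdict

SIGNAL_WEIGHTS = {"view": 1.0, "click": 2.0, "purchase": 5.0}  # invented weights

class PreferenceMemory:
    def __init__(self):
        # person_id -> {category -> accumulated score}
        self.scores = defaultdict(lambda: defaultdict(float))

    def observe(self, person_id, category, signal):
        """Record one implicit behavioural signal."""
        self.scores[person_id][category] += SIGNAL_WEIGHTS.get(signal, 0.0)

    def top_preferences(self, person_id, n=3):
        """Return the n categories this person seems to care about most."""
        prefs = self.scores[person_id]
        return sorted(prefs, key=prefs.get, reverse=True)[:n]

memory = PreferenceMemory()
memory.observe("consumer_42", "running shoes", "view")
memory.observe("consumer_42", "running shoes", "purchase")
memory.observe("consumer_42", "headphones", "click")
print(memory.top_preferences("consumer_42"))  # -> ['running shoes', 'headphones']
```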

To a degree, I’m carried along by Kelly’s optimism when he proposes, ‘There is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ. In fact, the business plans of the next 10,000 startups are easy to forecast: Take X and add AI.’

Take market research and add AI.

Take consumer psychology and add AI.

Take creativity and add AI.

So, in theory, machine learning and AIs do offer us much more than just the better mousetraps of targeting and delivery. The big opportunity lies in how these technologies will aid our understanding of what people value, why they behave the way they do, and how people are thinking (rather than just what they are thinking). This could bring new, previously hidden perspectives to inform both the construction of creative interventions and the understanding of exactly where, when and how those interventions will have the most power.

In ‘The Evolution of Language’, Dunbar also notes that ‘Whenever person-to-person interaction is a necessary feature of the process (as in the striking of deals), the old and trusted cognitive mindsets will come into play. Suspicion of the unknown and the fear of being duped by untrustworthy strangers will continue to dictate our decisions…the lack of personalized contacts means that individuals lack that sense of personal commitment that makes the world of small groups go round’.

Anyone who uses LinkedIn will be familiar with the words of the statistician W. Edwards Deming, which seem to pop up in my own feed at least twice a week.

‘Without data you are just another person with an opinion’.

Deming, quite rightly, demands the objective facts. And we have more facts and data at our disposal than at any time in human history.

However, to complete the picture, and to take the opportunity that data and technology offer creativity, I propose an addendum to Deming’s thesis.

Without data you are just another person with an opinion? Correct.

But, without a coherent model of human behavior, you are just another person (or AI) with data.