Artificial intelligence is growing in usage and capability, but it is still mostly a black box without guiding principles; Kinetic's Benjamin Lord argues that AI needs to develop ethical standards as it progresses.

AI is dramatically changing the way we find and buy products. Brands have traditionally relied on targeted communications to stand out to customers, but they need to adapt to a world where people will fulfil their needs by simply chatting with Alexa or scanning an object with their smartphone’s camera – no content involved.

People seem to be fine, for now, with this kind of transaction-directed AI. After all, whether it’s a voice assistant like Siri or a pattern-recognition technology like Shazam, it does the job for you and saves you time. It’s working so well that industry analysts expect visual and voice search to be generating 30% of e-commerce revenue by 2021, with half of the world’s businesses spending more on chatbots than on any other form of mobile app development.

But many believe that the application of AI in marketing is not just exploding – it’s also growing out of control.

The rise of Dark AI

Earlier this year, developers at Facebook had to shut down a pair of bots after discovering that the two machines had created their own unintelligible language, which they were using to exchange messages with one another – a cautionary tale of what the future might hold. It may not have been an imminent threat to mankind, but it was a spine-chilling reminder of machines’ ability to adapt along with the information they process.

And let’s not forget that it was an army of chatbots that came under fire for rigging the U.S. election, using social media to proliferate fake news and hate messages. More recently, AI algorithms have been caught suggesting bomb-making components to Amazon shoppers and reinforcing gender inequalities in job postings.

People are starting to realize that AI is everywhere, and that its great benefits may not be the whole story. As concerns over advertising technology go mainstream, warning signs around the ethics of AI may prove to be an obstacle to its progress.

AI needs ethical standards

There’s a big difference between personalizing content to make it useful and manipulating people’s psychology. Attempts to filter fake news or anti-Semitic ads from our feeds won’t cut it.

Mattel is the first real casualty of this pushback. The toymaker was recently forced to cancel plans for an AI-powered babysitter called Aristotle after complaints that “young children should not be guinea pigs for AI experiments” poured in from child-advocacy groups, psychologists, and politicians alike. So, while most advertising issues with AI so far have revolved around data privacy, we can expect more “psychological” regulation from bodies like the FCC to be forthcoming.

Humans, of course, are going to set the limits on AI – at the end of the day, we can always turn the machine off. But rather than wait, marketers could take this opportunity to own the ethical narrative on AI by establishing their own standards for its principled use today. In fact, that has been the message all along: everyone from Elon Musk to Bill Gates and John Giannandrea has warned about AI’s inevitability, while cautiously encouraging the industry to make sure it is implemented in the right way.

Predictive ethics

The problem may contain its own solution: with AI’s ability to gather and apply data that captures human sentiment, the technology itself will be able to predict at what point its application goes too far.

MIT scientists recently created a platform that gathers human perspectives on the moral decisions made by AI systems such as self-driving cars. The first trials modelled rather morbid scenarios, such as whether a car should crash into five pedestrians or instead swerve into a cement barricade, killing its sole occupant. But the idea is that, over time, as AI accumulates data and understands its deeper meaning, it will reach a point where it can make and execute moral decisions with greater efficiency than a human.

This kind of perspective extends beyond AI to any number of social issues, which means advertisers and technology companies can – and should – show society that this online data mining is being used not only commercially, but also purposefully.