It feels impossible to avoid AI in marketing circles at the moment. Sam Peña-Taylor, present at a recent Meta event, takes the tech temperature.
We have been here before. At least once with artificial intelligence (at the dawn of the programmatic age), but also with the metaverse, extended/mixed/virtual reality (too many times to count), and voice assistants, to name a few.
Watchers of the ad-adjacent technology world might recognise this phase: giddy conflations of separate terms, comfort in the blur of ‘opportunities’ and ‘potential’, extensive use of ‘we’re seeing’. Last year, this was Web3’s fuzzy manifestation; this year, it’s something much more interesting and more dangerous.
Technological breakthroughs have suffered from the legacy and televisual magnificence of the 1969 moon landings, a globally broadcast culmination of human achievement. But this is probably the wrong way to envisage the real breakthroughs, said Meta’s Group Director Martin Harbeck at an event in Central London; instead, they are more like Tim Berners-Lee’s invention of the World Wide Web, he suggested. At the time, some of Berners-Lee’s colleagues didn’t think much of the idea; paradigm shifts can be difficult to perceive just before they happen. Then everything changes.
Artificial intelligence finally looks smart in ways that non-specialists can appreciate. ChatGPT exploded onto the scene late last year, garnering 100 million users in just two months and making it the fastest-growing web app ever.
Suddenly, here was a chatbot that wowed users with its smartness, its wit, its problem-solving uses (this week, a colleague here at WARC figured out a couple of fantastically useful Excel macros using the technology). Even without access to the very latest information – its GPT-3.5 language model is trained only on data up to 2021 – it excels at the kind of answers that are difficult to find through conventional search and that benefit from a personal touch.
This usefulness sparked an arms race: long-term research projects, previously held back for study because of their potential impact, began shipping as actual products for people to use. Microsoft, which had invested in ChatGPT developer OpenAI some years earlier, stepped up its investment in the company before revealing its AI-enabled Bing to the world. Google, fearing for search, its core money-maker, launched Bard soon after.
Both companies suffered minor embarrassments when the competing bots said something wrong or even invented sources – a phenomenon that the AI field terms a ‘hallucination’. At one point, Bing went downright weird. These systems remain in their infancy, but they are likely to tighten up as well as grow in potency.
But like the moon landings, these systems are the fruits of long-term investments by some of the biggest companies on the planet, many of which have already integrated these systems into their operations. “AI is fundamental to what we do,” said Angie French, director of marketing science at Meta, at the same event. But now there are consumer-facing applications as well.
AIs are only called AIs until they work, Harbeck observed. Once they work, they become image recognition engines, speech-to-text systems, filters. Most are of direct internal use. Meta’s Cicero AI emerged as a capable player of Diplomacy, a conversational strategy game, and the lessons learned in games feed into the company’s major drive to make AI the basis of its post-ATT ad strategy. More recent expansions into generative AI, such as its Make-A-Video system and the establishment of a top-level product group, point to how critical it has become to produce generative apps for people to use.
Of course, this isn’t a simple fix: these systems are very costly to run, with some estimates suggesting a natural-language search is seven times more expensive than a traditional one. And the problems don’t stop there: generative AI systems of this sort are prediction machines rather than calculators, writing plausible answers derived from the data they have been fed – and imperfect data often leads to imperfect answers. Yet is that so different from what most of us do?
These systems have not so much passed the Turing Test as skipped it, with many users willingly anthropomorphising them. Notice how often people talk or write about ‘asking’ ChatGPT to do something, rather than ‘instructing’ it. There is an expectation of agency, of selfhood, even of will. Where this leaves virtual influencers, or synthetic human presences, is a startling question.
At a practical level, potential users of generative systems need to start grappling with the training data that has nourished them, and with who owns it. Lawsuits are already in motion asking what counts as lawful fair use of content once the AI-processed result is commercialised. And that is quite aside from what happens to the internet if the traffic-and-ads model of monetisation crumbles.
Ultimately, this technology is at all stages of the hype curve simultaneously, says Stuart Bowden, Global Chief Strategy & Product Officer at Wavemaker. It has already revolutionised media planning, taking an incredibly laborious process and not only speeding it up but improving its quality. A piece of work that once took three graduates three weeks now happens in days.
But this is dangerous stuff. When Bowden was asked about the people strategy – whether these systems were indeed the job stealers of nightmares – a chill passed through the room. Many of the questions now emerging will be difficult to answer.
Now that predictive algorithms have come for the generation of content – text, speech or video – they join the already successful distribution layer of media. The use cases are clear, threatening to obliterate the fun skills of work and leave us all just doing emails, unless we are truly breaking new ground with genuinely new ideas about what it means to be human. Everything else can simply be prompted and generated in seconds.