At IIEX Europe, WARC’s Sam Peña-Taylor takes the temperature of the research community as it looks to the rise of AI: what might it do for us and what might it say about us as people and professionals?
Artificial intelligence is a story with the potential to become much bigger than tech, an inflection point that transforms how we live and work. Think back to January 2020, when COVID-19 lockdowns were a far-off story and we had no idea how different the world would look in a few weeks' or years' time. That kind of story.
The conversation at IIEX Europe (Amsterdam, March 2023) approached the topic by turns warily and enthusiastically. Among the insight community gathered here, there's a recognition that this innovation goes far beyond the likes of VR or blockchain: a broad suite of technologies with myriad applications, and the kind of unpredictable energy of a revolution whose character and next chapters are still too complex to fathom.
Unlike other media-adjacent technologies that have raised alarm – what effect is social media having on mental health, for example – AI is raising questions of a different order. What will be the impact on working life, given ChatGPT’s capacity for stylistically flexible prose and advanced degree-level answers?
At the far end of the spectrum, there are even concerns about an existential threat to humanity itself: the notion of a rebellion of self-replicating and self-improving machines has come into view more clearly with GPT-4’s ability to (co)create solid code and the simulated reality of AI-generated images. What can it create by the time we reach GPT-10?
We’re a little way off, but these ideas really do bear thinking about as the technology now runs off at a giddy pace. I tend to believe that our current problem as humans in the face of machine intelligence is philosophical – have we prioritised the critical capabilities to think through what’s coming our way? Are we too quick to anthropomorphise? Is a word predictor even that useful? But if it is, what might it help us do?
AI as a usefully constrained test site
Some of the most compelling use cases deploy modern AI’s neural network techniques but on a much smaller scale and for a much more specific set of tasks.
Henkel, a consumer goods company, worked with partner aimpower to create a virtual human 'brain' that could test thousands of e-commerce assets at scale, scoring their ability to capture and retain attention and how easy they are to process – in short, testing for creative effectiveness.
These assets – the images and product information that appear on e-commerce sites – are hardly the peak of human creativity, but they are numerous: key product visuals, clean packs, secondary images, shelf displays and in-store promotions. They are laborious and repetitive to make and are often created by different departments, which makes inconsistencies likely.
“We basically created a virtual consumer brain, that by being exposed to any stimulus out there can give back the feedback on those key effectiveness metrics,” explains Julia Saswito, CMO and head of strategy at aimpower. The ‘brain’ has been trained on a vast dataset of human behaviour to predict over 100 effectiveness KPIs.
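To make the underlying idea concrete, a "virtual consumer brain" of this kind is, at heart, a model trained on human response data that maps features of a creative asset to predicted effectiveness scores. aimpower's actual system is proprietary and far more sophisticated; the feature names, KPIs and weights in this minimal sketch are purely illustrative assumptions.

```python
# Hypothetical sketch: a multi-output scorer that maps asset features
# to predicted effectiveness KPIs. All names and numbers are invented
# for illustration; the real aimpower model is not public.

def predict_kpis(features, weights, bias):
    """Return one predicted score per KPI from a linear combination
    of asset features (a stand-in for a trained model's forward pass)."""
    return {
        kpi: sum(w[f] * features.get(f, 0.0) for f in w) + bias[kpi]
        for kpi, w in weights.items()
    }

# Illustrative (assumed) features of one e-commerce asset, normalised 0-1.
features = {"logo_size": 0.6, "text_density": 0.3, "contrast": 0.8}

# Illustrative weights, as if fitted on panel data of human responses.
weights = {
    "attention":       {"logo_size": 0.2, "text_density": -0.4, "contrast": 0.7},
    "processing_ease": {"logo_size": 0.1, "text_density": -0.6, "contrast": 0.3},
}
bias = {"attention": 0.5, "processing_ease": 0.6}

scores = predict_kpis(features, weights, bias)
for kpi, score in scores.items():
    print(f"{kpi}: {score:.2f}")
```

The design point is the scale advantage the article describes: once trained, such a scorer can evaluate thousands of assets in the time a human panel could review a handful.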
It also allows the brand to align KPIs in the testing environment, which means “everybody shares the same set of KPIs,” across the organisation, notes Julia Wang, global e-commerce manager at Henkel.
Of course, it doesn't just work on Henkel content; it can also test competitors' content in order to benchmark performance in market. The system is now used by 800 global staff across the organisation, she adds, and has tested a total of 6,000 assets. "No longer do we need the gut feeling approach because now we actually have hard data," says Wang.
This doesn’t replace shopper insight – the model needs training on real human data – but it does offer enormous scale as it begins to mix with generative AI systems. This poses some complicated questions, Saswito notes, not least the possibility for misusing intellectual property – problems that brands will need to start thinking about in the near term.
AI as a general tool
If we're serious about understanding AI systems, we have to engage with them and seek to understand the relationship we may develop with them. This is part of the argument of C-Space's associate director Asha Parmar, whose sober and thoughtful talk dealt with the negative potential: the perpetuation of biased patterns, and the deluge of fake news and fake images that we face in the short term.
The whole idea is to think through how you collaborate with these systems, not with the assumption that they are "going to steal all our jobs", but with the understanding that they are likely to become a significant part of professional life in ways that are yet unknown. So, "make sure that we're working with them, not for them or against them, to really get the best out of them". In short, we should move early to understand how our relationship with them might begin and then develop.
For me, Parmar's most salient point was not necessarily the often-brilliant new use cases that she talked through, but the generative possibilities that assistants like ChatGPT open up for human work: finding where to start on a piece of work or project, or sketching out early structures around which to build. To paraphrase Parmar, it's a lot better than the feeling of staring at a blank page.
By the same token, it's vital that we understand what such systems can't do. As a researcher, Parmar notes that humans still retain the advantage of being able to think in a "negative space," and explore what is not or what is not yet.
But both during the talk and speaking to Parmar afterward, I was sceptical of this idea. Writing has always been, for me, an extension of and conduit for thinking. To borrow from Auden: “How can I know what I think till I see what I say?” I don’t believe I’m alone in this.
My feelings about AI are complicated: morbid fascination melds with genuine excitement; fear of becoming a paperclip coexists with the idea of getting a personal R2-D2 with whom I can share all the dross and bullshit that runs through my mind before something worth telling other humans comes out – like a really clever pet (or at least a pet that knows how to string words together to sound clever).
Or, more usefully, like a colleague with nothing to do but attend to your thinking, unencumbered by ideas of its own and capable of lending some basic structure to the messiness of thought in a matter of seconds. But ultimately, these tools are here, and what comes next is as much about us and how we adapt as it is about them.