2023 has undoubtedly been the year that artificial intelligence dominated our headlines and head space, writes Ashleigh Steinhobel, Strategy Director at FutureBrand.
Whilst positive applications of AI are undoubtedly coming to the fore, the debate to date has largely revolved around whether the technology is ‘good’ or ‘bad’ – a sinister force to be stopped, or a positive advancement to be embraced.
Having feared the onset of almost every transformative technology, the marketing industry should know by now that this is simply another step in the path of progress. The most pertinent question is not, therefore, whether AI stays or goes, but how we ensure its application is positive, productive and in service of the greater good, rather than a destabilising force used to advance concentrations of power.
Regulation, of course, is key to this, but equally as important is the way in which AI is positioned, branded and communicated as a topic.
The state of play today
From wartime propaganda to COVID-19 safety advice, history is full of examples of how collective human sentiment and behaviour has been shaped by coordinated communication. And it makes perfect sense; we’re tribal beings, and language influences, guides and connects us.
The communication around AI to date has been fragmented and sporadic – moments of trepidation or excitement fueled by voices in academia and tech, all with individual opinions on how AI will help or hinder humanity. The messages are so confusing that even Marc Andreessen, the VC who famously said software will ‘eat the world,’ now says AI will save it.
Whilst transparency is of course helpful, the fear-mongering that unstructured discourse can engender is not. It’s important that the normalisation of AI in society is a strategic, rather than an organic, exercise – not least for the health of society, but for the success of the technology itself.
Learning from the past
Cast your mind back a couple of years and you’ll no doubt recall the hype surrounding the metaverse which, as a result of marketing-related failures, has since slid towards insignificance. While AI has already progressed far beyond the metaverse in terms of sophistication and acceptance, there are learnings we can gather from how the metaverse was – or wasn’t – positioned.
Firstly, the metaverse suffered from an overly ambitious but vague definition from day one. Objectively complicated concepts like NFTs, ‘interoperability’ and the blockchain hindered understanding, making it challenging for potential users – developers and consumer brands alike – to communicate their metaverse proposition effectively.
The identity of the metaverse was further fragmented by the different platforms competing for attention within it. Whilst the metaverse was intended to provide a seamless experience, the disconnect between systems – Meta, Roblox and Second Life to name a few – nullified the promise.
With privacy and security concerns left unaddressed, the metaverse also struggled to establish trust – a critical lever for traction. Data breaches, cyber-attacks and misuse of personal information eroded faith in the virtual world, while marketing efforts were unable to build it back up.
Finally, the metaverse’s association with complex and expensive virtual reality technology created a barrier to entry for many users. Its perception as a niche space reserved for tech enthusiasts deterred mainstream audiences, impeding the development of mass-market appeal.
What can we learn from this? What began as a promising realm of future possibility has been unsuccessful in gaining traction because the metaverse failed to establish four strategic pillars of communication: a clear definition, cohesive experience, trusted image and inclusive position.
The power of positioning
As strategic brand and communication experts, we know that perception of anything – a brand, a technology, an individual – is not forged by chance but rather by a series of strategic decisions and intentional framing. Crucially, we’re still at a point with AI where we can actively influence the creation of these pillars to accelerate positive perception.
For example, AI is currently seen as dissociated, machine-focused and artificial; is there an opportunity to bring AI closer to humanity, positioned as a product of – rather than threat to – our collective empathy? Or, through another lens, would de-humanising the technology reassure people with an enhanced sense of control?
We are past the point of deciding whether or not AI is here to stay; it is already transforming industries and redefining how we work. What’s crucial now is that we collectively harness the power of positioning in line with regulation to ensure AI’s applications are strategically communicated, designed and controlled.