Barely a month into 2023, AI is – after years of promise – the breakout technology of the year. Analysts are looking across industries for signs of disruption as OpenAI’s ChatGPT steals the thunder of much larger players like Google, whose dominance in search suddenly appears under threat. But as the technology hits the mainstream, ethical and commercial questions abound.

Why it matters

Have you tried it? Both ChatGPT and OpenAI’s image-generation model, DALL-E, are breathtaking in their capacity and potential.

The young company’s central place in the AI conversation has left Google in an awkward position, given that its own large language model, PaLM (Pathways Language Model), is said to be three times larger than OpenAI’s GPT-3. Many are wondering when Google will pull the trigger as an upstart eyes its core business.

Major companies are now debuting their own models and interfaces. This week, sources close to the matter told Reuters that Chinese search giant Baidu will debut an AI chatbot service as a standalone product, with the potential to integrate the technology into search.

The problem is that Google has a lot more to lose. It risks not only reputational damage at a time when politicians and regulators around the world are breathing down its neck, but also undermining the very core of its biggest moneymaker: Google search. And it’s not the only company juggling the complexities and opportunities of this new and incredible technology.

The difference between thinking and AI

Magicians practice sleights of hand, not sorcery. In a similar way, it’s important to remember that ChatGPT and other large language models aren’t actually thinking; they are producing probabilistic responses to prompts. That doesn’t mean they can’t be useful, but it’s vital to remember that these systems aren’t calculators.

“[I]t is trained to produce plausible text,” explains Princeton computer science professor Arvind Narayanan in an interview with The Markup. “It is very good at being persuasive, but it’s not trained to produce true statements. It often produces true statements as a side effect of being plausible and persuasive, but that is not the goal.”
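To make this concrete, here is a minimal sketch in Python of the kind of next-token sampling a language model performs. Everything in it is invented for illustration – the prompt, the candidate continuations and their probabilities – and real models operate over vast vocabularies, but the principle holds: the model samples what is statistically plausible, not what has been verified as true.

    import random

    # Toy next-token distribution for the prompt "The capital of Australia is".
    # The probabilities reflect how often each continuation might appear in
    # training-like text, not whether it is true. (Illustrative numbers only,
    # not taken from any real model.)
    next_token_probs = {
        "Canberra": 0.55,   # correct, and common in written text
        "Sydney": 0.35,     # wrong, but plausible and frequently written
        "Melbourne": 0.10,  # wrong, but still plausible
    }

    def sample_next_token(probs):
        """Sample one continuation in proportion to its probability."""
        tokens = list(probs)
        weights = list(probs.values())
        return random.choices(tokens, weights=weights, k=1)[0]

    # Run the "model" a few times: it usually says Canberra, but sometimes
    # Sydney, because plausibility, not truth, is what the distribution encodes.
    for _ in range(5):
        print(sample_next_token(next_token_probs))

A calculator that gave the right answer only 55 per cent of the time would be thrown away; a language model behaving this way is working exactly as designed.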

In effect, good-looking answers are not necessarily the right answers. Speaking to the FT recently, Rebecca Finlay, CEO of the research institute Partnership on AI, explained that people often “over-trust the predictions that come out of AI systems and models” in professional settings – partly because those predictions look like incredibly professional answers.

As a result, the question of reliability grows knottier once people are paying for a service, which, despite disclaimers, carries a certain burden of expectation.

OpenAI steps into serious business

Last week, reports indicated that OpenAI will introduce a premium version of ChatGPT for $42 a month. While the company hasn’t confirmed ChatGPT Professional, a waitlist link posted on OpenAI’s official Discord server surveyed prospective customers on pricing and usefulness. For the monthly fee, professional subscribers would likely receive faster service – even during busy times when free users experience “throttling” – and higher message limits.

It marks the initial commercialisation of a platform whose open demo catapulted the technology into the mainstream. But it also brings the company into the sort of new and more complicated territory that Google is wary of treading.

Early complications can be seen among the publishers that have integrated similar technologies or, as in the case of BuzzFeed, are planning to. CNET, a tech site, made headlines when it began publishing some articles – mostly financial explainers – using an AI language model. Most went unnoticed, but a few sparked accusations that parts of these pieces approached plagiarism, lightly rephrasing material published elsewhere.

Writing about the experience, the publication’s editor-in-chief, Connie Guglielmo, explained that the AI – not ChatGPT – was intended to give its existing writers more time to “test, evaluate, research and report in their areas of expertise”. After factual errors and evidence of plagiarism came to light, the publication started a full audit of the published material, she added.

The cost of AI

Google is rightly concerned about the profound societal impact that such generative technologies could have – from fake news to deepfakes to nefarious applications we haven’t even thought of yet – but the other elephant in the room is cost.

Morgan Stanley’s analysts, quoted in the FT, reckon that answering a search query with natural language processing would cost roughly seven times as much as an internet search does now. It will be fascinating to see how Microsoft squares this circle now that it has announced it will begin integrating OpenAI’s technology into its Bing search engine.
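To see why that multiplier matters at Google’s scale, here is a hedged back-of-envelope sketch in Python. The per-query cost and daily query volume below are assumed round numbers for illustration, not disclosed economics; only the sevenfold multiplier comes from the estimate quoted above.

    # Back-of-envelope cost comparison. All inputs are assumptions except the
    # ~7x multiplier cited by Morgan Stanley's analysts in the FT.
    COST_PER_SEARCH_USD = 0.002       # assumed cost of a conventional search
    LLM_COST_MULTIPLIER = 7           # the roughly sevenfold figure cited above
    QUERIES_PER_DAY = 8_500_000_000   # commonly cited ballpark for Google search

    baseline_daily = COST_PER_SEARCH_USD * QUERIES_PER_DAY
    llm_daily = baseline_daily * LLM_COST_MULTIPLIER
    extra_annual = (llm_daily - baseline_daily) * 365

    print(f"Baseline daily cost: ${baseline_daily:,.0f}")        # ~$17m
    print(f"LLM-backed daily cost: ${llm_daily:,.0f}")           # ~$119m
    print(f"Extra cost per year: ${extra_annual / 1e9:,.1f}bn")  # ~$37bn

Even with generous error bars on every input, the extra cost lands in the tens of billions of dollars a year, which is why how to monetise AI-backed search is as pressing a question as the technology itself.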

Further evidence of this comes from Meta, a company on the less-visible fringes of this conversation, whose recent cost increases were largely put down to its metaverse-shaped profligacy. 

However, a deeper reading finds a company ploughing money into the data centres that will allow it to power the AI-backed tools it hopes will solve its recent data availability and recommendation engine problems. According to the Wall Street Journal, its current computing capacity is being pushed to the limit, with engineers in other areas asked to find ways to lighten the computing loads of the applications they are working on.

What it means for Google

Though there is a certain accelerationist strain online made up of people who think Google should just hurry up and release the thing, a subtler reading suggests that it is right to pause and take stock. There are several core reasons:

  1. Reputational damage: Google is already under pressure over its ad tech unit and the sheer dominance of its search advertising. Launching an AI of Promethean potential might not be in the best interests of the business, given the technology’s imperfections and the sheer controversy surrounding it.
  2. Data ownership is far from settled: It will be worth watching Getty’s lawsuit against the maker of Stable Diffusion very closely. It’s one thing to train an AI on widely available datasets under ‘fair use’ rules, but quite another to commercialise the results. It’s sensible to let someone else answer these difficult questions in public first.
  3. Cost: If natural language processing is seven times as expensive to run as a normal search – and far harder to advertise against – then Google is contemplating a very different business, potentially as an extension of its subscriptions or as a B2B platform through which other providers use its technology. Ultimately, this isn’t just a cool gadget: should it become commercially important, it would demand a fundamentally new understanding of the business.
  4. Controversy: There is already scepticism, and it’s possible that the reaction to AI-generated content stops being one of surprise and wonder and becomes one of anger at the perception of cheap cost-cutting. Some potential customers shown generative AI for content or for advertising may come away less amazed than aware that the cheapest possible message is all they’re worth to this brand.

This may well go in a totally different direction, but these are big, complicated ethical problems sitting squarely between the sciences and the humanities. Understanding the implications will be crucial.