
Beware unsubstantiated AI-generated marketing claims
Arguably, large language model-based chatbots like ChatGPT have more in common with predictive text than with the calculator: they seek answers that look right rather than answers that are right – and this distinction matters.
While the Federal Trade Commission wants to ensure that companies claiming to use AI are actually using AI, there are wider ideas at play here. Namely: what does it mean for the accountability of advertising – especially at a time of heightened scrutiny of green claims, for instance – when text and image generation systems hit the mainstream, adopted by advertisers and agencies alike?
The message
In a blog post, the FTC warns marketers talking about AI to exercise caution. “False or unsubstantiated claims about a product’s efficacy are our bread and butter”, writes Michael Atleson, an attorney in the FTC’s advertising practices division. The piece asks a number of important questions that marketers should answer before talking up an AI product:
- Are you exaggerating what your AI product can do?
- Are you promising that your AI product does something better than a non-AI product?
- Does the product actually use AI at all?
- Are you aware of the risks?
This final point is key: “If something goes wrong – maybe it fails or yields biased results – you can’t just blame a third-party developer of the technology. And you can’t say you’re not responsible because that technology is a ‘black box’ you can’t understand or didn’t know how to test,” the blog says.
Substance and evidence
From a regulator’s standpoint, truthfulness and evidence are vital. In this regard, there are parallels with recent action taken against companies over their sustainability claims:
- In Australia, Mercer Super – one of the country’s leading pension funds – is being taken to court by the Australian Securities and Investments Commission over false sustainability claims.
- Before that, HSBC was taken to task by the UK’s Advertising Standards Authority over its de-contextualised net-zero claims.
- At a time of such scrutiny, the advent of a predictive text-generation technology that operates as a black box – showing no workings, and unable to be held personally accountable – is worth thinking about.
On bullshit
It’s important to consider the wider implications of AI systems, and especially the nature of the content they create, which is – technically speaking – bullshit.
This isn’t swearing for the hell of it: in 1986, the American philosopher Harry G. Frankfurt wrote an essay, On Bullshit, that fleshed out the concept of bullshit as distinct from lying – a liar must be aware of the truth in order to lie, whereas bullshit has no such relationship to truth.
For more on generative AI and its relationship to bullshit, there’s a great discussion on The Ezra Klein Show, the New York Times podcast, in which the host examines the societal implications of driving the cost of bullshit creation to zero.
Bottom line
Long story short: the age of the professional writer may be dying, but the glorious age of the proofreader has only just begun.
Sourced from the FTC, WARC, The New York Times