Marc Guldimann, founder of Adelaide Metrics, explores the ways attention metrics can inadvertently become misaligned with a marketer’s true intention, looking at the issue through the lens of Goodhart’s law.
A freshly minted MBA is managing his first sales team at the Lemon Light Bulb factory. After reviewing the weekly metrics, he notices a trend: top bulb sellers send more emails than their struggling colleagues; in fact, there is a strong correlation between emails sent and sales output. Eager to put his training in incentives to work, our new MBA restructures the bonus program so that a quarter of total compensation is tied to the number of emails sent.
In the weeks that follow, email volume skyrockets, but sales plummet. Low-quality emails flood clients’ inboxes, and eventually, LemonLightbulbs.com ends up on lists of known spammers, sabotaging communication avenues for even top sellers. Now, the count of emails sent is no longer a valuable metric. This is a quintessential example of Goodhart’s law: when a measure becomes a target or a KPI, it loses its value as a useful measure.
Goodhart’s law and attention metrics
This same story is playing out with attention metrics in digital advertising. Here’s how and why it’s happening and what we can do to move forward.
Around 2015, researchers and vendors, faced with the near-complete gaming of viewability, began working on technology to measure the attention paid to advertising and its resulting impact. Unsurprisingly, campaigns that captured more attention also drove better business outcomes. Subsequent studies found consistent results: longer durations of attention on individual impressions were strongly correlated with increased impact throughout the marketing lifecycle, from awareness to sales.
So, when it came time to create measures, many vendors and agencies gravitated towards duration-based attention metrics, or DBAMs for short, which measure or predict attention in seconds.
Just like email volume at Lemon, seconds of attention seemed like a logical choice: if a longer duration of attention correlates with better outcomes, one should clearly measure media quality by duration of attention. And just as with email volume, once DBAMs were deployed as a KPI, things stopped working.
Collateral optimization and the attentive audience paradox
As users of DBAMs quickly discovered, there are several ways to increase duration of attention that aren’t always in the best interest of advertisers.
When advertisers optimize impressions to increase the duration of attention, media isn’t the only input affected – creative and audience levers are also pulled, often inadvertently.
Optimizing creative to maximize attention can create a bias towards more salacious or “brain-tickling” content, which might not drive the most impact. Luckily, most advertisers don’t use dynamic creative, so the risk to campaign performance from optimizing creative to attention on the fly is low.
Audiences, which are selected at the impression level, are a much more treacherous surface for DBAMs. If every impression is optimized towards the audience that will pay attention the longest, several unintended, and potentially unappealing, biases are introduced. For instance:
- People over 25 spend 70% longer viewing ads in the feed than their younger counterparts, according to research from Meta.
- The amount of attention to TV ads increases with every incremental exposure up to 25, according to data from TVision.
- Intoxicated people take 7–33% longer to read content, according to often-cited research from Sobell and Maylor.
This means that if you target the impressions with the longest duration of attention, you are likely to introduce bias – in this case, towards an older, intoxicated, and over-frequencied audience.
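The selection effect behind these bullet points can be seen in a toy simulation. Everything below is hypothetical: the model simply assigns longer dwell times to older, higher-frequency impressions (loosely echoing the Meta and TVision figures above), then “optimizes” by buying only the top decile of impressions by duration. The skew in who gets reached falls out immediately.

```python
import random

random.seed(0)

# Toy model (all multipliers hypothetical, loosely inspired by the stats above):
# older viewers and higher-frequency exposures dwell longer on ads.
def simulate_impression():
    age = random.choice(["under_25", "over_25"])
    frequency = random.randint(1, 25)          # prior exposures to the ad
    base = 1.0
    if age == "over_25":
        base *= 1.7                            # ~70% longer dwell in-feed
    base *= 1 + 0.02 * frequency               # dwell grows with each exposure
    dwell = base * random.uniform(0.5, 1.5)    # seconds of attention, with noise
    return {"age": age, "frequency": frequency, "dwell": dwell}

impressions = [simulate_impression() for _ in range(10_000)]

# "Optimize to duration": buy only the top 10% of impressions by dwell time.
top = sorted(impressions, key=lambda i: i["dwell"], reverse=True)[:1000]

share_over_25_all = sum(i["age"] == "over_25" for i in impressions) / len(impressions)
share_over_25_top = sum(i["age"] == "over_25" for i in top) / len(top)
avg_freq_all = sum(i["frequency"] for i in impressions) / len(impressions)
avg_freq_top = sum(i["frequency"] for i in top) / len(top)

print(f"over-25 share: {share_over_25_all:.0%} -> {share_over_25_top:.0%}")
print(f"avg frequency: {avg_freq_all:.1f} -> {avg_freq_top:.1f}")
```

No optimizer has to “decide” to chase older, over-exposed viewers; simply ranking impressions by dwell time does it automatically, because those attributes correlate with duration.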
Last year, we coined the term The Attentive Audience Paradox to describe this phenomenon in an article on WARC. Its effects are apparent in recent field studies, including a 2022 PwC audit of a popular DBAM, which found that viewability proved more effective than “high attention” across many metrics, most notably in spontaneous awareness across different marketing programs. (The advertising studied was from Tesco.) While attention had its moments in that research, viewability overall seemed to drive more efficient brand outcomes.
No currency for bad metrics
Thanks to increased precision and granularity, attention metrics have the potential to replace viewability as the dominant media quality currency. This transition is particularly compelling since such currencies sculpt incentives for sellers, and the current viewability-driven set has led to a glut of low-quality supply.
Unfortunately, DBAMs aren’t suitable as media quality currencies. Research and common sense indicate that the duration of attention to ads is heavily influenced by how interesting and relevant the creative is.
Even if media sellers were willing to adjust guarantees based on demographic targeting (e.g., charging more per second to reach a younger audience), they would never agree to swap viewability for a DBAM given the advertiser’s control over creative.
Said another way, smart publishers won’t guarantee their audiences will pay attention to a crappy ad.
The path forward: The probability of attention
To create an attention metric that sellers can confidently guarantee, the effects of audience and creative need to be stripped out. Conveniently, this also mitigates the effects of The Attentive Audience Paradox.
Attention to advertising is ephemeral. It waxes and wanes over the arc of an impression: the media placement creates an opportunity for attention, which the creative captures and holds for as long as the audience likes what it is seeing. Thus, measures of placement quality should focus on the probability of capturing attention rather than on how long it is held.
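As a rough illustration of the distinction, here is a hypothetical sketch contrasting a duration-based metric with a probability-of-attention metric for two imaginary placements. None of this reflects any vendor’s actual methodology; the threshold and dwell times are invented. It only shows how a single long-dwelling outlier can make a mostly-ignored placement look better on a DBAM, while the probability metric still favors the placement that consistently captures attention.

```python
ATTENTION_THRESHOLD = 1.0  # seconds of dwell that count as "attention captured"

def duration_metric(dwells):
    """Mean seconds of attention per impression (a DBAM)."""
    return sum(dwells) / len(dwells)

def probability_metric(dwells):
    """Share of impressions where any attention was captured."""
    return sum(d >= ATTENTION_THRESHOLD for d in dwells) / len(dwells)

# Placement A: consistently captures attention, if only briefly.
placement_a = [1.2, 1.1, 1.3, 1.0, 1.2, 1.1]
# Placement B: usually ignored, but one long-dwelling viewer skews the mean.
placement_b = [0.1, 0.2, 0.1, 0.1, 0.2, 7.0]

print(duration_metric(placement_a), probability_metric(placement_a))  # ≈1.15, 1.00
print(duration_metric(placement_b), probability_metric(placement_b))  # ≈1.28, ≈0.17
```

The DBAM ranks placement B higher even though five of its six impressions were ignored; the probability metric ranks placement A higher because it reliably created an opportunity for attention, which is what the seller actually controls.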
Most importantly, by using attention data to make better media investment decisions rather than as a campaign KPI, media buyers can tune algorithms to drive the best outcomes and avoid the curse of Goodhart’s law.