With the right algorithms and training data, machine learning can deliver a more accurate and comprehensive understanding of attention across all touchpoints.

Attention has become a paramount concern for advertisers and agencies, rendering traditional metrics such as reach and frequency outdated in their standard forms. As a result, companies like Amplified Intelligence, Adelaide and Lumen have emerged, offering better attention measurement and optimisation solutions. By layering attention on top of reach, their objective is to turn it into a new performance metric that can drive more growth for brands.

The benefits for brands are evident: increased impact, better use of quality inventory and a reduced carbon footprint from the countless impressions that currently go unseen. However, despite advancements in attention measurement tools, the industry still lacks a standardised approach across all channels.

Currently, eye tracking is one of the most popular methods for measuring attention. However, it faces challenges when it comes to benchmarking across different devices and contexts. For instance, how does attention differ when combined with second-screen viewing? How does attention vary between train commuters and those at home? And ultimately, how do we recreate natural viewing situations while utilising eye-tracking technology? We need to overcome these challenges if we are to truly achieve a level playing field in attention measurement.

When AI and attention come together

Artificial Intelligence (AI) has entered a renaissance of its own. The great computational chip shortage of the last few years is over, and computing power that was previously reserved for industry giants like Microsoft and Google is now slightly more accessible in the form of specialised graphics processing units (GPUs). OpenAI, for instance, runs its models on vast clusters of such graphics cards.

While some argue that AI has already passed the Turing Test, the classic benchmark for machine behaviour indistinguishable from a human's, its potential extends far beyond this accomplishment. Instead of striving to measure real-world behaviour with techniques such as eye tracking, an endeavour that often proves challenging and inconveniences participants, why not simulate the real world itself?

Neural networks, particularly those built for computer vision, hold promise as a solution to the attention problem. By training neural networks to analyse societal behaviour and visual stimuli, we can gain deeper insights into consumer attention and engagement.

Such neural networks could process and interpret vast amounts of visual information using learned heuristics, enabling us to understand attention patterns across different contexts, devices and media channels. Imagine virtual living rooms generated in an instant, with creative placed in situ and rapid results delivered based on neural cognition. Such simulations could quickly run more executions than any human-based study.
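To make the idea concrete: a trained network would map a rendered scene to a predicted attention map, then score how much attention an ad placement captures. The toy sketch below is purely illustrative, not any vendor's model; it stands in for the trained network with a simple centre-surround contrast heuristic over a synthetic frame, and the "virtual living room" is just random noise with a bright hypothetical ad patch.

```python
import numpy as np

def toy_attention_map(frame: np.ndarray, k: int = 5) -> np.ndarray:
    """Stand-in for a trained saliency network: local contrast
    (centre vs. blurred surround) as a crude attention proxy."""
    pad = np.pad(frame, k, mode="edge")
    surround = np.zeros_like(frame, dtype=float)
    # Box-blur the surround by averaging a (2k+1) x (2k+1) window.
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            surround += pad[k + dy : k + dy + frame.shape[0],
                            k + dx : k + dx + frame.shape[1]]
    surround /= (2 * k + 1) ** 2
    return np.abs(frame - surround)

def region_attention_share(att: np.ndarray, region) -> float:
    """Fraction of total predicted attention falling inside an ad slot."""
    y0, y1, x0, x1 = region
    return att[y0:y1, x0:x1].sum() / att.sum()

# Synthetic 'virtual living room' frame: mid-grey noise plus a bright ad patch.
rng = np.random.default_rng(0)
frame = rng.normal(0.5, 0.05, size=(120, 160))
frame[40:70, 60:110] = 1.0          # hypothetical ad placement
att = toy_attention_map(frame)
share = region_attention_share(att, (40, 70, 60, 110))
print(f"ad region captures {share:.0%} of predicted attention")
```

In a real system the heuristic would be replaced by a network trained on eye-tracking data, but the scoring loop around it stays the same shape.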

With the right algorithms and training data, machine learning can deliver a more accurate and comprehensive understanding of attention across all touchpoints, whether media is consumed solus or alongside other channels.

From potential to (virtual) reality

Neural network-based social simulations and computer vision could become reality quickly. Stanford University researchers recently employed ChatGPT to successfully simulate a small society, tracking the flow of information among virtual subjects and their resulting actions. This technology could be harnessed to understand how consumers interact with media. At the same time, companies such as IBM have been experimenting with symbolic AI, an approach to artificial intelligence that encodes knowledge in explicit, human-readable rules and symbols.

Automated recognition technology has been used before, particularly in sports sponsorship, to count the number of times a logo on a football shirt or an F1 car appears on screen. Neural networks allow advertisers to go one step further and simulate cluttered environments and how they might affect recall.
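The logo-counting step itself is standard computer vision. A minimal sketch, using synthetic data rather than real broadcast footage, counts the frames in which a known logo template appears via normalised cross-correlation template matching:

```python
import numpy as np

def logo_present(frame: np.ndarray, template: np.ndarray,
                 threshold: float = 0.9) -> bool:
    """Slide the template over the frame and report whether any
    position matches above the correlation threshold."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.linalg.norm(t)
    fh, fw = frame.shape
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            patch = frame[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * tnorm
            if denom > 0 and (p * t).sum() / denom >= threshold:
                return True
    return False

rng = np.random.default_rng(1)
logo = rng.random((8, 8))                 # stand-in 'shirt logo' patch
frames = [rng.random((32, 32)) for _ in range(5)]
for i in (1, 3):                          # paste the logo into two frames
    frames[i][10:18, 12:20] = logo
count = sum(logo_present(f, logo) for f in frames)
print(f"logo detected in {count} of {len(frames)} frames")
```

Production systems use learned detectors that survive occlusion, blur and perspective, but the output is the same: an on-screen exposure count per frame.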

Data from previous eye-tracking studies would allow the AI to mimic how content is viewed: the impact of colours and movement, and how consumers read or respond to content. In practice, this means an agency or tech provider could place ads across different advertising units in situ, analysing performance and benchmarking over thousands of iterations to get closer to lived experience.
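That benchmarking loop might look like the hedged sketch below. The attention model is faked with a random draw whose per-unit means are illustrative placeholders, not real benchmarks, but the structure mirrors the idea: thousands of simulated viewing sessions per ad unit, then a ranked comparison.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder attention parameters per hypothetical ad unit:
# (mean seconds of attention, standard deviation). Illustrative only.
UNITS = {
    "instream_video": (2.4, 0.8),
    "mobile_banner":  (0.6, 0.3),
    "digital_ooh":    (1.5, 0.7),
}

def simulate(unit: str, iterations: int = 10_000) -> np.ndarray:
    """Stand-in for a neural simulation: draw attention seconds for one
    ad unit across thousands of virtual viewing sessions."""
    mean, sd = UNITS[unit]
    # Clip at zero: a session cannot deliver negative attention.
    return np.clip(rng.normal(mean, sd, size=iterations), 0.0, None)

results = {u: simulate(u) for u in UNITS}
for unit, secs in sorted(results.items(), key=lambda kv: -kv[1].mean()):
    print(f"{unit:15s} mean={secs.mean():.2f}s  "
          f"p90={np.percentile(secs, 90):.2f}s")
```

Swapping the random draw for a trained model turns this from a toy into the cross-channel benchmark the article describes, with every unit scored on the same scale.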

The applications of AI in solving the attention problem are vast. Advertisers and agencies will soon be able to leverage AI simulations to optimise their campaigns, ensuring that their messages receive the desired level of attention and impact. By combining computer vision with AI, we can measure attention in a standardised manner, eliminating discrepancies caused by varying devices and contexts. This standardised approach will pave the way for a single reliable attention currency. 

In summary, while traditional metrics are becoming obsolete, companies are stepping up to deliver better attention measurement and optimisation tools. The next phase will focus on how we can simulate and understand real-world consumer behaviour, providing valuable insights for advertisers. AI has the potential to transform how attention is measured, allowing for more impactful campaigns and a better overall advertising experience. As we enter this new era of AI-driven attention measurement, the future looks promising for advertisers, agencies and consumers alike.