For marketers, a deeper understanding of the psychology of misinformation could be valuable for stopping its spread, in turn minimising consumer vulnerability and maximising the impact of ad spend, argues Emma Lacey, SVP EMEA, Zefr.

Misinformation has long been an issue for online advertising. And with the rapid growth of generative AI sparking concerns around a mass influx of misleading content and fake websites, marketers will need to be more careful than ever when deciding where their ad spend is headed.

Before tackling a problem, you must first understand it. A wide range of scientific studies have been conducted into how our brains respond to deceptive, untruthful information.

What risks does misinformation pose for marketers?

Being linked with misleading information can be a huge problem for marketers and their brands, no matter how big or small. Seeming to endorse false messaging can lead to unfavourable headlines, as Nike, Amazon and Ted Baker found out when their ads appeared next to COVID-19 conspiracy theories during the pandemic.

Appearing to associate, even accidentally, with false content can lead to significant reputational damage, with 63% of consumers agreeing that misinformation has a negative impact on their perception of brands. This reputational damage can also impact a brand's bottom line, with 50% of consumers admitting they are less likely to purchase from a brand that appears to support misinformation.

With serious money on the line as a result of unsafe ad placements, it is no wonder that incidents like these have served as a catalyst for brands to take online safety more seriously in recent years, with 67% of digital advertisers prioritising brand safety last year.

What can marketers learn from the science behind misinformation?

Why are we susceptible to misinformation? Studies have pointed towards three key factors: cognitive bias, the illusory truth effect and motivated reasoning. Cognitive bias – the propensity for the brain to filter events according to personal experiences – encourages us to believe information that aligns with our beliefs and attitudes, regardless of accuracy. This often leads to logic and reasoning being overridden in favour of what we believe to be true.

This is compounded by the illusory truth effect. This phenomenon means our brains are much more likely to believe false information following repeated exposure to it – which is almost guaranteed to happen across the online sphere and social media. With 33% of the UK public admitting to using social media as their news source, the illusory truth effect is likely to increase both the spread of, and belief in, misinformation.

Defined by the American Psychological Association as “reasoning toward a desired conclusion rather than an accurate one”, motivated reasoning occurs when the desire for a particular conclusion overrides rational assessment, leading people to believe content before determining whether it is actually truthful.

Marketing is built upon beliefs, perceptions, and behaviours. To reduce the impact of misinformation, the industry needs to employ an insight-led approach, with advertisers understanding not only how to grab consumers' attention, but also why they might be drawn to harmful content.

What steps can marketers take to combat the problem of misinformation?

Taking the psychology behind consumer vulnerability to fake news into account, here are some actionable steps for marketers to take:

1. Try to get ahead of misinformation

The issue of misinformation is persistent, but marketers can get ahead of the curve by taking the right precautionary measures. These can include risk assessments and scenario planning to ensure there is an effective procedure in place in the event of becoming associated with misinformation. This will allow brands and marketers to act quickly and efficiently, potentially reducing – if not preventing entirely – any damage to their wider reputation and online presence.

2. Stop using block lists to determine ad placement

Brands can find themselves in deep water by appearing next to untruthful, potentially dangerous content, so it is understandable that most adopt a cautious stance when it comes to ad placements.

However, legacy brand safety solutions such as keyword block lists are simply too blunt to be effective in today's video-rich online environments. Not only can they let potentially unsuitable ad placements slip through, but they can also lead to the over-blocking of suitable environments.
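To illustrate the bluntness of this approach, here is a minimal sketch of a naive keyword block list (the keywords and function are hypothetical, not any vendor's actual system). Because it matches words with no understanding of context, it blocks safe content that merely mentions a flagged word, while misleading content that avoids the keywords sails through.

```python
# Hypothetical keyword block list - matches words, not meaning.
BLOCK_LIST = {"shooting", "attack", "virus"}

def keyword_blocked(page_text: str) -> bool:
    """Block the placement if any blocked keyword appears anywhere on the page."""
    words = {w.strip(".,:!?").lower() for w in page_text.split()}
    return not BLOCK_LIST.isdisjoint(words)

# Over-blocking: a perfectly brand-safe entertainment story is rejected.
print(keyword_blocked("Behind the scenes: shooting the new blockbuster film"))  # True

# Under-blocking: misleading health content with no flagged keyword slips through.
print(keyword_blocked("This miracle cure is what doctors refuse to discuss"))   # False
```

The example shows why context-aware classification (of the video, page, or surrounding content as a whole) is needed rather than word-level matching.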

3. Embrace AI-driven solutions that meet industry standards

While the rise of AI could lead to an influx of potentially unsafe content, it can also be harnessed to tackle misinformation. Sophisticated, AI-enabled brand safety and suitability solutions, supported by nuanced, human intervention, can more effectively spot misinformation. This not only ensures that advertising avoids risky placements, but also demonetises these environments, helping to stop their spread.

In order to create a safer online environment for all, it is important that these solutions adhere to industry-accepted brand safety definitions, such as those laid out by the Global Alliance for Responsible Media (GARM). By creating shared definitions, brands and consumers can more easily identify and categorise unsafe environments, working together towards a safer internet.

While misinformation is unlikely to ever be fully eradicated, marketers can still play a key role in tackling its spread and making online environments better places for both brands and consumers. Understanding how misinformation works is only half the battle – marketers must also prioritise nuanced approaches to brand safety and suitability, in line with industry standards. Only with all players in the ecosystem pulling in the same direction can the spread of misinformation be greatly reduced.