False behaviours and identities online can undermine the whole system of digital advertising that brands currently rely on. As such, taking a stand on disinformation is not just a moral imperative but also business critical, says Chloe Colliver, head of digital policy and strategy at the Institute for Strategic Dialogue (ISD).
Conscious media investment
This article is part of a series of articles from the WARC Guide to conscious media investment.
In December 2020, in the wake of a fraught presidential election in the US, an ISD investigation exposed a series of Facebook groups exhibiting a new form of information laundering. Disinformation and hyper-partisan media were posted by networks of false or managed accounts to an audience of over 27 million users on the platform. The violent attack on the Capitol just a month later demonstrated, with tragic consequences, the real-world harm that online disinformation can fuel or exacerbate.
Figure 1: How the network(s) identified in ISD’s December 2020 investigation, Spin Cycle, operated.
This was not the first deceptive network of such scale that ISD had identified on the platform. An investigation in June 2020 found an extensive web of coordinated Facebook assets connected to NaturalNews, a US-based commercial enterprise promoting conspiracy theories about health, race and politics. Later that year, in partnership with the German Marshall Fund, ISD analysed a network of thousands of Twitter accounts acting in seeming coordination to spread pro-Chinese Communist Party disinformation about COVID-19.
For advertisers, while the content spread by these networks of fake identities should be a concern in itself, their presence also distorts any understanding of online audiences. Since July 2018, Facebook has reported removing 78 such large-scale networks from its platform. ISD found that the company had earned over $23 million in advertising revenue from these false identities’ activities. What we do not know is how many legitimate advertisers’ dollars were lost in parallel, spent on ads shown to fake account networks rather than real people.
Facebook reported removing over 3.2 billion fake accounts between April and September 2019. By comparison, the platform had an estimated 1.6 billion daily active users worldwide in the same period. This suggests that the proportion of false eyeballs on advertising content could be staggeringly high. Technology companies’ inability to get a handle on false content can be a genuine threat to public health or safety. But false behaviours and identities online can also undermine the whole system of digital advertising that brands currently rely on. As such, taking a stand on disinformation relates not only to democratic safeguarding, but also to business interests.
Brands would benefit from a world in which technology platforms are mandated to act against the deceptive content, accounts and behaviours their services currently enable. Advertisers also have leverage to make that argument in a way that civil society voices cannot. They must call on social media companies to enforce their terms of service against inauthentic identities in order to win back the trust of brands and the wider public.
Read more articles from the WARC Guide to conscious media investment.
Advertising, but at what cost?
Hate speech online has real-world consequences
Planning for inclusion: A media case for conscious advertising
Christopher Kenna, Chris Ladd, Francesca Leronni, Martin Radford
Telling stories about advertising and human rights
Dr Pia Oberoi
Unpacking the tensions in media: Ensuring advertising supports inclusion
Isabel Massey and Jerry Daykin
Can responsible media investing be operationalized?
Krystal Olivieri and Belinda J. Smith
How becoming more diverse, inclusive and responsible is helping The Sun remain relevant
Dominic Carter and Shelley Bishton