Fake news is polluting our public life, but responsibility for solving the problem doesn’t lie with just one party. Stevan Randjelovic, Brand Safety Manager at GroupM EMEA, explains.

Wouldn’t it be great if there were a silver bullet – one easy person to blame for fake news? That would make solving the problem so much easier, but sadly there isn’t one.

Not that this puts advertising in the clear, because it can be a valuable source of income for those involved in the dark arts of fake news, even if consumers don’t seem to blame brands for what’s in their feeds.

According to Eurobarometer research, the European public put the responsibility on journalists (45%), governments (39%), press and TV management (36%), citizens (32%) and social platforms (26%). Indeed, many of these players are taking action to try to solve their part of the problem. More on this later.

What makes fighting online disinformation difficult is that there’s no widely accepted definition of it. The European Commission says it consists of verifiably false or misleading information that is created, presented and disseminated for economic gain, to intentionally deceive the public, and possibly to cause public harm. Disinformation is often spread for political gain, including influencing controversial issues such as abortion, immigration or, dare I say it, Brexit. However, this is one definition among many.

The truth is that despite its other cultural and social merits, the digital media marketplace has evolved into a near-perfect environment for distorted and false news to thrive. But who is acting against it?

Advertisers are trying to make a change

For one, the advertising industry is taking action. In the World Federation of Advertisers’ Global Media Charter, advertisers commit to supporting their partners in avoiding the funding of actors that seek to sow division or to inflict reputational harm on business, or on society and politics at large.

Leading independent verification service providers including Integral Ad Science, DoubleVerify and MOAT have devised technologies that use keyword tracking or external partnerships with fact-checking and journalistic organisations to identify disinformation. However, there remain issues with these approaches, notably around linguistic nuances that technology cannot always recognise.

A key part of the fake news problem lies with the people who consume it. We are prone to confirmation bias, which means we actively seek out and give more weight to evidence and information that confirms our existing beliefs. An MIT study of Twitter found that disinformation reached more people and spread six times faster than factual stories. It was seen as more interesting than real news, and one in four users retweeted it even when they knew it was fake.

Governments have sought different solutions. Germany now requires social platforms to remove hate speech, disinformation and illegal material within 24 hours or face fines. France has adopted a specific law addressing disinformation during elections. And Italy has introduced an online service for reporting false articles to its cybercrime police force.

Europe’s citizens think that journalists should also play a key part in stopping fake news, and we can find action there too. One of the top French dailies, Le Monde, has set up a fact-checking unit called Les Décodeurs, which has devised a web extension called Décodex to identify fake news sites. Similar initiatives include NewsGuard, which has 25 people assessing more than 2,000 websites against nine criteria for credibility and transparency.

Many consumers (and many in the media and advertising industries) believe social platforms have a responsibility to take significant action.

Google has included specific provisions about misleading and fake content in its ad products’ policies and has enforced them – two million pages are blocked each month. It has also launched a Fact Check label in Google News and Search, in addition to its Google News Initiative.

Facebook has worked on removing fake accounts and on detecting and prohibiting accounts associated with coordinated inauthentic behaviour. It has partnered with third-party fact-checkers to reduce the distribution of misleading content and has prioritised trusted sources in its feed. It has also introduced new transparency rules for political advertising.

But a more joined-up approach is needed

These are all constructive actions. The real concern is not a lack of action, but that it’s rarely coordinated or based on a holistic approach.

Arguably, advertising does bear a share of the blame in this issue, as it can create a financial incentive to disseminate fake news. The infamous example of North Macedonian teenagers producing fake news stories during the 2016 US presidential election to attract ad spend proves that. The industry has been addressing this challenge, but fake news is a multifaceted problem and the responsibility is shared. As we have seen, people sometimes share fake news knowingly, or fail to recognise it, and there is no single definition of the phenomenon.

Addressing fake news involves government legislation, creating greater public awareness, encouraging different behaviour by citizens and actively disincentivising such practices. This is a challenging task and may be inhibited by low media literacy, hijacked algorithms and sometimes strong political pressure.

The advertising industry must continue to play a positive role in removing the financial incentives for creators and disseminators of misleading information – and be part of a coordinated effort to tackle disinformation and fake news. Unless every player coordinates their approach and activity, fake news will continue to be a threat to our society.