The recent fallout over brand safety is yet another sign that the industry must do more to guarantee advertising appears only in brand-safe environments – environments the advertiser actually wants to appear in.
The fact that this keeps happening, time and time again, might suggest that the technology is not available, or that these failings are inevitable. Nothing could be further from the truth.
It is possible to take a variety of steps pre-bid to ensure that advertising does not appear in harmful environments. Suspicious IP addresses, app bundle IDs and inappropriate web page names are all strong indicators of poor environments or non-human traffic, and should be blocked before an ad can be shown. After all, prevention is better than cure: it is better to have a publisher complain that they cannot display adverts, and have that manually checked and approved, than to place ads in non-brand-safe environments.
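To make the pre-bid step concrete, here is a minimal sketch of how such filtering might look. The blocklists, field names and bid-request shape are all invented for illustration; a real system would draw on continually updated threat feeds and the OpenRTB request format.

```python
# Hypothetical blocklists for illustration only; real lists are large,
# curated and continually updated.
SUSPICIOUS_IPS = {"203.0.113.7", "198.51.100.22"}   # RFC 5737 example addresses
BLOCKED_BUNDLE_IDS = {"com.example.fakegame"}       # made-up app bundle ID
BLOCKED_PAGE_TERMS = {"freestreams", "warez"}       # made-up page-name terms

def passes_pre_bid_checks(request: dict) -> bool:
    """Return True only if no suspicious pre-bid indicator is present."""
    if request.get("ip") in SUSPICIOUS_IPS:
        return False
    if request.get("bundle_id") in BLOCKED_BUNDLE_IDS:
        return False
    page = (request.get("page_url") or "").lower()
    if any(term in page for term in BLOCKED_PAGE_TERMS):
        return False
    return True
```

The key design choice is that the check runs before any bid is placed, so a flagged request never reaches an auction in the first place.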
Another essential step is to implement keyword content blocking. A blacklist is created of words which branded content should never be featured near; these can range from racist slurs and bad language to news events. While such coverage is neither offensive nor illegal, brands understandably do not want to feature advertising next to reports of deaths or tragedies, and keyword blocking is a powerful tool to ensure this does not happen. Keyword blocking can also catch words used in the descriptions of videos.
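A keyword check of this kind can be sketched in a few lines. The blacklist below is a tiny invented sample; real lists run to thousands of terms and are maintained per brand and per news cycle.

```python
import re

# Tiny hypothetical blacklist for illustration; real lists are far larger
# and tailored to each brand's sensitivities and current news events.
BLACKLIST = {"tragedy", "crash", "attack"}

def contains_blocked_keyword(text: str) -> bool:
    """Check page copy or a video description against the blacklist."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return not BLACKLIST.isdisjoint(words)
```

Because the same function runs over any text field, it applies equally to article bodies, headlines and video descriptions, as noted above.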
Artificial intelligence can also be used to optimise traffic towards environments with the very lowest risk of fraud or non-brand-safe content. This adds another layer of automation, applying the kind of 'human' intelligence an ad ops professional uses to smell a rat when confronted with a combination of suspicious factors – but across millions of ad requests each day. AI can prevent ads from being shown when there is even a slight chance of a non-brand-safe placement.
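The "combination of suspicious factors" idea can be illustrated with a toy risk score that sums weak signals and blocks at a deliberately low threshold. The signal names, weights and threshold here are all invented; a production system would learn them from labelled traffic rather than hard-code them.

```python
# Invented weights for illustration; a real system would learn these
# from labelled fraud data rather than hard-code them.
RISK_WEIGHTS = {
    "datacenter_ip": 0.5,   # request originates from a hosting provider
    "no_referrer": 0.2,     # page arrived with no referrer at all
    "abnormal_ctr": 0.4,    # click-through rate far outside normal range
}

def risk_score(signals: set) -> float:
    """Sum the weights of every suspicious signal observed on a request."""
    return sum(RISK_WEIGHTS.get(s, 0.0) for s in signals)

def should_serve(signals: set, threshold: float = 0.3) -> bool:
    # Err on the side of caution: block even at modest combined risk,
    # mirroring the "even a slight chance" stance described above.
    return risk_score(signals) < threshold
```

Note that no single weak signal here is damning on its own; it is the combination that trips the threshold, which is exactly the pattern an experienced ad ops professional spots by eye.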
These steps might seem simple, but they are effective. Last year the hugely sophisticated 'Methbot' operation was detected by White Ops and found to impersonate established sites and fabricate inventory, faking mouse movements and social network logins. Using the fraud detection techniques above, only 0.00064% of impressions were found to have been delivered against non-human traffic.
Not all companies have the technology or scope to deliver the capabilities above – although if they work in the ad-tech industry they should – but a number of extremely effective third parties, such as Moat, Integral Ad Science, Forensiq and DoubleVerify, can lease technology and monitor campaigns to ensure all ads are delivered in brand-safe environments.
The final step is a decidedly low-tech solution – people. It is vital to employ staff to continually monitor and check for brand safety risks and fraud. Consider the controversy around Facebook's initial decision to block the famous photo of a naked child fleeing a napalm attack during the Vietnam war: Facebook's technology could not establish the appropriateness of the photo; it took a human's sensibilities. Tech providers must maintain a fully staffed team to continually monitor and assess the data delivered by artificial intelligence, internal systems and third parties.
The new TAG program, which started in the US and is due to roll out across the industry this year, should make a huge difference to brand safety – as long as the entire industry, from giants like YouTube to the latest start-up, commits to it. But for now it is time to stop stalling and put brand safety first across the board; our reputation as an industry depends on it.