In a blog post announcing the progress, YouTube, which came in for severe criticism after a number of brands found their ads appearing next to extremist content, said it has significantly improved the speed, accuracy, and scale of its moderation.
“Our machine learning systems are faster and more effective than ever before,” the company wrote. “Over 75 percent of the videos we've removed for violent extremism over the past month were taken down before receiving a single human flag.”
Though it said its system is not yet perfect, the accuracy of the process has improved dramatically. “In many cases our systems have proved more accurate than humans at flagging videos that need to be removed.”
Crucially, the machine learning system it has deployed is operating at the scale of YouTube’s worldwide platform, which the company says receives over 400 hours of video uploads every minute, a volume that poses a significant moderation challenge.
“Over the past month, our initial use of machine learning has more than doubled both the number of videos we've removed for violent extremism, as well as the rate at which we’ve taken this kind of content down.”
The announcement coincides with UK Home Secretary Amber Rudd’s visit to Silicon Valley to discuss anti-terrorism measures as part of the Global Internet Forum to Counter Terrorism, an organisation set up by Facebook, Microsoft, Twitter and YouTube last month.
She is expected to tell companies that extremists must not be allowed to upload any content at all.
In a press release, the forum stated that its mission is “to substantially disrupt terrorists’ ability to use the internet in furthering their causes, while also respecting human rights.”
YouTube says that, alongside its work with the forum, it is bringing in more human expertise to help the platform “better identify content that is being used to radicalize and recruit extremists.”
Data sourced from YouTube, Facebook, BBC, WARC; additional content by WARC staff