The Google-owned video site, which like several other tech giants has come under international pressure to do more to combat inappropriate content, released these figures last week in its latest Transparency Report.
Covering July to September 2018, the report revealed that 81% of the 7.8 million removed videos were first detected by machines and 74.5% of these machine-detected videos had never received a single view.
Almost three-quarters (72.2%) of them were considered to be spam or misleading, 10.2% were removed out of concern for child safety, while 9.9% included nudity or sexual content.
And of the most egregious videos uploaded in September, those in categories such as violent extremism and child safety, YouTube reported that more than 90% had fewer than ten views, which it suggested was a sign that its fight against this type of content “is having an impact”.
“We’ve always used a mix of human reviewers and technology to address violative content on our platform, and in 2017 we started applying more advanced machine learning technology to flag content for review by our teams,” the company said in a statement.
For the first time, YouTube’s latest quarterly report also included information about the channels it took down. The company said 79.6% were removed because of spam, misleading content and scams, 12.6% contained nudity or sexual content, while 4.5% were pulled for child safety reasons.
“We terminate entire channels if they are dedicated to posting content prohibited by our community guidelines or contain a single egregious violation, like child sexual exploitation,” YouTube said.
It added that 224 million comments were removed for violating its community guidelines, mostly for spam, but that they represented “a fraction” of the billions of comments posted on YouTube each quarter.
Sourced from YouTube; additional content by WARC staff