Twitch finally reveals safeguarding measures, allaying brand fears | WARC | The Feed

Twitch, the Amazon-owned streaming platform, has published its first report detailing its efforts to protect the millions of people who visit the platform each day.
Context
The platform, which mainly focuses on video game live streaming, has seen explosive growth – a 40% rise in channels in 2020 – and reaches a valuable audience for brands. But it has also drawn criticism over its handling of hateful conduct, sexual harassment, and predators. A 2019 study found that almost half of the Twitch users surveyed had faced some form of harassment.
The details
- The company’s Transparency Report says that more than 95% of platform content during the second half of 2020 was reviewed either by its AI-powered AutoMod tool, which blocks inappropriate content, or by human moderators.
- Manual deletion of messages by creators and moderators rose 98% relative to the first half of the year, which the company attributes to the 40% increase in channels over the same period.
- Total enforcements rose 41% during the year, covering a wide range of categories including hateful conduct, sexual harassment, violence and gore, nudity, and even terrorist propaganda, which Twitch says is very rare.
The challenge
“Because content is viewed as it is created, live-streaming provides a particularly challenging environment for machine detection to keep up. Nevertheless, we have found ways to use machine detection to bolster proactive moderation on Twitch, and we will continue to invest in these technologies to improve them” – Twitch Transparency Report 2020.
Sourced from Twitch