GLOBAL: Tech giant Google and its Jigsaw subsidiary last week launched a new machine-learning tool to help online platforms, particularly news organisations, clean up abusive comments posted on their sites.
The new software, known as Perspective, has been tested at the New York Times, where an entire team previously had to sift through and moderate some 11,000 comments a day.
"We've worked together to train models that allow Times moderators to sort through comments more quickly, and we'll work with them to enable comments on more articles every day," Jared Cohen, President of Jigsaw, wrote in a blog post.
Citing a recent study from the Data & Society Research Institute, he said online harassment has affected the lives of roughly 140m people in the US, and many more elsewhere, while also costing content providers time and money.
"News organizations want to encourage engagement and discussion around their content, but find that sorting through millions of comments to find those that are trolling or abusive takes a lot of money, labor, and time. As a result, many sites have shut down comments altogether," Cohen explained.
He said that Perspective works by reviewing comments and scoring them based on how similar they are to comments that people said were "toxic" or likely to make them leave a conversation.
Its self-learning capability means that "each time Perspective finds new examples of potentially toxic comments, or is provided with corrections from users, it can get better at scoring future comments".
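In practice, this scoring is exposed to developers as an API that returns a probability between 0 and 1 that a comment resembles ones rated "toxic". A minimal sketch of how a moderation tool might use it, following the request and response shape of the publicly documented Perspective API (the endpoint, field names and the 0.8 flagging threshold here are illustrative assumptions, not details from the article):

```python
import json

# Endpoint as documented for the Perspective API (assumption for this sketch).
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(comment_text):
    """Build an AnalyzeComment request body asking for a TOXICITY score."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def extract_toxicity(response):
    """Pull the 0-1 toxicity probability out of an AnalyzeComment response."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def needs_review(response, threshold=0.8):
    """Flag a comment for human moderation above a chosen score threshold."""
    return extract_toxicity(response) >= threshold

if __name__ == "__main__":
    # Build and print the JSON payload a client would POST to API_URL.
    payload = build_request("Example comment text")
    print(json.dumps(payload))

    # Hypothetical response shape (no network call is made in this sketch).
    sample_response = {
        "attributeScores": {
            "TOXICITY": {"summaryScore": {"value": 0.92, "type": "PROBABILITY"}}
        }
    }
    print(needs_review(sample_response))
```

A newsroom tool would send such requests for each incoming comment and route high-scoring ones to human moderators, which is how the Times workflow described above can cover more articles with the same team.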
Cohen added that Perspective will be made available to international media organisations that are partners in Google’s Digital News Initiative, which includes such well-known names as the BBC, the Financial Times and the Economist, among many others.
Over the coming year, Google and Jigsaw aim to develop new models that work in languages other than English, as well as ones that can identify other attributes of comments, such as when they are "unsubstantial or off-topic".
Data sourced from Google; additional data by Warc staff