Content moderation has become an increasing priority for social media companies and those that do business with them. In light of the Christchurch atrocity, during which Facebook did not immediately remove a livestream of the attack, some firms in China show what close moderation looks like.

A report in the South China Morning Post, whose journalists were given access to the content moderation team of Inke, one of China’s largest livestreaming companies, shows what strict moderation takes.

The stakes for the company are high. Live streaming has exploded in China, where the format ranges from sing-along videos to commercial demonstrations. Last year, almost 400 million Chinese internet users used a live-streaming platform.

Inke dedicates 60% of its workforce (well over 1,000 people) to content moderation. It must ensure that livestreamed content abides by guidelines stipulated by the China Association of Performing Arts, which are updated weekly to reflect what the authorities deem unacceptable online content.

Even by Chinese standards, Inke’s staffing is small compared with that of ByteDance, which owns TikTok and Toutiao. Last year ByteDance had to take down an entire app and issue a public apology after “vulgar content” appeared; it then increased its vetting staff from 6,000 to 10,000.

Most of the guidelines are relatively obvious: politically subversive speech is, of course, banned, as are sex acts and violence against others or oneself. The company uses AI systems to flag content, but these often run up against the “bikini problem”: the software can work out when a bikini is shown, but cannot connect it to the context (a pool party versus a bedroom), which human moderators understand far better. The issue is, as ever, one of scale.

Ultimately, the company is able to pay its moderators far less than an American moderator would need to be paid, with a starting salary of US$3 an hour. Turnover is extremely high, with many people leaving before the end of their first month. Most of the content they review is, unsurprisingly, mediocre.

The topic has gained a high profile across the world following the Christchurch atrocity, in which 50 people were murdered by a white supremacist who broadcast the attack on Facebook Live. The government of Australia has passed a law stipulating harsh punishments, including custodial sentences, for executives of social media companies that do not quickly remove such violent posts. This week, proposals from the UK government also contemplated personal liability for executives.

Sourced from South China Morning Post; additional content by WARC staff