This is according to new research from We Are Social in London, which spoke to groups, individuals, and brands that have experienced upsetting or illegal abuse online. Findings and recommendations from the roundtable discussions can be found in the whitepaper ‘Braving the Backlash’.
Comments critical of the brand, communities, or individuals can be allowed to remain; comments that are discriminatory, sexual, or threatening cannot.
Creative – some responses present a creative opportunity, such as the brand Honey Maid, which used hateful comments in a follow-up ad.
Humorous – if the brand’s tone of voice allows, it can pay to be funny.
Collective reply – using a single message to respond to many comments.
Template reply – having pre-prepared answers can be useful when you face large volumes of hateful comments.
Some comments go beyond what platform rules allow, or even break the law, but might still be missed by the platform. When reporting a comment, take a screenshot in case it can be used as evidence at a later date.
The overwhelming majority of brands (88.7%), according to a survey of community managers, silence hateful comments rather than respond; just 11% reply.
Speaking at an event to highlight the report, Susie Hanlon, the chief operating officer of The Mandy Network, a job site for people in acting and entertainment, noted that a failure to respond “almost says that you’re not willing to fight.” Hanlon also observed that guidelines were often written by senior leadership but carried out by some of the youngest and least experienced members of staff.
Social pages are too important for that, the report says: “social pages are your shop floor” and are “representative of the business and its values”. Silence, for the most vulnerable members of a community, looks like complicity.
However, Ian Walters, director of marketing for Pride in London, the LGBT organisation, added that brands need to weigh whether a response might be too strong, and whether even trolls (or critics in the grey area) deserve the “reverse backlash” that a forceful response could provoke.
The most important element that brands need to implement is an anti-hate policy, which needs to be clear, visible, and accessible. Private groups on Reddit, for instance, display their moderation policy at the very top of the page. These policies need to be clear both to the community and to the employees (or freelancers) tasked with moderating the conversation.
“You can never have a set of rules that are universal,” Walters warned, but brands need to be able to react in good time.
Sourced from We Are Social, WARC