The amount of harmful content removed from Instagram has risen, according to Facebook's latest Community Standards Enforcement Report. In Q3 2019, 3.3 million pieces of Instagram content were removed or covered with a warning, an increase of 11.2% on the previous quarter.
The largest category was content removed for attempted sales of drugs and firearms, at 1.6m pieces, followed by suicide and self-injury (0.8m), child nudity and sexual exploitation (0.8m) and terrorist propaganda (0.1m).
These figures understate the true level of harmful content on Instagram, as Facebook only shares data on four of the nine categories of harmful content it monitors.
Nine-tenths (90.8%) of this content was removed before any users saw it, a rate unchanged from Q2 2019. Instagram performs worse on suicide and self-injury content, where one-fifth (20.9%) of the content removed or covered with a warning was first reported by users.
Despite this progress, Facebook still faces significant pressure over brand safety. Marketers are particularly concerned because ads appearing alongside harmful content can damage consumer trust and sales. Indeed, WFA chief executive Stephan Loerke has said safety should be a top brand priority. Facebook is currently helping brands understand their own safety tolerances and what they consider to be unsuitable.