Individual executives at social media companies could be held personally liable for harmful content distributed on their platforms, according to plans drawn up by the British government.

A long-awaited white paper is expected to be published on Monday, with proposals for a new “duty of care” to be policed by an independent regulator that is likely to be funded by a levy on media companies.

The plans were first reported by the Guardian, which obtained a leaked government report at the end of last week. It is understood that the regulator – most likely Ofcom until a new body is established – will be given powers to impose substantial fines on companies and to hold executives personally liable for breaches.

According to the newspaper, the scope of the recommendations is broad: the legislation will extend to online messaging services and file-hosting sites as well as the likes of Facebook and Google.

The leaked document makes clear that ministers believe the days of self-regulation are over and that it is time to set clear standards, backed up by enforcement powers.

Social media companies will be expected to sign up to a code of practice explaining what steps they are taking to meet the duty of care, to co-operate with the police and other law enforcement agencies, and to produce annual transparency reports disclosing the prevalence of harmful content on their platforms.

The government has come under increasing pressure to regulate the internet, especially since it was revealed last month that the far-right gunman who shot dead 50 people in Christchurch, New Zealand, livestreamed the atrocity on Facebook, with copies of the footage then shared on YouTube and Twitter.

The suicide of 14-year-old Molly Russell in 2017 is also reported to have had a “strong impact” on the white paper, after it was revealed she viewed content linked to self-harm on Instagram before taking her own life.

Meanwhile, just as the Guardian reported on the government’s tough new approach, a separate BBC investigation found that TikTok, the video-sharing app popular among teenagers and children, had failed to suspend the accounts of people sending sexual messages to children.

While the company deleted the majority of these comments when they were reported, most users who posted them were able to remain on the platform, the BBC claimed.

TikTok said that child protection is an “industry-wide challenge” and that promoting a “safe and positive app environment” remains the company’s top priority.

Sourced from Guardian, BBC; additional content by WARC staff