Ofcom’s in, but the jury’s out as to whether it can tackle harmful content online by itself

“State censorship of the internet”? “Shameful” proposals that are “long overdue”? Or the means to “tackle new online threats as they emerge”? A UK government announcement of plans to grant new powers to Ofcom designed to make the internet a safer place has provoked debate among journalists and politicians.

Extending Ofcom’s regulatory power beyond mainstream media and telecommunications to cover social media platforms and websites hosting user-generated content is the government’s first response to its 2019 Online Harms consultation.

Unlike newspapers and broadcasters, social media companies have so far enjoyed comparative freedom to self-regulate the content published on their platforms. That’s not to say the likes of Facebook, Twitter, YouTube and Snapchat have escaped scrutiny but, to date, there has been little to no statutory regulation applying to them.

The issue was thrown into the spotlight when Instagram came under fire amid reports that British teenager Molly Russell had taken her own life in 2017 after viewing graphic material about self-harm and suicide on the platform.

Instagram was quick to announce increased investment in stricter content moderation systems combining human moderators and artificial intelligence to tighten its grip on harmful content. YouTube said it removed 8.8m videos between July and September 2019, 93% of which were removed automatically through machine learning before the clips received a single human view.

Critics, however, continued to argue for independently set rules and tougher penalties. That role looks set to be filled in the UK by Ofcom, armed with a set of freshly granted powers that the government says will enable it to “lead the fight for an internet that remains vibrant and open but with the protections, accountability and transparency people deserve.”

A new chief executive, Dame Melanie Dawes, takes the helm at Ofcom next month, tasked with holding social media giants to account over commitments to identify and quickly remove harmful content, to take steps to prevent it from appearing in the first place, and to meet high expectations for “particularly robust action on terrorist content and online child sexual abuse.”

Other nations have already established regulation governing social media companies in their territories: Australia’s Sharing of Abhorrent Violent Material Act 2019 introduced custodial sentences for technology company executives and fines of up to 10% of global turnover. In 2019, Germany demanded €2m from Facebook for under-reporting of illegal activity on its platform.


Closer to home, people are more likely than ever to encounter or be exposed to “hateful” content online, according to Ofcom’s Adults: Media use and attitudes 2019 report. The impact on the younger generation, many of whom become ‘digitally independent’ at an early age, is a concern. Last year, 79% of 12- to 15-year-olds reported being exposed to something potentially harmful online. And with nearly half of parents of children aged 5 to 15 expressing concerns about their children seeing self-harm content online, up from 39% in 2018, the move to appoint a watchdog to enforce a legal duty of care on social media firms won’t come as too much of a surprise.

While ensuring we protect vulnerable members of society from content showing terrorism, violence, cyberbullying, self-harm or child abuse, we should also recognise that user-generated content can be a force for good. Encouraging debate and discussion about important issues, raising funds for good causes, connecting friends and family, and enabling companies to engage directly with their customers all bring immense value to our lives.

But what impact will Ofcom’s new regulatory powers have on business? The real change is likely to be felt by companies that own social media sites and websites hosting user-generated content, whether through forums, comments or videos.

The Department for Digital, Culture, Media and Sport estimates that fewer than 5% of UK businesses will be affected, hinting that legislation would be shaped to minimise the burden on small businesses, especially business-to-business firms, which could be excluded altogether. Critics, however, argue that the rules will cover websites of all sizes, and small businesses that cannot afford the resources needed to police user-generated content could find themselves being punished.

News media brands, keen to avoid restrictions on their ability to share and report on content that, while graphic or unpleasant, is otherwise in the public interest, were quick to call on the government for an “opt-out”, arguing that this would protect freedom of expression and avoid censorship.

Advertisers who rely on social media to reach their audiences can probably expect delays to campaign approvals. With platforms held ever more accountable, it seems likely that they will adopt even tighter content reviews and advert approvals before allowing campaigns to go live, and that takes time.

For the majority of businesses, and until more guidance is issued, closely scrutinising what content appears on your brand’s channels remains good practice. If your website allows comments, you should also think about reviewing its code of conduct to ensure it spells out, in no uncertain terms, what is, and isn’t, acceptable on the site.

The rules, of course, aren’t designed to prevent adults accessing legal content that others may find offensive, so Ofcom will be tasked with working with organisations to navigate the murky waters between content that is in bad taste and content that is illegal or poses a threat to the user.

It’s a challenging area, and Ofcom certainly has its work cut out, but it won’t be able to do it alone. If we are seriously going to tackle harmful online content, then we need to recognise that we all have a role to play – not just the government and tech companies, but the media, parents, legal experts and everyday users like you and me.

 

This article was written by our Chief Executive, Angharad Neagle, and appeared in the Western Mail newspaper on 24 February 2020.

 

Photo credits:

Main banner: Sushiman. Body copy image: bigtunaonline.

