Social media permeates so many areas of our lives that it would be difficult to imagine a world without it, but there are growing concerns about harmful content posted online and how to combat it.
This week, amid growing cries for greater regulation, the UK government unveiled new measures aimed at ensuring the UK is the safest country in the world to be online.
The Online Harms White Paper – a joint proposal from the Department for Digital, Culture, Media and Sport (DCMS) and the Home Office – recommends the introduction of an independent regulator to enforce stringent new standards. Under the new laws, social media companies and tech firms will be legally required to protect their users – and face tough penalties if they do not comply.
The plans – now out for public consultation – cover a wide range of issues from spreading terrorist content and racial hatred to other harmful behaviours such as cyber-bullying and trolling.
The White Paper has been welcomed by some charities and the police, and it’s not hard to see why.
When the family of Molly Russell, who took her own life aged 14 in 2017, looked into her Instagram account, they found disturbing content about depression and suicide. Molly’s dad believes Instagram is partly responsible for her death and has launched a campaign to highlight how widely self-harm and suicide content was promoted on the platform.
And just two weeks ago, the UK’s Health and Social Care Secretary, Matt Hancock, urged social media platforms to do more to combat anti-vaccine myths that he believes are contributing to a decrease in the measles, mumps and rubella (MMR) jab uptake.
But perhaps the most disturbing example came just last month: the attack in New Zealand on 15 March, when the gunman used Facebook Live to stream the killings.
However, while few in the UK would disagree that some form of regulation of social media content is now needed, there are many who have expressed concerns about whether the solutions proposed will effectively address the most important and complex online issues currently facing our society.
For instance, much of the harmful activity the White Paper attempts to tackle – such as child sexual abuse and terrorist propaganda – is already illegal under existing law. It is enforcing these laws that has proved most difficult so far, and critics question how the paper’s proposals will change that.
Another concern is the censorship of content – who will determine what content is harmful and what is not, and who will regulate this?
In fact, defining what constitutes some forms of harmful content without negatively impacting freedom of speech has been a challenge in the past. For example, it was recently revealed the government has abandoned attempts to create laws to tackle extremism because the process has proved ‘too difficult’. Lord Anderson QC, the reviewer of extremism law, said the definition of extremism adopted by the government was broad and ill-defined and risked making legitimate religious and political activity illegal.
There is little question that social media channels have proved a force for good, providing immense value to our lives. They help to connect people all over the world; offer platforms for debate and free expression; enable charities to raise awareness and funds for good causes; provide an opportunity for businesses to engage with their customers; and give us access to information quickly and easily.
But, as the proliferation of content channels continues, it has become too easy to access harmful content. If we are to protect individuals and the next generation of technology users from material that risks their safety, we all need to work together on a solution – technology companies, governments, legal experts, the media, parents and everyday internet users alike.
Facebook’s founder, Mark Zuckerberg, recently called on governments and regulators to play a more active role in establishing rules to control the internet. Now, social media companies and the government must work together to identify the problems that are not currently covered by law and how they could be, as well as how regulation that protects the most vulnerable without hindering freedom of speech could work.
Allowing tech companies to develop the products and services we want, while ensuring vulnerable users are afforded adequate protection from harm, will be a difficult tightrope to walk – but it is one that cannot be avoided.
This article was written by our Group Managing Director, Angharad Neagle, and appeared in the Western Mail newspaper on 12 April 2019.