The Online Harms What?

Caught in the middle of a pandemic and Britain’s ongoing divorce from Europe, it is not surprising that a piece of media-related legislation might be lost in the quagmire of more pressing matters in British politics. But this past year has proven that we need legislation to protect all of us online, now more than ever.
Back in 2017, the Digital Economy Act included some online child protection measures by requiring age verification on legal porn sites. Although the Act became law, the government failed to implement age verification, saying its proposed Online Harms Bill would supersede it. Matt Hancock, then Culture Secretary, said that the new laws would make Britain “the safest place in the world” to be online (The Guardian, 19.5.18). Following a consultation, including evidence from the Children’s Media Foundation, the Government published a White Paper in 2019. This proposed that all tech firms, big or small, would have a duty of care to their users regarding terrorist content, child abuse, and mis- and disinformation, or face harsh sanctions.
The tech firms made all the right noises, talking about codes of practice, and there was much talk of platforms being ‘safer by design’. Yes, there was much talk. But nobody was talking about when. 2018, 2019, 2020 – three years in which we could at least have had that age verification in place; three years lost. And then Covid.
So where is the Online Harms Bill? It was unveiled in a flurry of political action following the news that 14-year-old Molly Russell’s suicide was linked to her viewing harmful content online. It was promised ‘in this parliamentary session’. But what does that mean? Speaking to the House of Lords Democracy and Digital Technologies Committee last June, DCMS minister Caroline Dinenage MP said that, because of Covid, the Government “would not commit to bringing a draft bill to parliament before the end of 2021.”
Surely it is because of Covid that we need this legislation? The minister’s own colleague, and the one-time Secretary of State who introduced the bill, Jeremy Wright, thinks so: “The pandemic has brought into sharp relief the absence of the long-promised Online Harms Bill.” (Daily Telegraph, 9.10.20)
So where is it?
If Caroline Dinenage’s timing is correct, then the bill won’t become law much before 2023 or 2024. As Lord Puttnam has said,
“Here’s a bill that the Government paraded as being very important but which they’ve managed to lose somehow.” As chair of the Democracy and Digital Technologies Committee, he went on to say that a potential 2024 start date means it would be “seven years from conception – in the technology world that’s two lifetimes. In a child’s world it is also about two lifetimes. And that is unacceptable.”
The CMF has been flagging the need for internet safety regulations for more than five years, and during that time has regularly highlighted the risks and issues to politicians, policy makers and the industry. As the delays continue, the need for the Online Harms Bill is becoming increasingly evident, not least since the onset of the pandemic:
- The Office for National Statistics (ONS) has said that one in five British children have been bullied online. Between April and October 2020, the NSPCC held more than 1,000 counselling sessions with young people about online bullying. (Daily Telegraph, 16.11.20)
- During lockdown the rate of online child grooming rose, with more than 13 offences committed every day, half of which involved Facebook. WhatsApp’s new ‘disappearing messages’ feature was called a ‘gift for groomers’. (Daily Telegraph, 11.11.20)
- The Safer Internet Centre, which provides resources to protect children online, saw a 50% rise in reports of indecent images in September.
- In evidence submitted to the Government’s Gambling Act Review, research by the Children’s Commissioner revealed “major harms on online gaming platforms, especially financial harms, which are currently not listed as within the scope of the duty of care.” (Gaming the System, 22.10.19)
No-one pretends that online safety is an easy problem to tackle. At its core, the success of the digital industry is built on the sector’s inherent ability to disrupt traditional paradigms, and to learn and evolve at speed. This has led to fantastic innovation and positive experiences for millions of users – but that speed also leaves regulators for dust. It’s worth bearing in mind, for instance, that when the CMF wrote one of our first responses to an online safety consultation in 2016, TikTok hadn’t even been launched!
As we know, children are often forgotten by policy makers. However, events over the last few months have brought some of the issues back into focus for the ‘grown up’ audience. As you read this, you’ll no doubt be aware of the Anti-Vax campaigns, accusations of interference in the US election, and the stifling atmosphere of ‘Fake News’.
With so many storms engulfing digital media over the last year, especially the social media platforms, it’s evident that self-regulation is failing, and that a more robust framework of governance is required.
Of course, another challenge is that many of the major digital businesses popular with children fall outside UK jurisdiction. As the UK becomes a smaller, autonomous market post-Brexit, the risk is that its influence on these businesses will wane even further. The CMF would therefore urge the UK Government to work actively and collaboratively with other countries – and the EU – to develop a coordinated regulatory approach that can be enforced.
The good news is that there are some very able people who care about these issues and who will not relent until the government takes action. In November, representatives of the CMF attended an online event organised by the Parliamentary Internet, Communications and Technology Forum (PICTFOR) that brought together politicians, industry representatives, academics, children’s advocacy groups and charities to highlight the failure to address this serious threat to children’s safety and well-being.
The event was chaired by Lord McNally, who has introduced a Private Member’s Bill that would require Ofcom to prepare for the introduction of an Online Harms Reduction Regulator, effectively forcing the government to act. The bill was based on work done by Professor Lorna Woods of the University of Essex and William Perrin of the Carnegie UK Trust. Over the last three years, they have focused on how to shift the debate from ‘publishing’ and the removal of specific content, to harm prevention.
Professor Woods has developed a detailed legal framework that uses a ‘duty of care’ approach, placing social media platforms into a category of ‘public or quasi-public spaces’. And because these are public spaces and not private walled gardens, the overarching consideration in creating such a space must be user safety, not maximising the number of users or their engagement with the content. And, crucially, the more vulnerable an audience, the greater the responsibility. Lord McNally’s bill is currently awaiting a second reading.
The quality of the discussion in PICTFOR’s online event was excellent, with powerful submissions from a range of organisations. At a time when we are bombarded with stories about the failures of our public institutions, it is sometimes good to be reminded there are still plenty of decent people prepared to give their time and energy to public service.
This article represents the views of the authors and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science. This article was originally published on The Children’s Media Foundation website and is reposted with thanks.
Featured image: Photo by Annie Spratt on Unsplash
John Kent, Jayne Kirkham and Colin Ward are members of The Children’s Media Foundation’s executive committee.
This post originally appeared on the LSE Media Policy Project Blog and is reproduced with permission and thanks.