On 15 December 2020, the Government published its full response to the Online Harms White Paper, confirming that the Online Harms Bill will establish a new statutory duty of care which is intended to make online companies take responsibility for the safety of their users.

The Digital Secretary, Oliver Dowden, made a statement to Parliament on the Government’s response.

The Government stated that the legislation will tackle illegal activity taking place online and prevent children from being exposed to inappropriate material. The legislation will also address other types of harm that spread online – from dangerous misinformation about vaccines to destructive pro-anorexia content.

The new regulatory framework will apply to:

  • companies which host user-generated content that can be accessed by users in the UK;
  • companies which facilitate public or private online interaction between service users, one or more of whom is in the UK; and/or
  • search engines.

There will, however, be exemptions for content published on a newspaper’s or broadcaster’s website and for user comments on that content.

The legislation will set out a general definition of harmful content and activity. It will require companies to prevent the proliferation of illegal content and activity online, and ensure that children who use their services are not exposed to harmful content.

The rules will be enforced by Ofcom, which will be the online harms regulator.  It will issue codes of practice which outline the systems and processes that companies need to adopt to fulfil their duty of care.

Ofcom will have the power to fine companies failing in their duty of care up to £18m, or 10% of annual global turnover – whichever is higher. It will also have the power to block non-compliant services, including those that fail to do enough to protect children and other users, from being accessed in the UK.

The government has also suggested that Ofcom will be empowered via secondary legislation to impose criminal sanctions against individual executives or senior managers at technology firms, for example, if they do not respond fully, accurately and in a timely manner to information requests by the regulator.

The legislation will apply to any company in the world that serves UK-based users. The rules will be tiered so that the most popular sites and services (those with large audiences) must go further by setting and enforcing clear terms and conditions which explicitly state how they will handle content that is legal but could still cause significant physical or psychological harm.

This will include misinformation and disinformation about a range of topics such as coronavirus vaccines, marking the first time online misinformation has come under the remit of a government regulator.

A small group of companies with the largest online presence and high-risk features, likely to include Facebook, TikTok, Instagram and Twitter, will be in Category 1, while Category 2 services will include platforms that host dating services or pornography, as well as private messaging apps.

The Government has said that fewer than 3% of UK businesses will fall within the scope of the legislation, and that the vast majority of those that do will be Category 2 services.

The Government has issued two Interim Codes of Practice on online harms.

The Government has also issued a report on transparency reporting in relation to online harms.

The government’s response attracted widespread media comment.

The reaction from NGOs was mixed.