The regulation of digital intermediaries has been an increasingly high-profile issue. Earlier this month, The Times called for the creation of a statutory regulator – which they would call Ofnet – ‘to protect internet users from harmful content and the monopolistic behaviour of the biggest online names’.

The demands reflect concerns about the proliferation of extreme content, hate speech, online bullying and the misuse of users’ data. Underlying the call for regulation is a sense that the legal regime for digital intermediaries – largely established at a time when some of the current tech giants were rising stars in an evolving industry – fails to do justice to the sheer power wielded by the largest companies.

It is often pointed out that digital intermediaries are already subject to some regulatory bodies. For example, if a complainant is unhappy about Google’s response to a right to be forgotten request, then he or she can take the complaint to the Information Commissioner. Video-on-demand services, such as Amazon Prime, are regulated by Ofcom (although the regulations are fairly minimal when compared with those applied to television). However, the system of online regulation is relatively piecemeal, and the key question is whether a more comprehensive system should be overseen by a single regulatory body.

Any system of regulation is laden with risks and challenges. While a regulatory body may be able to protect rights in a way that is cheaper and quicker than formal legal proceedings, its adjudicatory procedures will lack the rigorous evidential rules found in courts. There is also the risk of unintended consequences. If a regulator imposes strict obligations and heavy penalties, then a risk-averse company may take the path of least resistance. In the case of a digital intermediary, that could lead to the company being incentivised to take down or block content whenever it receives a complaint (regardless of its merits). These risks have to be considered by the various bodies (such as the House of Lords Communications Committee) currently considering internet regulation.

While such risks and compromises are a common issue for regulation in general, a major challenge in relation to material on the digital media is the problem of scale. The sheer amount of content that is posted on and hosted by a number of digital services means that a central regulatory body would only have the resources to investigate and hear complaints in a small fraction of cases. It would be hard for an Ofnet (to use The Times’ proposal) to regulate digital content in the same way that Ofcom oversees broadcast standards and video-on-demand.

One strategy to address this issue is to implement a framework in which the regulatory body does not lay down the precise standards for content or enforce those standards in every case. Instead, a company could be left to devise its own content standards and processes, and the regulator would oversee those internal processes to make sure that certain benchmarks are met. There are various models of regulation that seek to combine elements of self-regulation and state regulation. For example, in their chapter in the Oxford Handbook of Regulation, Coglianese and Mendelson refer to an approach known as ‘meta-regulation’, which is described as ‘a process of regulating the regulators’:

‘Meta‐regulation refers to ways that outside regulators deliberately—rather than unintentionally—seek to induce targets to develop their own internal, self‐regulatory responses to public problems.’

While regulation scholars debate the precise meaning of such terminology and the differences from other models, my point is simply to show that an approach can be taken that assigns a significant role to the regulated companies in implementing systems that pursue regulatory goals.

How would this work for the digital intermediary? A statutory regulator could require or incentivise the intermediary company to come up with its own content code with provisions on privacy, hate speech, extremism, etc. The regulator would not expect any specific wording or formulation to be used in the codes, but would demand that certain ground is covered. The company could also be expected to have a process of consultation when devising its content standards. The regulator could require the company to put in place a process to adjudicate on complaints, which meets standards of procedural fairness and has a sufficiently independent process of appeal. The company could be required to meet certain standards for transparency, for example in publishing the criteria for content decisions and, where appropriate, informing people when and why content has been removed.

This approach to regulation need not be limited to content standards. The regulator could also require the intermediary company to have a process in place to hear complaints about algorithmic decisions. In addition, the company could be under an obligation to devise a policy in relation to elections – setting out transparency rules for political advertising and giving a prominent place to messages from political parties in the weeks prior to an election (with prominent positions being allocated to parties on fair terms).

The list of topics for regulation can go on. The central point is that such a system (whether it is described as meta-regulation or by some other name) is less prescriptive on certain matters and leaves the company to develop its own internal methods. Consequently, the expertise of the company can be relied on to meet regulatory goals. Such an approach allows for a level of flexibility in the operation of the regulatory standards, taking into account the differences between the various services offered by the companies. The most appropriate process for complaints handling may be different for a search engine than for a social media company. A system of regulation that devolves certain issues to the company is more likely to accommodate those differences.

A system that devolves such matters to the companies, however, faces challenges of its own. In particular, it puts a level of trust in the intermediary companies to find ways to meet the regulatory goals – and whether the tech giants really deserve such trust is a matter of debate. However, this concern should not be overstated, as the approach is not the same as pure self-regulation and the adequacy of the system would be checked by a statutory body.

A further concern is that the decisions left to the intermediary companies raise important issues of freedom of expression. Simply leaving it to companies to choose the terms of a content standards code and to enforce it opens up the risk of private censorship. The risk is that the company would act as both legislator and judge in making decisions that have a massive impact on the views and ideas that are likely to be heard.

Given the potential for private censorship, such a devolved method of regulation alone is not enough. Instead, that method could be used in conjunction with other more traditional forms of regulation and co-regulation. While the regulator could have a primary role in overseeing the internal systems of the companies (as described above), it could also act as a final tier of appeal in relation to certain decisions. So, in some cases where complainants or content producers are unhappy with the company’s internal decisions, they could take the issue to a statutory regulatory body (which provides a more independent alternative to Mark Zuckerberg’s suggestion for a ‘supreme court’ for Facebook). Having an external body to hear final appeals could allow for a more transparent and reasoned decision in those cases where the complaint raises a point of principle and where consistency is particularly important.

In the current climate, it seems that the key question is not whether digital intermediaries should be subject to a more formal system of regulation, but how this can be implemented. No system is flawless and all regulatory methods pose some risks. A more devolved system of regulation, which looks to internal systems and procedures, offers one possible option that could combine the flexibility of self-regulation with some independent oversight and respond to some of the challenges of scale.

Jacob Rowbottom is a Fellow of University College, Oxford and is the author of Media Law (2018).

This post was originally published on the LSE Media Policy Project Blog and is reproduced with permission and thanks.