It is happening. Amid rising concern about the overwhelming size, power and ethics of online intermediaries, governments and regulators are debating a comprehensive new regulatory framework for the Internet. A UK government paper on the proposed Digital Charter is expected in the coming weeks, parliamentary committees will report, and the LSE T3 Commission will table proposals in November.

What will result from this, hopefully, is a new settlement for the internet: not merely a set of new standards but an institutional framework that involves a combination of ethical self-restraint on the part of the major platforms, improved competencies and self-regulation by users, and a legal framework which incentivises harm reduction and verification.

There is broad agreement that the system is broken. Unfortunately, the proposals on the table so far will not fix it.

A common theme of these proposals is the idea of a new regulator for social media. But there is no consensus on what the new body would do. This post considers the idea in broad terms.

Existing proposals posit the following regulatory roles (there are too many to cover in full, so here are a few key functions):

  • Horizon scanning and articulating a collective vision for the internet through consultation (Doteveryone)
  • Developing complaints mechanisms (Doteveryone)
  • ‘Meta regulation’ (also known as co-regulation or regulated self-regulation) (Rowbottom).
  • Protection of Internet users from harmful content and the monopolistic behaviour of the biggest online names (The Times).
  • An appeals mechanism for decisions taken by platforms that impact fundamental rights, or a ‘supreme court for Facebook’ (Zuckerberg).
  • A mechanism to ensure that companies have processes in place to meet their responsibility under a revised and expanded ‘duty of care’. (Perrin and Woods).

None of these proposals is fully worked up. They lack a holistic account of the incentive structures of our media system as a whole, and a transition plan which reflects the iterative nature of the negotiation process with platforms. And because of the complex and interacting nature of the system, there is a danger of unintended consequences of reform, such as privatised or collateral censorship.

How Reform Will Work

Part of this is about redefining what is illegal, and part is about what happens to illegal content. And then there is the wider regulatory question of how to deal with the social externalities (wider benefits and costs) of legal content.

(i) Redefining illegal and harmful content. It is likely in the coming years that, through multiple separate processes, Parliament will develop new standards and offences relating to fraud, deliberate misinformation and false statements of fact in various legal areas, including election law, harassment and defamation. Incitement and hate speech will also be redefined, for example to include misogyny. Misinformation is a more difficult category. Whilst ‘truth police’ must be avoided, decentralised systemic incentives to filter deliberate lies and misinformation can be improved: a great deal can be achieved simply by developing the law on what constitutes deliberate, malicious misinformation according to a high standard and calibrating the appropriate remedies. But hugely expanding the category of illegal content, or introducing uncertainty and unchecked discretion, would chill speech.

(ii) Dealing with illegal content. It is also going to be necessary to keep under review procedures for notice and takedown and the overall liability framework. A perceived reluctance to take down content after platforms are notified has been a feature of the debate on hate speech and terrorism, for example, and led to the German NetzDG law. If content is illegal, the responsibility of hosts to remove it must be clearly defined, including time limits. But there are dangers in doing too much. Can privatised censorship by dominant platforms be avoided? Disagreement about the constitutional treatment of free speech is a factor here: platform takedown is not state censorship, but at what level of platform dominance does it become equivalent to it? Liability may need to be tightened, but only within a much improved independent system of monitoring and transparency.

(iii) Algorithmic filtering to deal with social externalities of content. The fuzzier category is content considered socially undesirable but not illegal, and its opposite: ‘quality’ content such as reliable news. Decisions about these categories of content have been considered ‘editorial’ concerns, best carried out autonomously from the state and law. Much of the systemic challenge of fakery and online unpleasantness resides in this category, and is therefore outside the remit of these attempts to tighten procedures for illegal content.

(iv) The longer term: fiscal and competition incentives. It is often asserted that nation-states are powerless and can do nothing about powerful global platforms. Whilst international cooperation will eventually be needed, even mid-sized economies such as the UK have a range of longer-term options, such as reforming sector-specific competition law to introduce public interest tests and introducing new tax regimes. Along with (i) and (ii), these policy levers are the threat that states can wield in order to get platforms to act, but this must be done in a careful, measured, transparent way; otherwise they will themselves constitute censorship.

Can we fix this with a new ‘duty of care’?

Introducing new duties of care for the platforms is not a complete answer. As Doteveryone put it: “there is a tendency to focus on individual rights and issues such as safety, data use and security which crowds out concerns for social impacts” (Regulating for Responsible Technology, May 2018, p. 3). Harms matter of course and must be addressed, but the problem of misinformation and the overall quality of our media systems is not one that can be captured by the overarching notion of harm, because social externalities are more than the sum of individual harms.

A regulatory solution driven by complaints and focused on legal remedies and liability addresses only part of the problem, because only part of the problem is manifest in individual harms that would meet a legal threshold. It is not clear who has grounds to sue for a harm to ‘democracy’, and such harms are vague and not justiciable. In regulatory terms, wider social externalities are a separate issue from harms because they are by definition not measurable at the level of the individual user. The ‘duty of care’ is owed to society, not just to an individual, so regulators must have monitoring duties independent of complaints.

Process Matters

We also know that the endemic conflict of interest between government and media gatekeepers is fraught with danger. Developing institutional solutions to these problems will require better, independent information about what is actually happening on the platforms. This information is sensitive, for both commercial and privacy reasons, and cannot be published widely; it can, however, be disclosed to a regulatory authority. That authority must be independent of the government of the day, and the monitoring of opinion formation in society must be a function carried out independently of the state.

Supporting Quality Media: the role of taxation and levies

The crisis in the funding of reliable news and information, particularly at the local level, has led others, such as the campaign for media reform, to propose that platforms be subject to a levy to raise money for funding news or quality content.

Whilst distributed funding models are often criticised on free speech grounds, it is possible to address these concerns. The deeper problem with these proposals, which draw on a long tradition of proposals for contestable funds for public service media, is that content will be funded with no guarantee of an audience for that content. Therefore levies need to be accompanied by a system of public accountability to ensure that platforms also make public value content more discoverable or findable.

Part of the reason that these policy proposals favour a minimal, harm-based approach to social media content regulation is a legitimate fear of overreach: the concern that a regulator could become a “ministry of truth” attempting to set standards for all media. Nobody wants this. But a smart, incentive-based approach to the systemic improvement of media by no means necessitates such a draconian approach.

Why do we need a new authority?

Above all, such an authority would provide a permanent institutional forum in which to conduct a public process of consultation and negotiation about the ethics and responsibilities of social media platforms and intermediaries. In place of volatile processes of moral panic would be an evidence-based, transparent discussion grounded in hard data about the scope and extent of social problems such as misinformation.

What would the regulator do?

Standard Setting

A forum for the creation of pan-industry standards and codes of conduct, which would incorporate civil society input and be guaranteed independence from Government. Codes of conduct are proliferating, but they are often vague about objectives and standards of self-regulation. In time there will be a need for a core, unified cross-industry code of conduct that deals with hate speech, harassment, child protection and other content-related issues together, in order to simplify procedures and provide standards that the public can understand. An authority could set out common standards for regulation.

Monitoring

Monitoring behaviour against industry standards and auditing procedures: meta-regulation of platforms against those standards.

Adjudication of complaints

An independent ‘appeals body’ above internal complaints procedures and dealing with disputes on an ombudsman model.

Sanctions

In the event of complaints being upheld, a range of sanctions from apologies to fines, as well as settlement through arbitration.

Transparency reporting on takedowns

Privatised platform censorship raises wider concerns about chilling effects and should be subject to proper independent monitoring and reporting.

Algorithmic transparency

Search and social media have been subject to criticism of bias from all sides of the political debate, including the President of the United States. This undermines trust in democratic deliberation. An independent body could help repair this through algorithmic audit, on the model suggested by Frank Pasquale. This should include assessments of whether relevance algorithms promote the findability of content that meets minimum quality requirements defined by Parliament but assessed independently.
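To make the idea of such an audit concrete, here is a minimal sketch of one narrow measure: how often accredited or ‘quality’ sources occupy the top positions in a sample of ranked feeds or results. The field names, the notion of an accredited-source list and the cut-off are illustrative assumptions, not a description of any existing audit methodology.

```python
# Hypothetical sketch of a single findability metric for an algorithmic audit.
# All names and fields here are assumptions for illustration only.

from dataclasses import dataclass


@dataclass
class RankedItem:
    source: str  # publisher or account responsible for the item
    rank: int    # 1 = top position in the feed or results page


def findability_score(feed, accredited_sources, top_n=10):
    """Share of the top-N positions occupied by accredited ('quality') sources."""
    top_items = [item for item in feed if item.rank <= top_n]
    if not top_items:
        return 0.0
    hits = sum(1 for item in top_items if item.source in accredited_sources)
    return hits / len(top_items)


# An auditor would aggregate this score over many sampled feeds, compare
# platforms against each other, and track movement over time.
sample_feed = [RankedItem("local-news.example", 1),
               RankedItem("unverified.example", 2)]
print(findability_score(sample_feed, {"local-news.example"}))  # 0.5
```

A real audit would of course need a far richer methodology and a contestable definition of ‘quality’; the point of the sketch is only that such measures can be made explicit, repeatable and independently checkable.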

Public – private quality media incubator

In time it may be necessary to implement a contestable fund to support quality media. In the short term this should be piloted, funded by a levy on UK-generated ad revenue.

Internal governance advice

Media companies have historically resolved many key problems of ethics and conflicts of interest by observing internal structural separations: separation of advertising and editorial, for example, and “Chinese walls” between news and other operations. A regulator can offer independent advice on ethical good practice.

Policy advice and public engagement

All of these assessment, research and monitoring functions will enable the body to provide hugely valuable policy advice, helping Parliament to assess the future regulatory settlement and the necessity of future adjustments to (i), (ii) and (iii) or to the wider competition and fiscal incentives. The body should have duties to report annually to Parliament and to offer policy advice to government when requested, and should be resourced adequately to do so. It should also proactively engage with media literacy, develop the notions of “due trust” and “assessability”, and potentially audit kite-marking and labelling schemes.

What powers would the authority need?

The most important new power this regulator would have would be to compel companies to disclose forms of industry data sufficient to permit the authority to perform its regulatory function. It is widely acknowledged that there is an evidence gap in policy-making around platforms and their wider social impacts, including misinformation, harms, child protection and media plurality. It is difficult for regulators and policymakers to assess the scale of public harms and public benefits associated with these platforms in order to assess the rationale for intervention. Reports provided by the platforms and initiatives such as the Facebook transparency initiative do not provide data in a format that enables meaningful comparison of companies. Forms of data could include lists of the most viewed and shared individual items online, and detailed data on moderation activity, for example along the lines outlined in the Santa Clara principles on content moderation. This would constitute a toughening of the voluntary transparency approaches that have been announced in response to recent scandals.
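As a purely hypothetical illustration of what a standardised, comparable disclosure format might contain, the sketch below defines records for the two kinds of data mentioned above (most-viewed items and moderation activity). The field names are assumptions; no platform currently reports data in this form.

```python
# Hypothetical standardised disclosure records, to allow like-for-like
# comparison across platforms. Field names are illustrative assumptions.

from dataclasses import dataclass
from datetime import date


@dataclass
class TopItemRecord:
    platform: str
    period_end: date
    item_url: str
    views: int
    shares: int


@dataclass
class ModerationRecord:
    platform: str
    period_end: date
    rule_violated: str     # e.g. "hate speech", "harassment"
    items_actioned: int
    appeals_received: int
    appeals_upheld: int


def appeal_upheld_rate(records):
    """One simple comparable statistic: how often moderation decisions are reversed on appeal."""
    received = sum(r.appeals_received for r in records)
    upheld = sum(r.appeals_upheld for r in records)
    return upheld / received if received else 0.0
```

The substance matters less than the principle: once disclosure is made in a common structure, a regulator can compute the same statistics for every company rather than relying on each platform’s own framing of its data.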

Where should this authority be housed?

This is a secondary question: there would be arguments for a new body, or for housing this within Ofcom or the ICO. What is clear is that there is a need for these research, monitoring, advice, adjudication and standard-setting functions to be housed within a single institution. This institution must have structural guarantees of independence from government, in order to protect fundamental values of freedom of expression and democracy.

To sum up: we do need a new authority for Internet regulation. But this should not be a regulator of the Internet, which will continue to be regulated by the general law. We need an authority that will feed much higher quality information into the policy process for developing that law, and we also need a regulator, working in collaboration with the larger Internet intermediaries, to establish standards and regulation dealing with the social externalities of Internet content. It makes sense for these functions to be carried out within one body.

Damian Tambini is Research Director, Department of Media and Communications, London School of Economics and Political Science

Twitter @damiantambini