Protests that the government's proposals for a duty of care on social media are a threat to free speech appear to miss the point. European standards on freedom of expression give states ample justification for regulating speech that is harmful or undermines national security, provided the regulations are proportionate, prescribed by law and pursue a legitimate aim.

The proposals for a social media regulator and a ‘duty of care’ have to be seen as part of a much bigger picture: a changing global system of sovereignty and deeper questions about human liberty in the age of robots. Whilst the free speech groups are right to criticise the proposed regime’s vague standards, it is these larger, complex questions that determine whether such a fundamental shift of regime is proportionate and legitimate.

There is no denying that these are radical proposals: for the first two decades of the mass-market Internet, liberal democratic states in Europe and North America took a “hands-off” approach to the regulation of Internet content. The view was that regulating the Internet, the golden goose of innovation and productivity, could only undermine free speech and growth.

The reasons that consensus broke down, and radical proposals for internet regulation are now being considered, are both cynical and high-minded. On the one hand, recent years have seen a concerted campaign by some newspapers to impose regulation on the platforms that have taken their ad revenue, and they have been effective in enlisting politicians to their cause. The Telegraph has campaigned openly and persistently for a social media ‘duty of care’ for over a year, with editorials and a branded campaign. Its claims have coincided with growing concern about monopoly power in the code layer of the internet.

At the same time, politicians are aware of their duty to protect national security and democracy. The political shocks of the past three years have led to a general acceptance that we have entered a new era of international relations: in an age of permanent cyber and information warfare, states are trying to assert control over online spaces. In part, the White Paper is a defensive move against Russian and other interference in our domestic politics.

Internet regulation 1.0 consisted of absolving tech platforms of any responsibility until they were notified that they were hosting illegal content, and of a lax attitude even when they were aware of it. Last year Yvette Cooper berated the poor Twitter representative hauled in front of the Home Affairs Committee because the platform had failed to remove illegally hateful tweets against MPs, despite being notified of them by the same select committee a year previously.

The point was made repeatedly in this committee and others that hate and misogyny online are a free speech issue not because they constitute speech worthy of protection but because they effectively silence women and minorities, forcing them to quit social media or moderate their messages. States also have a duty to protect freedom of speech by creating safe spaces where real deliberation based on truth and respect can take place.

By proposing a “duty of care” and a dedicated regulator for online harms, the government is offering an entirely new framework. Under Internet regulation 2.0, platforms will be held to account and fined if they fail to protect users from a long list of online harms.

This is not without its dangers, so the free speech groups are right to raise concerns about vague definitions. The proposed social media regulator could be open to capture and control by the state and other interest groups, and because such a system pushes control down to the platforms, it obliges them to act as proto-censors. It will not only reinforce the monopoly position of Facebook and others; it might also induce them to take a risk-averse approach, saving money by employing armies of robo-censors with a scattergun approach to speech removal. As we move into the era of ubiquitous artificial intelligence, we need to consider carefully all societal frameworks that enable automated processes to take decisions that affect our lives, and creating automated censorship has to be one of the most sensitive and far-reaching of those decisions. If robots did ever take over from humans, developing the ability to censor our speech would be a first step.

The challenge in all of this is to strike a balance and to ensure that both the design of the scheme and its implementation are conducted under the glare of transparency and sunlight. We do need standards to repair our shouty and hateful online spaces, but those standards should be created, and as far as possible implemented, by humans: by citizens and civil society, not by the government, not by Google and not by Facebook. To the extent that social media platforms create these standards, users should be able to choose between different ethical standards through a genuine ability to switch platforms, enabled by data portability. And whilst we do need protection from misleading and mischievous messages from our enemies abroad, Article 19 of the UN’s International Covenant on Civil and Political Rights gives us the right to receive ideas from anywhere in the world “regardless of frontiers”. Designing a new settlement for the responsibility of internet intermediaries is going to be a complex and difficult task, but it is possible.

Damian Tambini is an Associate Professor in the Department of Media and Communications at the LSE.