The International Forum for Responsible Media Blog

Should social media regulation be aiming at a different goal? – Simon Carne

This weekend, the professional football community in the UK will boycott social media. The “gesture”, initiated by the anti-racism charity Kick It Out and others, is a call to “those with power [within social media] … to do more and to act faster … to make their platforms a hostile environment for trolls”. At the time of writing, British rugby and British cricket have announced that they will join the boycott.

I wish them all luck. Something is needed to deliver major change.

A good place to start might be improved clarity around the nature of social media and the responsibility its providers take for the words and images that appear on their forums. At the moment, much of the thinking is confused and confusing.

Platform v publisher

When Twitter banned Donald Trump following the Capitol riots in January, many commentators were quick to assert that the media giant had implicitly abandoned its claim that it is not a “publisher”. Amol Rajan, Media Editor at the BBC, was one of the first to ride this particular bandwagon in his Newsnight piece on 11 January.

Rajan was not alone in trying to frame the debate in terms of publisher or platform. An online search brings up numerous examples of the debate (for example, here, here and here, to list just three).

The idea being expressed in these two words is that a publisher is someone who commissions the creation and dissemination of a document and, in doing so, can exercise total control over the content. A platform, under this theory, is merely a channel through which contributors can say what they want, without any control by the channel’s owner or provider.

The first clue that these two words do not adequately define the whole space is that a publisher is an entity (a person or a firm) with decision-making capabilities, whereas a platform is an inanimate thing – a thing which has an owner, and it is the owner who possesses the decision-making capabilities. Before launching a platform, the owner has to decide on a set of rules. In Twitter’s case, the most obvious rule was 140 characters (now 280). There are plenty of others: if there weren’t, Twitter, Facebook and the rest would be indistinguishable from each other. So the material that appears on a platform is not – and never has been – completely at the mercy of its users.

Judgemental rules

Some of the platform rules can be enforced electronically: for example, it’s simply not possible to tweet more than the permitted number of characters. Others, such as the rule against hate speech, are judgemental and require after-the-event enforcement. But that may be only a temporary position. It is not impossible to envisage artificial intelligence (AI) reaching a level of speed and effectiveness that would allow each post to be checked against the platform’s rules before the post was released to the watching world.
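To make the distinction concrete, here is a minimal sketch (in Python) of how a pre-publication check might separate the mechanical rules from the judgemental ones. Every name in it is hypothetical, and a trivial word list stands in for the AI classifier; it illustrates the shape of the idea, not any platform’s actual system.

```python
# A minimal sketch of the two kinds of rule described above: mechanical
# checks that can be enforced electronically and instantly, and
# judgemental checks that today need after-the-event review but could,
# in principle, be automated. All names (check_post,
# is_probably_hate_speech, etc.) are hypothetical illustrations.

MAX_CHARS = 280  # a mechanical rule, like Twitter's character limit


def is_probably_hate_speech(text: str) -> float:
    """Stand-in for an AI classifier returning a risk score in [0, 1].

    A real system would call a trained model; here a placeholder word
    list fakes one, purely to keep the sketch runnable.
    """
    flagged_terms = {"exampleslur1", "exampleslur2"}  # placeholders only
    return 1.0 if set(text.lower().split()) & flagged_terms else 0.0


def check_post(text: str, risk_threshold: float = 0.9) -> tuple[bool, str]:
    """Apply the mechanical rules first, then the judgemental ones."""
    if len(text) > MAX_CHARS:
        return False, "rejected: over the character limit"
    if is_probably_hate_speech(text) >= risk_threshold:
        return False, "held back: possible hate speech"
    return True, "released"


if __name__ == "__main__":
    print(check_post("A perfectly ordinary post."))
    print(check_post("x" * 300))
```

The point of the ordering is the one made above: the character limit needs no judgement at all, while the hate-speech check is only as good as the classifier behind it – which is why, for now, that second step still ends in human review rather than automatic release.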

Once such a capability existed, one might hope that the rules would block posts which violate a set of norms that most jurisdictions enforce, such as a ban on racist comment, incitement to violence and so on.

Which brings us to the crucial bit. If platform owners wish to be shielded from liability for what appears on their platforms, they probably need to be seen to have given up all control over the rules which govern the content. Maybe to a regulator. Or maybe to a recognised rule-setter. Facebook already has an independent Oversight Board to “answer some of the most difficult questions around freedom of expression online: what to take down, what to leave up and why”. The Oversight Board’s decisions are said to be binding on Facebook, but it is still early days. We wait to see just how independent the Oversight Board turns out to be.

The technology for implementing the rules is not there yet. But it is not far off. Facebook already uses AI in an attempt to filter out unacceptable content. It is not always very good at it, as was seen recently when it closed down the page of the French town of Ville de Bitche. The mayor had to appeal for human intervention to get the page reinstated.

Whilst we wait for moderation by AI to be deliverable instantly and effectively, we have to make do with humans carrying out the task. But we do not need the AI to arrive in order to see that, if the decision on what content to release onto a site is entirely driven by rules – especially rules that are not left to the discretion of the platform owner – then the mere act of banning a post (or, for the most egregious breaches, banning the person behind it) does not turn a platform into a publication or the platform owner into a publisher.

Epilogue

There are those who argue that no individual should ever be banned from social media. Social media (the argument goes) is an essential mechanism of communication which must be available to all comers. This argument has nothing to do with the platform v publisher debate. It is an argument that turns on the notion of a common carrier. It is an argument which I shall return to another day.

This post originally appeared on Simon Carne’s blog and is reproduced with permission and thanks.

3 Comments

  1. Christopher Whitmey

Rather than ‘publisher v. platform’, is not the nub of the matter ‘anonymous v. named’ contributors? Freedom of speech is to be highly valued – but so, too, must personal accountability for what we say.

  2. Christopher Whitmey

    Thanks for the link. How can we try and change it to what you suggest?
