The International Forum for Responsible Media Blog

Preventing social media harm: an idea – Simon Carne

There are widespread calls to regulate social media. Hardly a day goes by without some new outrage which eclipses what we have seen already. One of the great problems for anyone wishing to put a stop to the abuse is that social media users can easily make themselves anonymous.

If they are ejected from a platform, they can re-enrol under a new identity. All it takes to open an account on a platform is an email address. And all it takes to get an email address is … absolutely nothing at all.   

So let me hypothesise a system in which platforms did not allow people to open an account unless they enrolled using an email address which was registered with a (new) social media regulator. The regulator would identify registered users by reference to a unique, official identifier which cannot (normally) be faked. In the UK, for example, this might be the user’s National Insurance (NI) number. Other identifiers might be used instead, but for convenience I will adopt the NI number as the identifier throughout this article.

The social media platforms themselves would not be given access to the users’ NI number. The platforms would only need to verify that the user’s email address was registered on the regulator’s database. Only the regulator would need to know the NI number used to register the email.
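Purely to illustrate this division of knowledge, here is a minimal sketch in Python. Everything in it – the Regulator and Platform classes and their method names – is a hypothetical stand-in, not a description of any existing system. The point is simply that the platform’s query to the regulator is answered yes or no, and the NI number never leaves the regulator.

```python
# A minimal sketch of the proposed split of knowledge. All names here
# (Regulator, Platform, is_registered, etc.) are illustrative only.

class Regulator:
    """Holds the only copy of the email -> NI number mapping."""

    def __init__(self) -> None:
        self._email_to_ni: dict[str, str] = {}

    def register_email(self, email: str, ni_number: str) -> None:
        # A user registers an email address against their NI number.
        self._email_to_ni[email] = ni_number

    def is_registered(self, email: str) -> bool:
        # The only question a platform may ask: yes or no.
        # The NI number itself is never disclosed.
        return email in self._email_to_ni


class Platform:
    """A platform that only accepts registered email addresses."""

    def __init__(self, regulator: Regulator) -> None:
        self._regulator = regulator
        self._accounts: dict[str, str] = {}  # pseudonym -> email

    def open_account(self, pseudonym: str, email: str) -> bool:
        if not self._regulator.is_registered(email):
            return False  # unregistered addresses cannot enrol
        self._accounts[pseudonym] = email
        return True
```

The design choice being illustrated is that the platform’s check is a simple boolean: the regulator answers “registered” or “not registered” and nothing more.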

Users could still enrol on the platform under a pseudonym. But the platforms would be able to issue warnings to mis-users – even anonymous ones – and throw them off the platform in the case of serious or serial misconduct, confident in the knowledge that, once ejected from the platform, the abuser could not simply re-register under a different email address. This ability would create the incentive and the means for social media platforms to enforce their own codes of conduct.

In more extreme cases, where abuse was criminal, the police and other relevant authorities would be able to identify the source by asking the regulator to identify the owner of the email address.

In order to kick off such a system in an orderly fashion, platforms might only require new users to have a registered email address. Existing users could be allowed to remain without their email address being verified, but any misbehaviour would trigger the need for the email address to be verified against the register or else the user would be thrown off the platform. Over time, platforms might extend the requirement to all users.

People do, of course, change their email addresses from time to time. The regulator’s database would need to allow for that. The regulator would not only store registered email addresses and the associated NI numbers, it would also be able to store a note that a platform had ejected that person. So, even if a user registered a new email address (whether for legitimate reasons or not) and then reapplied to a platform, the regulator would not only notify the platform whether the address was registered, but also whether that platform had previously registered any notes against that user.
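To make the record-keeping concrete, here is an equally hypothetical sketch of how the regulator’s records might be keyed. Again, the names are invented for illustration; the essential point is that an ejection note attaches to the NI number, not to the email address, so a freshly registered address carries the old note with it.

```python
from collections import defaultdict


class RegulatorRegistry:
    """Hypothetical regulator records: emails map to an NI number, and
    ejection notes are keyed to the NI number rather than the email."""

    def __init__(self) -> None:
        self._email_to_ni: dict[str, str] = {}
        self._ejections: defaultdict[str, set[str]] = defaultdict(set)

    def register_email(self, email: str, ni_number: str) -> None:
        self._email_to_ni[email] = ni_number

    def record_ejection(self, email: str, platform: str) -> None:
        # The platform reports an ejection; the note sticks to the NI number.
        ni = self._email_to_ni.get(email)
        if ni is not None:
            self._ejections[ni].add(platform)

    def check_email(self, email: str, platform: str) -> tuple[bool, bool]:
        # Returns (registered?, prior note from this platform?) and nothing more.
        ni = self._email_to_ni.get(email)
        if ni is None:
            return (False, False)
        return (True, platform in self._ejections[ni])


# Example: a new email address does not shed a previous ejection.
registry = RegulatorRegistry()
registry.register_email("old@example.com", "QQ123456C")
registry.record_ejection("old@example.com", "SomePlatform")
registry.register_email("new@example.com", "QQ123456C")  # same person, new address
print(registry.check_email("new@example.com", "SomePlatform"))  # (True, True)
```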

To accommodate children, there could be a facility to register an email address against a parent’s NI number. This would be subject to the parent’s permission and would afford an opportunity for the parent to verify the child’s date of birth (perhaps with a facility for parents to restrict which platforms the child could use). With the date of birth stored for those who register under a parent’s authority, the system would also know when a child had become an adult.
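As a final illustration – again with invented names, and assuming 18 as the age of majority – a child’s registration might be modelled as the parent’s NI number plus the verified date of birth and any platform restrictions the parent sets.

```python
from datetime import date


class ParentalRegistration:
    """Hypothetical records for children registered under a parent's NI number."""

    def __init__(self) -> None:
        # child's email -> (parent's NI number, date of birth, permitted platforms)
        self._children: dict[str, tuple[str, date, set[str]]] = {}

    def register_child(self, email: str, parent_ni: str, dob: date,
                       permitted_platforms: set[str]) -> None:
        self._children[email] = (parent_ni, dob, permitted_platforms)

    def may_use(self, email: str, platform: str) -> bool:
        _, _, permitted = self._children[email]
        return platform in permitted

    def is_now_adult(self, email: str, today: date) -> bool:
        # The stored date of birth tells the system when the child turns 18.
        _, dob, _ = self._children[email]
        had_birthday = (today.month, today.day) >= (dob.month, dob.day)
        return today.year - dob.year - (0 if had_birthday else 1) >= 18
```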

There may also need to be an arrangement to facilitate those who have a legitimate need for anonymity and who fear that the regulator may not be able to protect their identity from being revealed. This could be done by registering through, for example, a solicitor.

The internet is, of course, international. And, although a platform can identify a user’s location, those who wish to misbehave might set up a new account whilst they are overseas, in a territory that doesn’t have a regulator, only to return to the UK and misbehave under a cloak of anonymity. This could be addressed – or at least mitigated – by the platform ejecting the abusive user, knowing that they would be unable to re-enrol until they went abroad again to a country which doesn’t operate a regulated database.

In this way, abuse by users from a regulated country could, in time, be limited to occasions when the user was overseas. If a system such as this one were found to be successful, and an equivalent arrangement was widely adopted in different countries, there would be fewer and fewer destinations in which an abuser would be able to enrol on a platform without first having to produce the required identifier for that country.

I have not (as yet) seen this idea described anywhere else. I publish it here at a time when there is a growing sense that social media platforms should be placed under a statutory duty of care – an idea being developed by William Perrin and Lorna Woods through the Carnegie Trust and taken up by the Select Committees for Communications, Science and Technology, and Digital, Culture, Media and Sport. The ideas in this article would sit alongside – or form part of – such a duty of care, which I fully support.

This post originally appeared on the Simon Carne Consulting website and is reproduced with permission and thanks.

3 Comments

  1. Manzar

    Hi Simon, I must say, it’s a well-written and well-researched post. Just want to add one thing for the readers. As human beings, we need to understand that self-acceptance and self-esteem are crucial for our good mental health. Comparing ourselves to others, be it online or offline, can undermine these things, giving rise to feelings of inadequacy and thus leading us to feel sad and empty. Anything that affects our self-acceptance and self-esteem should be handled with an iron fist, whether it’s a toxic relationship or cherished social media platforms such as Facebook and Twitter!

  2. Simon Carne

    Ignore that reply. I posted it on the wrong page. It was supposed to be placed here: https://inforrm.org/2021/05/01/should-social-media-regulation-be-aiming-at-a-different-goal-simon-carne/#comment-172149
