The International Forum for Responsible Media Blog

Online Anonymity: Balancing Freedoms – Stephen Kinsella

The murder of MP David Amess has reminded us all of the threats and abuse that those in public life face, particularly online. This has led to renewed calls to ban anonymous online social media accounts. In a recent piece, Harry Dyer argues that this is not the answer, and that anonymity can in fact be a form of protection for marginalised communities.

There was so much I agreed with in Harry Dyer’s article. For instance: there is no clear causal link between online abuse and the tragic death of David Amess; it cannot be said that banning anonymous accounts would end all abuse; and removing anonymity could have serious consequences for marginalised groups. Indeed, the ability to be anonymous online has clearly been a lifeline for many.

It is also true that many of those who disseminate the most egregious disinformation and throw the most vitriolic abuse online do so in their own names. There is a problem with online discourse that goes beyond the debate over anonymity.

Nevertheless, I would suggest that we are being presented with a false choice. Just because some abusers act in their own names does not mean we should not try to act against those who hide behind a fake name or avatar. And while we should obviously strive to preserve the freedom for users to express themselves without fear of identification and retaliation, that does not mean that everyone should be left exposed to the toxic, racist and misogynistic accounts that are responsible for so much online harm.

At Clean Up The Internet we campaign to improve the level of discourse online. We have commissioned research, available on our site, which shows that anonymous accounts are disproportionately responsible for the circulation and amplification of disinformation, for example the Covid/5G conspiracy theory.

We have also addressed the question of abuse, in particular racist abuse, and here it all gets rather murky. We know from speaking to anti-racist organisations and to others who track down racist trolls that a large number of the accounts responsible are anonymous, or certainly not identifiable. Yet, in response to the abuse thrown at the England footballers following the Euros final, Twitter came out publicly with a claim that “99% of the accounts suspended were not anonymous”. And that claim was uncritically repeated by a number of journalists and commentators who frankly should have known better.

As an ex-lawyer, I remember that whenever a client wanted to push a rather ambitious argument we would ask ourselves whether it would “pass the smile test”. A claim that 99% of the Twitter accounts responsible for racist abuse were not anonymous must surely fail that test with anyone who has been on that platform for more than a few minutes. The only way to make sense of it would be that Twitter is using its own idiosyncratic definition of “anonymity” that would not be recognised by the average citizen.

As you will see from the blog page on our site, I wrote to Twitter in August asking them to explain how they got to 99%. At the request of Kick It Out, I copied them into the letter. After all, I reasoned that Twitter might well ignore us but would at least have the courtesy to reply to the leading organisation working to eradicate racism from football. When no reply was forthcoming, a British MP asked to be copied into the follow-up; surely a company doing business here would want to respond to the concerns of an elected representative? Weeks later, none of us has had even an acknowledgement.

That we were ignored did not really come as a great surprise. A few nights ago I watched a Channel 4 documentary presented by the ex-footballer Jermaine Jenas. One aspect that stood out was that when he and the police tried to pursue anonymous accounts responsible for extreme racist abuse directed at him, Twitter would not cooperate to provide details of the perpetrators, or even act to take down many of the offending posts.

So what’s the solution? We agree that we don’t want to ban anonymity. But how about if we turned this debate on its head? We could instead ask the platforms to allow all of us who want to be verified to do so; to place a mark against our account (perhaps a tick) showing all other users that we are who we say we are; and to give us the ability to decline to interact with, or receive replies from, accounts that are not verified. This three-point solution would not be difficult to implement.

At a stroke that could have a major impact. Footballers, politicians, celebrities and the rest of us who want to communicate with a wide audience could do so, but could choose only to see the replies that come from verified accounts. No longer would we first have to endure the abuse and then individually mute or block accounts, only to see each swiftly succeeded by another account controlled by the same person.

Some will argue that verification has its own challenges, and indeed it does. But while we know that the vast majority of users would be happy to verify, we must stress that under our proposal nobody would be compelled to verify if they did not want to do so. Moreover, there are good, trusted third-party tools out there that can provide verification without us having to hand over more data to the platforms, some of which cannot be trusted not to exploit it to target us with ads, or worse. It’s not our job to promote any particular providers, but those interested could look at OneID or Yoti.

If they wanted, the platforms could introduce the three-point solution we advocate within a very short period, and could also agree to work with reputable third-party verifiers. Why would they not? Well, if verification becomes the norm, that might reveal how far user numbers are inflated. A reduction in online abuse and anger might reduce “engagement”. And independent verification could eat into their treasure troves of data.

And that is probably why these changes, and other reasonable protections demanded by citizens, won’t be brought in voluntarily. There is going to be a need for legislation.  We are campaigning with many others to get appropriate language inserted into the Online Safety Bill.  The Bill is likely to be the best chance we will have for some years to effect real improvements in the online space, and (to borrow a phrase much loved by one of the worst offenders), “drain the swamp”.

Of course, that will still leave those who disseminate hatred in their own names. But even they are amplified and given apparent support by armies of fake accounts, bots and others whom they can mobilise to pile on to those they attack. As Mr Dyer pointed out, users “take their cues” from other posts, many of which come from accounts that are not identifiable but which they believe must reflect wider public opinion. Our proposal would do much to reduce the reach and impact of such divisive messages. We might even suggest that the platforms show only a “follower count” that corresponds to genuine verified accounts, which could be very revealing in relation to a number of high-profile shock jocks.

There is much to do. But we shouldn’t be put off starting just because “it’s complicated”, and we must try to identify and mitigate any possible unintended consequences. Ultimately, if you have a right to speak, don’t I at least have a right not to listen to you?

3 Comments

  1. Christopher Whitmey

    Well argued. Best wishes to get some form of remedy into the Online Safety Bill.

    • Stephen Kinsella

Thanks Christopher. As you will see from our site, we have broad cross-party support, but all encouragement is welcome. Do point others who are interested in our direction.

  2. Chris

Your argument ignores the point you initially made, that some people need to be anonymous and unverified; given current world events, that is an ever-increasing group. These marginalised groups, whose anonymity can be a “lifeline”, would be unable to be heard by the people they most need to hear them.

© 2021 Inforrm's Blog
