Before the Christmas break, Twitter indicated how it intends to tackle two of its main challenges in 2014: trolls and the spread of false information. This article provides a brief summary of these strategies (one of which was disclosed by a leak) and comments on their likely effectiveness.
Internet trolls, thriving in the hotbed of online forums, existed long before Twitter. Confined largely to gaming websites, these early trolls were restricted in whom they could attack. With the advent of social media, however, the list of potential victims has grown exponentially. The Twitter troll is now one of the most widely feared creatures on the web.
Part of the reason for this trepidation stems from increased media coverage, which has highlighted the severity of the abuse experienced by victims. A search for articles containing the terms “Twitter” and “troll” on MailOnline – a publication that is usually a reliable barometer of all things social media – produces 331 results for 2013, an increase of almost one hundred on 2012.
Whilst it is difficult to conclude from these statistics that there has been 1) a proliferation in the number of trolls on Twitter or 2) a greater intensity in their behaviour, the mainstream media’s increased focus on the harm trolls can cause has forced Twitter’s bosses to take action. The ‘report abuse’ button, for instance, rolled out across all Twitter platforms in 2013, was seemingly a direct response to calls in the press for greater protection for victims of trolling, such as Caroline Criado-Perez.
In addition, in December 2013, Twitter temporarily introduced new blocking rules which allowed blocked users to continue to follow and interact with the accounts of those who had blocked them. The blocked user’s activity was simply made invisible to the victim, as if the offending account did not exist. Importantly, the blocked user was not notified that he/she had been blocked. This is the stuff of Kafka.
Such was the outcry on Twitter at this policy that the company’s bosses were forced to revert to the old rules, under which a user is no longer able to follow an account once blocked.
This policy was not entirely flawed and should perhaps not have been sloughed off so readily. As Twitter argued at the time, notification of a block can aggravate the perpetrator and trigger an escalation in the offensiveness of his/her behaviour. The most persistent and abusive Twitter trolls can easily create multiple accounts and resume their campaign of harassment. Alternatively, the harasser might resort to more extreme methods of abuse and intimidation, such as attacking the victim’s colleagues or family.
The instantaneous nature of Twitter inevitably leads to some ill-conceived tweets. Hidden behind a screen, a user can often feel uninhibited and might hastily post a tweet without due consideration of the accuracy of what he/she has written. Whilst a tweet can be deleted, there is currently no other option available to correct an error.
According to a report on the website The Desk published in December 2013, Twitter sources have revealed that the company is in the advanced stages of developing a sophisticated algorithm which would enable its users to edit tweets for a short period of time after they have been posted. The proposed editing feature is designed to be limited in its scope, to include, for example, correcting a typo or adding (and presumably removing) one or two words.
Interestingly, once a tweet has been edited, the changes would also appear on the feeds of users who had decided to re-tweet the original tweet.
Twitter is rightly wary of extending this editing function too far. The possibility of a large corporate entity approaching an individual to request that a popular tweet be turned into an advertising slogan is clear. To protect against this, Twitter has suggested that the editing feature would be made available only for a short period after the original post and that the algorithm would (somehow) not allow a departure from the overall intention of the original tweet.
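To make the reported constraints concrete, the checks described above – a short editing window and only minor changes to the wording – might be sketched along the following lines. This is purely illustrative: the function names, the five-minute window and the character cap are our assumptions, and Twitter has published no details of its actual algorithm.

```python
from datetime import datetime, timedelta

# Hypothetical parameters for illustration only.
EDIT_WINDOW = timedelta(minutes=5)   # assumed "short period" after posting
MAX_CHANGED_CHARS = 20               # assumed cap, roughly "one or two words"

def levenshtein(a: str, b: str) -> int:
    """Edit distance: minimum single-character insertions,
    deletions and substitutions needed to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,       # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def edit_allowed(posted_at: datetime, original: str,
                 edited: str, now: datetime) -> bool:
    # Rule 1: edits permitted only within a short window after posting.
    if now - posted_at > EDIT_WINDOW:
        return False
    # Rule 2: only small corrections (a typo, a word or two),
    # not a wholesale rewrite of the tweet's meaning.
    return levenshtein(original, edited) <= MAX_CHANGED_CHARS
```

On this sketch, correcting a typo a minute after posting would pass both checks, whereas replacing the tweet with an advertising slogan, or any edit attempted after the window closes, would be refused. A real system would need something far subtler than edit distance to police the “overall intention” of a tweet, which is presumably why the leaked reports describe the algorithm as sophisticated.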
The dissemination of false information, particularly on a forum where it has the potential to go viral, should be discouraged, and Twitter’s attempt to allow its users to edit what they have written in order to improve accuracy is to be welcomed.
It is difficult to comment with any certainty on the potential pitfalls of this device until it is fully implemented. From a defamation perspective at least, the proposal could have significant implications. As practitioners in this field will be well aware, the meaning to be attributed to particular words in a publication, including tweets, can often be highly complex. The insertion of a couple of words – which could include the substitution of one name for another – could turn what was originally an anodyne tweet into something much more offensive. 140 characters is a restricted space in which to convey a message, and every word will naturally be scrutinised closely.
Further, a user could also become unwittingly liable for a re-tweet. The re-tweeter might have agreed that “X is a fraudster”, re-tweeted this message and logged out of his/her account. By the time the re-tweeter logs back in, the original post might have been amended to read “Y is a fraudster” – now a defamatory statement about an entirely different person. This amended post would then appear on the re-tweeter’s own feed, and the damage might already have been done.
More information regarding the exact nature of the feature is clearly required, but sources suggested in December that the project should be finished in a matter of weeks or months at most. We look forward to seeing these changes.
Rhory Robertson is a Partner and Tom Double a Trainee Solicitor working in the Collyer Bristow Cyber Investigations Unit.