Despite the concerns and criticism surrounding the acquisition of social media platform Twitter by the billionaire Elon Musk, on April 26 Twitter’s board agreed to a $44bn takeover, sending shockwaves across the Internet. The deal is currently ‘temporarily on hold’, as Musk announced (via Twitter) on May 13, pending clarification of the true number of spam accounts on the platform, but Musk insists he is committed to the purchase.
Musk, a self-declared “free speech absolutist”, stated in his “victory” tweet after the initial announcement of the agreement that “free speech is the bedrock of a functioning democracy, and Twitter is the digital town square where matters vital to the future of humanity are debated”, calling for “significant improvements” in the platform’s services, including its content moderation and curation practices, in order to “unlock its tremendous potential”.
In the midst of these developments, we should stop and reflect on the implications that this takeover might have for today’s online media environment. As stated by Professor Victor Pickard, Musk’s statements leading up to the buy-out contained troubling clues about his hopes and ideas for Twitter. Beyond some of the arguably simpler yet unconventional proposals, such as advocating for an edit button, Musk has implied that Twitter should amend or even abandon its content moderation policies and follow his preferred version of free speech. While these may at first appear to be the empty statements of “a billionaire splurging on a new hobby”, any significant changes in these practices would have substantial implications for freedom of expression, media freedom and pluralism online.
Let us then take a step back and look at Twitter’s recent developments in this area. In the past year, Twitter has been actively working to establish itself as a defender of “the open internet”, not only through advocacy campaigns and stakeholder coalitions, but also through practical improvements in its content moderation and curation practices, as well as in its transparency and accountability frameworks. Ranking Digital Rights (RDR), an independent research think tank, has benchmarked all big tech companies, including US-based social media platforms, against standards that set high but achievable goals for corporate transparency and rights-respecting policies. In their 2020 Ranking Index, Twitter actually featured as “the best of the worst”, scoring higher than all other platforms they evaluated (see Fig. 1), because, in comparison to its Silicon Valley counterparts, the company was more transparent about actions it took to remove content and suspend accounts for violations of its platform rules, and it also significantly improved its transparency about ad content and targeting rules. But, admittedly, “the bar was dismally low”. Moreover, these efforts came after sustained public pressure from NGOs, researchers and policymakers who – for years – have been calling for significant changes and improvements in the content moderation practices of social media companies.
Fig. 1. 2020 RDR Index Ranking for Digital Platforms (source: https://rankingdigitalrights.org/index2020/)
Twitter, though, seemed at least to have tried to respond to such public calls. While working to improve its transparency and accountability systems, the company also started to distance itself from the other social media platforms through the launch of an Open Internet Alliance, whose motto #MoreThanFour is blatantly against the digital dominance of a handful of big tech giants. This alliance supports a harmonised regulatory framework that protects consumer choice and privacy, promotes a transparent, open source and decentralised internet, and fosters non-binary content regulation. By non-binary content regulation, the alliance means moving beyond a binary model of keeping or deleting content to also address issues around content discovery and prioritisation practices, thus going, to a certain extent, against the free speech absolutist claims of Musk. Indeed, as argued by Mazzoli and Tambini in the Council of Europe study “Prioritisation uncovered” and the subsequent Council of Europe guidelines in this area, alongside negative behavioural duties, positive obligations are also needed to ensure the effective exercise of freedom of expression and a framework aimed at safeguarding plurality and diversity of media types and content.
Whether the claims of this new alliance are genuine, or whether they are just part of its advocacy and/or corporate responsibility strategies, remains to be seen, but the alliance does show that there are smaller and alternative platforms striving to differentiate themselves and their services from the dominant models of companies like Meta, Google and Amazon. At the same time, it is certainly not a coincidence that this shift in policy discourses and positions came at a time when European policymakers are attempting to steer the development of the digital environment through a new wave of regulatory initiatives. The recently agreed Digital Services Act (DSA), together with its twin proposal, the Digital Markets Act (DMA), and the upcoming European Media Freedom Act, possibly represent the clearest examples of how European institutions are striving to establish a new social contract by shaping a new settlement between governments, tech giants, private actors and citizens. With these regulatory efforts, the European institutions have, to a certain extent, come to treat social media and online intermediary services as forms of quasi-public utilities, by introducing enhanced transparency, due process requirements and human-rights assessment frameworks.
Thus, Musk’s aggressive takeover of Twitter and his worrisome plans for the future of this platform need to be contextualised in this broader framework: a battle between private and public interests in which governments, private actors and companies strive to gain control over the ways in which information and content circulate online. To a certain extent, as emphasised by Professor Pollicino and law experts De Gregorio and Dunn, the current organisational structure of online platforms, based on private ownership and driven by market logics, seems to be at odds with the increasingly public and societal role of online platforms, especially social media. Numerous academic experts and human rights advocates have indeed argued that until we radically democratise these platforms and treat them as the essential public infrastructure they are – that is, as shared resources that should not be governed by market forces alone – “Musk, Trump or some other petulant billionaire can come along and make them their playthings”. In other words, one of the main risks is that we leave the power of governing online speech and media to private individuals or a handful of private companies, which have vertically and horizontally integrated along the media value chain and gained control over key gateways to content and information.
How can we avoid this risk and foster this much-needed change of paradigm? Ideas for policy reforms have been flourishing in recent years, especially in academic and civil society circles. Some experts have called for a re-imagination of the platform economy as a whole, such as by reforming existing platform governance models or re-shaping platform-driven digital markets. Others have focused on increasing the accountability and responsibility of these companies through a duty of care to safeguard privacy, or on ensuring media pluralism and diversity online by unbundling hosting from content moderation/curation activities and introducing positive safeguards to improve content prioritisation practices. Within this expanding body of research, I would draw attention to the latter point, which I deem particularly relevant in view of the changes that we might see in Twitter’s content moderation and curation practices.
As I have previously argued, content moderation and content curation measures are two sides of the same coin, and they sit at the heart of digital intermediary services. The core purpose of digital intermediaries like social media is indeed to moderate, curate, select, and filter what content can be found on their services. Soft behavioural nudges behind those measures can channel users’ choices in one direction or another, through processes that law professor Karen Yeung describes as “subtle, unobtrusive yet extraordinarily powerful”. The newly introduced DSA provisions – which regulate content moderation practices through enhanced transparency, content removal requirements, and mandatory risk assessments of algorithms to fight harmful content and disinformation – are pivotal, but they are not the only solution. I would argue that a positive rights philosophy is also needed to complement these rules.
In particular, the shortcomings of the existing frameworks are especially evident when it comes to the moderation and curation of news content. As highlighted by the Reuters Institute, since 2019 social media and search engines have overtaken television and traditional media in terms of reach for “first contact with news”, as these platforms have become integral to how people find and access news all over the world. Negative behavioural duties to increase their transparency and accountability are therefore key, but they do not address the fact that content prioritisation processes are often governed by commercial and market logics. These negative duties should therefore come hand-in-hand with positive safeguards to introduce and promote public interest objectives in the governance of social media platforms.
To achieve this goal in the context of online news, some civil society and news organisations around the world have advanced both technical standards, such as the Journalism Trust Initiative (JTI), as well as guidelines and indicators – such as News Guard and its ‘trust ratings’ or the Trust Project with its ‘trust indicators’ – to set a higher bar of professional norms for media outlets and define what “public interest news providers” are. These standards and initiatives could also be used to improve content moderation and curation practices on social media. For instance, JTI is calling for these standards to be factored into the algorithms of search engines and social media platforms, in order to recommend and make more prominent “reliable and trustworthy sources of information online for the benefits of societies and democracy”.
However, as European policymakers develop their new regulatory framework for digital services and media freedom, it is important to reflect on how regulation could support – or negatively impact – these developments. Indeed, while prioritisation algorithms have the potential to promote trusted news sources, they can likewise be exploited for soft forms of censorship or propaganda, with implications for democracy and human rights. Thus, as highlighted by the European Digital Media Observatory, to limit potential biases and undue discrimination, transparent and procedurally fair processes will be crucial to ensure that any new positive safeguards do not have unwanted consequences for freedom of expression and media freedom.
Even though it remains to be seen whether Musk would sign up to these emerging public interest principles and to a non-binary content regulation approach, regulating content online will certainly be a far more challenging task than tweeting his eccentric ideas and free speech absolutist positions. As we are addressing shared issues of flawed platform governance systems, industry and policy practices would benefit from a more coordinated approach and more inclusive participation of civil society organisations. Twitter as a company will have to find a balanced approach to content moderation and curation: one that is not solely driven by the commercial and private aspirations of its main shareholders, but one that takes into account the responsibilities that come with being a “digital town square”.
Eleonora Maria Mazzoli is a PhD Researcher and research assistant at the “Data, Networks and Society” Programme of the Media and Communication Department of the London School of Economics and Political Science (LSE). She also works as external expert and consultant on media innovation and ICT-related projects.
This post originally appeared on the LSE Media Policy Projects blog and is reproduced with permission and thanks.