On 7 January 2021, Facebook suspended the account of Donald Trump, President of the United States, for an indefinite period. The long-awaited decision of the Facebook Oversight Board on the suspension of Donald Trump’s account resembles the judgment of Solomon: it divided the question in two, upheld the first part, and sent the second part back to Facebook.

But at the same time, the decision fulfilled the Board’s original purpose: to deliver precedent-setting judgments and to define principles and guidelines for content moderation.

The decision to suspend Trump’s access to post content on Facebook was found justified, but the fact that the suspension was imposed for an indefinite period, without any criteria for whether and when the account would be reinstated, was found to violate Facebook’s own terms. Those terms, the Community Standards, provide either for a definite-period suspension or for ultimate exclusion from the platform. Facebook was ordered to re-examine its decision and to determine the appropriate penalty in compliance with its own standards. To guide this process, the Board provided Facebook with policy recommendations offering guidelines to be considered. While the core decision of the Board is binding, the policy recommendations are not, although Facebook is obliged to respond to them according to the Charter of the Board (Article 3, Section 4, and Article 4).

As a result, Facebook will have to take the decision – and bear the responsibility – whether to expel Donald Trump from Facebook and Instagram for good or to impose a definite-period ban on him, following the guidelines recommended by the Board, within six months of the date of the Board’s decision.

Somewhat paradoxically, the Board added that in the case of a definite-period suspension, Facebook should assess, before the suspension ends, whether the risk of significant harm has receded. If that is not the case, and the influential user still poses a serious risk of inciting imminent violence, discrimination or other lawless action, another time-limited suspension should be imposed.

It is hard to see how this differs significantly from an indefinite suspension, save perhaps for a promise of regular review of whether the reasons for suspension still exist. The Board did not define a maximum number of times such a limited-period suspension may be imposed (which does not prevent Facebook from defining a limit). Nor did it name the factors by which Facebook should assess whether the user still poses a serious risk while that user is suspended and unable to use Facebook. “Influential users”, as the Board consistently calls them, can be assumed to have other public activity, whether on another social media platform or in traditional media. It is not explained how this principle can be applied to non-influential users, whose accounts can also be suspended by Facebook.

Interestingly, the minority of the Board would have recommended that suspended users acknowledge their wrongdoing and commit to observing the rules in the future. This moralising expectation might actually be beyond what a company like Facebook can demand. By accepting the Terms of Service, users commit to observing the rules anyway, and it is not clear what value such a demonstrative declaration would add. Unless, of course, the covert goal is to delay, or make impossible, the reinstatement of a person like Trump on the platform, since expecting an apology from Trump (a “withdrawal of praise for those involved in the riots” and a commitment to observe the rules in the future amounts to no less) is nothing short of naivety.

The decision’s main arguments

The reasons for finding the suspension justified are quite clear: at the time, Trump’s posts represented a clear and immediate risk of harm, a danger to concrete rights of individuals, as his words expressed support for the rioters at the Capitol. On 6 and 7 January 2021, a violent attack was carried out against the Capitol; a mob entered the building by force, and five lives were lost in the incident. Trump posted a video on Facebook and Instagram and added words which mainly sympathised with the attackers, even though a few words were directed at calming them. Trump, as President of the United States still in office after losing the election, reached 35 million followers on Facebook and 24 million on Instagram.

The main issue at stake was whether and how influential persons and politicians should be treated on social media platforms. Political speech enjoys the highest level of protection on both sides of the Atlantic, as it is supposed to realise the main goal of freedom of expression: participation in the social discourse about public matters of the political community.[1] Restrictions on political expression should therefore be subject to the highest scrutiny. Discussion of political figures’ specific responsibility for statements that incite hatred, violence and hostility began only in the 21st century, probably because politicians could now express their opinions without the mediating function of the professional media and the moderating effect of journalists who added context and reflection to politicians’ statements. Social media gave them direct access to “the people” and provided a vehicle to spread populist views, or rather, anything.

The Board referred to these arguments on the basis of public international law, among them recently issued sources such as the Rabat Plan of Action, the UN Guiding Principles on Business and Human Rights (UNGPs), General Comment No. 34 of the Human Rights Committee (2011), and the UN Special Rapporteur’s report on freedom of opinion and expression, A/HRC/38/35 (2018).

The Rabat Plan of Action aimed at harmonising the interpretation of Article 19 (freedom of expression) and Article 20(2) (prohibition of incitement to discrimination, hostility and violence) of the ICCPR. Passed by the UN in 2013, it responded to the challenges of an interconnected global communication landscape, in which a massive volume of low-value, low-impact speech is mixed with impactful, and therefore dangerous, expressions that incite discrimination, hostility and violence. A six-part test was devised to define the threshold of harm and to separate objectionable and offensive but not punishable expressions (a.k.a. “awful but lawful”) from illegal hate speech, or hate speech that, albeit not illegal in all jurisdictions, has the potential to cause serious social harm.

The Board applied these six factors in its policy recommendation to Facebook, encouraging the company to rely on the same criteria in its content moderation decisions. The factors are: context, speaker, intent, content and form, reach or magnitude of the speech, and likelihood of harm. In the case of the Trump posts, all factors were met; “intent” was the only subjective one, but the Board found that Trump likely knew, or should have known, that his posts potentially legitimised or encouraged violence.

The Board declared that the main distinction should not be whether the speaker is a political figure, but whether the person is influential, that is, one who has a large audience. However, it also noted that political leaders might have a larger impact, as their followers may feel entitled to act violently with impunity. The Board did not examine this substantive element further; otherwise it would have been compelled to distinguish between the categories of political leaders and influential figures.

Following the thought of politicians’ distinct liability, it is also worth looking at the effects of political speech on society as a whole. Political leaders have a direct impact on policies, so anyone can assume that their speech will become action. This not only encourages their followers but also has a chilling effect on the speech and actions of those against whom the speech is directed. According to the speech act theory of J. L. Austin,[2] hate speech is not only a cause of action but can be an action in itself, one that constitutes subordination per se by damaging the dignity of the attacked social group. Speech act theory is, however, not applied in legal argumentation, and the Board’s approach was more liberal than that; yet, with the other factors listed in the Rabat Plan of Action, the same outcome was reached.

In giving recommendations on how Facebook should deal with influential users, the Board emphasised that the reason for the enhanced protection of political speech is not that politicians enjoy more rights than other persons, but that the audience has the right to access their speech as public information, because it may be relevant for members of society even if the content is objectionable. This higher level of speech protection comes with a higher level of responsibility. Both responsibility and protection are rooted in the potentially stronger impact that politicians’ expressions can have. Facebook calls this value “newsworthiness”, which the Board did not object to, but it stressed that the criteria for its application should be clear and transparent.

The inconsistency of Facebook’s practice was revealed by the Board through indirect references: “Facebook asserted that it has never applied the newsworthiness allowance to content posted by the Trump Facebook page or Instagram account”, and the company was required to clearly explain to its users the “rationale, standards and processes (…), and report on the relative error rates of determinations made”.

The Board also pointed out that Facebook’s rules are scattered across several documents: the Terms of Service, the Community Standards, the Community Standard on Account Integrity and Authentic Identity, the Facebook Newsroom, and the Facebook Help Center, and it called on Facebook to consolidate these codes so that users can better understand its policies.

The Board went on with the balancing exercise. On the one hand, users should be informed about their previous violations, strikes and penalties, and the consequences they may face for future violations. Similar to a yellow-card/red-card system, this would allow users to adapt their behaviour and reflects an attitude protective of speech. On the other hand, the Board also held – approving Facebook’s decision to remove the posts – that newsworthiness should not take priority when urgent action is needed to prevent significant harm; on the contrary, priority should be given to the quick review of highly influential users’ posts, so that they can be removed as quickly as possible.

In sum, in polite language, the Board criticised Facebook’s content policy for being scattered across many documents, for being unclear about actions and consequences, and for lacking transparency regarding decision-making processes, adding that these shortcomings “contribute to perceptions that the company may be unduly influenced by political or commercial considerations.”

The detailed transparency recommendations, including those expecting reports on restrictions – specifying their reason and manner, broken down by region and country – closely resemble the reporting obligations of the German Network Enforcement Act. However, rather than being submitted to any authority, this report is to be included in Facebook’s “transparency reporting”, a self-regulatory tool the company has applied biannually since 2013.

Interestingly, the Board did not even discuss whether the speech would have been protected under the First Amendment. This was not necessary, because private entities are not obliged to ensure First Amendment rights to other private persons (see below).

It would have been interesting to see whether the Board supports the view that the Trump posts in question were not only contrary to Facebook’s standards but also unprotected under the First Amendment, because they involved incitement to violence likely to lead to imminent lawless action (Brandenburg v. Ohio, 395 U.S. 444). According to a later decision, imminence also means that the point in time may not lie at “some indefinite future time” (Hess v. Indiana, 414 U.S. 105).[3] While most online hate speech is not concrete enough to meet this condition, the incriminated posts were removed specifically for this reason. That they were expressed by a political figure only added to the circumstances that made them reasonably perceived as threatening.

The horizontal effect of human rights law

Below, I address the public-private nature of the Board’s decision-making process. As mentioned above, several international human rights instruments were referenced, among others the International Covenant on Civil and Political Rights (ICCPR) and the UN Guiding Principles on Business and Human Rights (UNGPs). While the former, as a binding international human rights treaty whose core guarantees are widely regarded as reflecting customary international law, needs little introduction, the latter document deserves some explanation.

The United Nations issued these principles in 2011 to implement the UN’s “Protect, Respect and Remedy” Framework. Their primary goal is to extend the commitment to respect human rights to private corporations, and Facebook declared its commitment to them in March 2021. This source has gained increasing prominence in the discourse about platforms’ roles and responsibilities towards users and society, in which the pressing question is whether platforms, as private entities, can be regarded as obliged to respect human rights.

In other words, do human rights have Drittwirkung, a horizontal effect between private parties? The United States’ answer to this question is clearly negative, but the European response is more complicated. The European Convention on Human Rights expects its signatories to ensure that individuals can enjoy their human rights. If it were observed that Facebook regularly violates human rights, states would have an obligation to stop and prevent those violations through legislative means and to investigate such cases. By conspicuously committing itself to respecting human rights directly, Facebook hopes to forestall such state regulation.

Some commentators have objected to the Board’s reliance on international law, since Facebook is not a state and international law was not designed to deal directly with content moderation by a private enterprise. This is true, but if Facebook commits itself in its policy to abide by international human rights law, that commitment becomes part of its policy, which in turn makes up its contractual terms.

Facebook thereby becomes bound by these standards, even if the horizontal effect of international human rights law is disputed. Notably, this holds for contracts concluded after the public commitment; for earlier contracts, it is subject to the rules on contract amendment. Besides, if a user were to sue Facebook, the court would be obliged to apply these international legal standards, provided that its state has ratified the relevant treaties. Ultimately, if one of the parties challenged the court’s decision before an international court, that court would certainly apply these international law standards. This makes it justified and reasonable to invoke these standards and build them into the practice of Facebook’s content moderation from early on.

Law and self-regulation: what is the effect of the Board’s interference?

Questions may also arise regarding the Board’s status as a private entity that may review the terms of service of the contract between Facebook and its users, even though neither contracting party may unilaterally amend the contractual terms. Any changes to these terms can apply only after Facebook has communicated them, and only for the future. This makes it all the more reasonable that the Board’s attention turned towards suggesting policies to improve the contractual terms going forward.

This underlines the fact that the Board obviously cannot be regarded as an independent court or arbitration tribunal, even though its members are independent. The Board was created by Facebook and its resources are provided by Facebook, so it is more like an expert advisory body of the company. Its decisions are not binding legal sources, although Facebook has stipulated that it will be bound by them. This stipulation is, again, not enforceable. Still, the Board’s opinions and its framing of the problems may serve as a reference point for future studies and policies. For example, one of the most important declarations in this decision may be that “Facebook has become a virtually indispensable medium for political discourse, and especially so in election periods. It has a responsibility both to allow political expression and to avoid serious risks to other human rights.”

The independence of its members, the competence it will demonstrate, the transparency of its decisions, and the decisions themselves are all factors determining the Board’s future success, provided that success means having a long-term formative effect on the public perception of, and policy approaches to, the role of social media platforms. If Facebook keeps its promises, it can consolidate the Board’s reputation, which in turn reflects back on Facebook. This mutual cooperation may define the future of Facebook.

Judit Bayer, PhD habil., Schumann Fellow, Institute for Information, Telecommunication and Media Law, WWU Münster

[1] Barendt, Eric: Freedom of Speech. Oxford University Press, 2005, pp. 19–20; Meiklejohn, Alexander: Free Speech and Its Relation to Self-Government. The Lawbook Exchange, Clark, New Jersey, 2004, p. 88.

[2] Austin, J. L.: How to Do Things with Words. Harvard University Press, 1975; Langton, Rae: Speech Acts and Unspeakable Acts. Philosophy & Public Affairs 22 (1993), pp. 293–330.

[3] Heldt, Amélie: Trump’s very own platform? Two scenarios and their legal implications. JuWissBlog Nr. 3/2021, 11 January 2021.