On 7 January 2025 Meta made sweeping changes to its Community Standards policy on Hateful Conduct (the “Standards”). This article examines how these changes put marginalised groups at serious risk and how, in the context of the Online Safety Act 2023 (“the Act”), they place Meta in breach of its duties to prevent harm to, and protect, these users.

In particular, these changes allow LGBTQ+ persons to be called mentally ill, transgender people to be called “it” and women to be referred to as property in user-to-user communications on Meta’s platforms such as Facebook and Instagram.

The changes themselves

The changes can be seen by looking at the 7 January 2025 change log on Meta’s Hateful Conduct policy page. They include:

  • Deletion, under the heading of “dehumanizing speech”, of references to women as “household objects or property or objects in general”, to “Black people as farm equipment” and to “transgender or non-binary people as ‘it’”. Such content is no longer classified under Tier 1, “Do not Post”.
  • Express provision in these terms: “We do allow content arguing for gender-based limitations of military, law enforcement, and teaching jobs. We also allow the same content based on sexual orientation, when the content is based on religious beliefs”.
  • Express provision in these terms: “We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like ‘weird’.”

The relevant provisions of the Online Safety Act 2023

So how do these changes sit within the framework of the Act?

The Act came into force on 26 October 2023, and many of its provisions are still in phased implementation. As user-to-user services, both Facebook and Instagram come under the purview of the Act.

Section 7 of the Act places a duty of care on user-to-user service providers such as Meta. More particularly, s.7(2) provides that Meta must comply with the duties regarding illegal content set out in s.10(2) to (8) of the Act, and with the duties about complaints procedures set out in s.21.

It is worth digging into the provisions of section 10(2), which imposes the following duty on all user-to-user services:

(2) A duty, in relation to a service, to take or use proportionate measures relating to the design or operation of the service to—

(a)   prevent individuals from encountering priority illegal content by means of the service,

(b)   effectively mitigate and manage the risk of the service being used for the commission or facilitation of a priority offence, as identified in the most recent illegal content risk assessment of the service, and

(c)   effectively mitigate and manage the risks of harm to individuals, as identified in the most recent illegal content risk assessment of the service (see section 9(5)(g)) (emphasis added)

Furthermore, section 10(3) states:

(3)  A duty to operate a service using proportionate systems and processes designed to—

(a)  minimise the length of time for which any priority illegal content is present;

(b)  where the provider is alerted by a person to the presence of any illegal content, or becomes aware of it in any other way, swiftly take down such content.

From these provisions two questions arise: what is “priority illegal content”, and what is a “priority offence”?

Priority illegal content is defined at section 59 of the Act:

(10) “Priority illegal content” means—

(a) terrorism content,

(b)   [Child sexual exploitation and abuse] content, and

(c) content that amounts to an offence specified in Schedule 7.

Schedule 7 lists various other priority offences, including harassment and stalking offences.

Section 59(7) defines a “priority offence” as—

(a) an offence specified in Schedule 5 (terrorism offences),

(b) an offence specified in Schedule 6 (offences related to child sexual exploitation and abuse), or

(c) an offence specified in Schedule 7 (other priority offences).
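Purely by way of illustration (the Act of course prescribes no implementation, and the names below are my own), this statutory taxonomy can be sketched in a few lines of Python:

    from enum import Enum
    from typing import Optional

    class PrioritySchedule(Enum):
        """The three schedules of priority offences under s.59(7)."""
        TERRORISM = "Schedule 5"       # terrorism offences
        CSEA = "Schedule 6"            # child sexual exploitation and abuse offences
        OTHER_PRIORITY = "Schedule 7"  # other priority offences, including harassment

    def is_priority_illegal_content(offence_schedule: Optional[PrioritySchedule]) -> bool:
        # s.59(10): terrorism content, CSEA content and content amounting to a
        # Schedule 7 offence are all "priority illegal content".
        return offence_schedule is not None

The point of the sketch is that the three limbs are disjunctive: harassment content falling within Schedule 7 attracts the same s.10(2) duties as terrorism or CSEA content.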

The application of the provisions of the Online Safety Act 2023

In other words, where harassment which meets a criminal threshold occurs, such as a course of conduct involving calling someone the hateful things the Standards now allow, Meta, as the owner of Facebook and Instagram, has a duty to prevent individuals from encountering such content and to mitigate and manage the risk of those platforms being used for the commission of such priority offences.

Indeed, the Sentencing Guidelines for such offences note that where they are committed by demonstrating hostility based on presumed characteristics of the victim, including sex, sexual orientation or transgender identity, this is a factor indicating high culpability, with a corresponding impact on sentence.

However, under its revised hateful conduct standards, Meta makes explicit provision that these statements are allowed on its platforms. It attempts to justify this “given political and religious discourse” in an LGBTQ+ context.

Being homosexual was declassified as a mental disorder by the World Health Organisation (“WHO”) in 1990. In 2019 the WHO reclassified transgender people’s gender identity as “gender incongruence”, moving it from the chapter on mental and behavioural disorders to the chapter on conditions related to sexual health.

Nevertheless, Meta still thinks it acceptable to equate being LGBTQ+ with mental illness.

Section 10(2) is notably limited to a duty to take or use “proportionate measures”. Instagram and Facebook, however, are clearly among the most sophisticated and wide-ranging user-to-user services in existence. It is therefore easily arguable that proportionate measures for Meta must include policies which entrench the protection of users at the outset, prevent such content from appearing on its platforms, and allow complaints from users subjected to such comments to be upheld rather than dismissed; otherwise the service provider must face the consequences of breaching the Act.

My hope is that, as the policies apply worldwide, online safety laws will intervene against such pernicious changes, which further marginalise those at risk and expose them to abuse at the whim of political pandering.

Non-compliance with regulatory action from Ofcom could, rightly, have serious implications for companies such as Meta: under the Act, companies can be fined up to £18 million or 10 per cent of their qualifying worldwide revenue, whichever is greater.
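For a sense of scale, a minimal sketch of that penalty cap (the revenue figure used below is hypothetical; the statutory maximum is simply the greater of the two amounts):

    def max_osa_fine(qualifying_worldwide_revenue_gbp: float) -> float:
        """Maximum penalty under the Act: the greater of GBP 18 million
        and 10% of qualifying worldwide revenue."""
        return max(18_000_000, 0.10 * qualifying_worldwide_revenue_gbp)

    # Hypothetical illustration: at GBP 100bn of qualifying worldwide revenue,
    # the 10% limb governs and the maximum fine is GBP 10bn.
    print(f"£{max_osa_fine(100_000_000_000):,.0f}")  # £10,000,000,000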

In the UK, Ofcom, which regulates this space, has said: “from 17 March 2025, providers will need to take the safety measures set out in the Codes of Practice or use other effective measures to protect users from illegal content and activity.”

Even though Meta is not based in the UK, the Government’s Online Safety Act explainer, consistent with the provisions of the Act, makes the position clear:

“The Act gives Ofcom the powers they need to take appropriate action against all companies in scope, no matter where they are based, where services have relevant links with the UK. This means services with a significant number of UK users or where UK users are a target market, as well as other services which have in-scope content that presents a risk of significant harm to people in the UK.”
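That scope test is, in effect, a simple disjunction; a hedged sketch of it (my own paraphrase of the “links with the UK” test, not anything prescribed by the Act):

    def has_uk_links(significant_uk_user_base: bool,
                     uk_is_target_market: bool,
                     risk_of_significant_harm_to_uk_users: bool) -> bool:
        # A service is in scope where any one limb is satisfied; where the
        # provider is based, as the explainer notes, is irrelevant to the test.
        return (significant_uk_user_base
                or uk_is_target_market
                or risk_of_significant_harm_to_uk_users)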

The Draft Codes of Practice

Also of relevance here are the illegal content Codes of Practice for user-to-user services, which set out the recommended measures to be adopted by service providers.

In particular, for large or multi-risk services, they set out the following recommendation:

The provider should have a code of conduct that sets standards and expectations for individuals working for the provider around protecting United Kingdom users from risks of illegal harm.

In changing its Standards in this way, Meta has also rendered Instagram and Facebook in breach of the Codes of Practice issued by Ofcom pursuant to the Act. It should be noted that, whilst platforms are recommended to follow the Codes, they may deviate from them, but must justify where they do so.

Suneet Sharma was previously a junior commercial lawyer, has written for INFORRM for a number of years, and is currently retraining as a charitable governance professional. He continues to write, particularly on topics of media law, and runs the Privacy Perspective blog.