Earlier this summer, some of the world’s biggest tech, product and branding companies (including Google, Facebook, Unilever and Procter & Gamble) launched The Global Alliance for Responsible Media. This self-described ‘unprecedented’ alliance aims to tackle ‘dangerous, hateful, disruptive and fake content online’ which it says, if left unchecked, ‘risks threatening our global community’.

The Alliance appears to be driven by a mix of corporate social responsibility and self-preservation, as it explicitly aims to identify a ‘concrete set of actions, processes and protocols for protecting brands’. This is entirely understandable given the negative publicity received by companies that are perceived to enable online harms to proliferate. The Home Affairs Select Committee’s investigation into abuse, hate and extremism online suggested YouTube, Twitter and Facebook had ‘consciously’ failed to adequately address online harm, and the Cambridge Analytica scandal appears to have lowered public trust in all social media platforms. As Mark Zuckerberg has acknowledged, these companies are keen to take steps to strengthen their relationship with the public.

There is much to be celebrated about this Alliance – apart from anything else, it reflects a huge shift in attitudes towards hate speech, online harm and regulation of the Internet. It is also well-timed, coming when many governments are trying to improve regulation of the Internet. In the UK, the Department for Digital, Culture, Media and Sport is reviewing responses to the consultation on its ‘online harms’ White Paper, which covers a myriad of behaviours, including revenge porn, hate speech, misinformation, terrorism and the sale of illegal goods. The Alliance could devote some serious resources to addressing online harm and, more importantly, enable more collaboration between companies. At present, efforts to pool technical resources and tools have been very limited, confined to a few initiatives such as the Global Internet Forum to Counter Terrorism. The Alliance could address this issue. Going further, it could help develop consistent cross-industry policy responses. This would allow us not only to tackle harmful content when it appears on one platform but also to address how it, and its purveyors, move across the Internet.

So far, so good. Indeed, on paper this Alliance represents what many activists working to counter online harms have long wanted: a broad coalition that works together and spends serious money to protect individuals. What this vision means in practice (and whether it will be realised) is, at this point, up for debate, as little detail has been provided. It’s unclear exactly what harms the Alliance is going to tackle, how it will do so, who will be responsible for implementation, how it will evaluate performance and who will impose sanctions (if any) for lack of action. The Alliance also faces some genuinely difficult social and ethical challenges, which urgently need to be reflected on and addressed. No amount of money or technical sophistication alone will overcome these:

  1. Online harms are contextual. What is harmful in one place might not be considered harmful in another. All of the partner companies operate across multiple countries and will struggle to agree on a unified global response – just think about the hugely divergent responses to the Mohammed cartoon scandal in 2014. Most plausibly, the companies will only be able to agree a ‘minimum’ set of standards and values, such as committing to enforce the law in whichever jurisdictions they operate. Ultimately, the breadth of the partnership may prove its undoing, as the Alliance struggles to agree on a position. This links to the second issue.
  2. Identifying, categorising and detecting harmful content involves making lots of decisions, many of which can be contentious. For instance, at the extremes, it is easy to distinguish between terrorist and non-terrorist content. Where it becomes difficult is in the middle grey area and, sadly, this is where most online content lies. The Alliance will need to make some difficult decisions, which may be challenged by groups in society (most likely, free speech advocates) and may have political ramifications. It is unclear whether all of the partners will be willing to make these decisions, especially given that one of their primary goals is to protect their brands.
  3. Currently, the Alliance claims it wants to ‘improve the safety of online environments’. There is nothing wrong with framing harm in terms of ‘safety’, and such language can make difficult decisions more palatable to otherwise reluctant non-expert decision makers. Nonetheless, it shifts the focus towards individual actions rather than the structures which allow, enable and even encourage those actions to take place. This is a crucial issue given recent concerns that some platforms thrive on highly polarising, abusive and vitriolic content because it drives higher user engagement. The Alliance will need to decide whether it wants to address these structural issues or just focus on moderating specific bits of content.

The most encouraging part of the new Alliance is the nascent recognition that harmful online content poses a societal problem. It should not be dealt with by each platform independently – because its harmful effects are borne by all of us, we need a joined-up, integrated approach. And, notwithstanding the concerns raised here, any steps to challenge, constrain and remove harmful online content should be welcomed. But this Alliance will only effect real change if it tackles these difficult social issues head-on. We’ll have to see whether it does.

Bertie Vidgen is a Research Associate at the Alan Turing Institute whose research focuses on detecting, analysing and countering online hate speech in both news and social media.

This post was originally published on the LSE Media Policy Project Blog and is reproduced with permission and thanks.