The International Forum for Responsible Media Blog

DCMS Report: Time to regulate social media? – Zoe McCallum

Last month the Commons DCMS Committee’s report on Disinformation and ‘Fake News’ [pdf] recommended a new regulator for tech companies and an enhanced role for the ICO, paid for by a levy on tech giants.

These recommendations were highly political, made against a background of declining public trust in big tech in the wake of Cambridge Analytica and other data breach, disinformation and election interference scandals. So why the push for regulation – and are the recommendations sensible?

The push towards tougher regulation

The Report quotes Margot James MP (Digital Minister for DCMS, closely involved in the White Paper soon to be announced) lambasting tech companies for their failure to act swiftly to remove harmful online content:

There have been no fewer than fifteen voluntary codes of practice agreed with platforms since 2008. Where we are now is an absolute indictment of a system that has relied far too little on the rule of law.

The criticisms of social media companies driving the regulatory push are not limited to their role in the proliferation of fake news and in election scandals. They include:

  • That social media is associated with loss of attention, mental health issues and “confusions over personal relationships” (whatever that means), which especially affect children and vulnerable adults: [15];
  • That these problems are exacerbated by microtargeted advertising and messaging, which “play[s] on and distort[s] people’s negative views of themselves and others”: [16];
  • That algorithms (e.g. the Facebook newsfeed) prioritise negative stories because they are driven by advertising profits and bad news is “shared” more widely than good: [19];
  • That algorithms also contain inherent gender, racial and other biases: [20] (see also the reference in the interim report to Facebook’s failure to control vitriol inciting hatred against Rohingya Muslims in Burma: [77]-[83]);
  • That, more widely, the companies’ focus on the bottom line leads to prioritising advertising over the protection of human rights, including privacy: [26];
  • That user profiles created by social media companies are sold for use in targeted political campaigning. Voters do not want “the same model that sells us holidays and shoes and cars” to spread into the political arena: [45].

Dragging feet / hiding under the table

Whilst tech companies can (and do) work to tackle these issues, the Report criticises them for not doing so fast enough – and for disclaiming liability in respect of some “online harms”. Specifically, it claims:

  • That social media companies “hide behind a claim of being merely a ‘platform’ and maintain that they have no responsibility themselves in regulating the content of their sites”: [14]; see also the interim report, [51]-[60];
  • That Facebook is not working “actively or urgently” enough to tackle inherent biases in its algorithms, has an opaque management structure and “seems willing neither to be regulated nor scrutinized”: [20], [29]-[30].

The Report’s recommendations

To tackle these “online harms”, the Committee recommends:

  • A compulsory code of ethics, developed by technical experts, setting out what constitutes harmful content: [37]-[38];
  • An independent regulator with statutory powers to launch legal proceedings against companies which breach the code – with “large fines” for non-compliance: [39]. The Report does not go so far as to suggest who the regulator should be, though it references Ofcom’s acceptance of “synergies” between Ofcom’s work and the areas identified for online regulation: [48];
  • Statutory powers for the regulator to obtain from the tech companies information on what data they hold on an individual user, and access to the tech companies’ security mechanisms and algorithms: [40];
  • An enhanced role for the ICO, so that its powers extend to the profiles created by social media companies about their users (described as “inferred data”), paid for by a levy on those companies: [41]-[48].

Comment: Are the recommendations sensible?

The last recommendation concerning the ICO is straightforwardly based on a misconception (see [42]). As ICO Guidance makes clear, the definition of “personal data” in the GDPR/DPA is already wide enough to cover profiling by social media companies. The Committee was right, however, to describe the ICO as underfunded relative to the task in hand.

Contrary to the Report, “technical experts” are no more qualified to develop a code of practice for social media companies than TV camera operators are to develop the Broadcasters’ Code. Nonetheless, there is (relatively speaking) an emerging consensus around forms of online harm. Social media companies have admitted on occasion that the scale of some tasks was too difficult to tackle, e.g. in relation to incitement to hatred in Burma.

As to whether self-regulation is preferable to independent regulation, comparison to IPSO vs the Royal Charter system is tempting but unhelpful. That is because social media companies “select” rather than “generate” content. Via algorithms they wield enormous power to decide what, of the endless internet, we consume via our newsfeeds. But it is the power of a gatekeeper, not the writer – and an interference with the power to curate a newsfeed is not in the same league as an interference with journalistic freedom.

There are analogies to broadcast regulation. Historically, one purported justification for censoring broadcasters more heavily than print media (e.g. the watershed) was the sheer power of moving images to provoke a reaction. Social media newsfeeds are full of moving images and are arguably more compelling than TV, because they involve interactivity. If that rationale still held for regulating broadcasters, it would carry over to the internet. But the Broadcasters’ Code is partly the product of a historical hangover: when only analogue transmission was possible, spectrum scarcity was the rationale for imposing onerous requirements relating to impartiality in TV news and election coverage. Now there is a proliferation of channels, the internet competes with TV, and there are multiple technological means of controlling access to both by vulnerable users. Heavy-handed regulation is no longer justified – even though it persists.

It would be sensible (and cost-effective) to adopt a unified approach to regulating broadcast and social media – and to take the opportunity to overhaul the broadcast system. That solution is unlikely to be palatable to tech companies, but it is preferable to a German-style NetzDG law. I doubt that the government or Ofcom has the appetite – but via the White Paper, we’ll find out soon.

Zoe McCallum is a barrister at Matrix Chambers, practising in the field of media and information law.

This post originally appeared on the Matrix Media and Information law website and is reproduced with permission and thanks

1 Comment

  1. Winston Smith

    You must absolutely prevent the spreading of fake news and thought crime on Social Media. A dedicated regulator is necessary for this extremely important task, or else populist and anti-government views will spread among the gullible population that does not know what is good for itself. You should call the new body the “Ministry of Truth”, that sounds very trustworthy and fact-oriented.
