Within a week of one another came two reports on the future of journalism in an age of big tech: The Cairncross Review (on 12 Feb) and The Commons DCMS Committee Final Report into Fake News and Disinformation (14 Feb – but with its excoriating criticism of Facebook, anything but a Valentine to Mark Zuckerberg).
The Cairncross Review (pp.25-30) highlights the changing way that news is accessed: 74% of adults now consume it principally online, rather than via radio or print. Among 16-24 year olds, the figure jumps to 91%. Online platforms are the main way traffic is funnelled to traditional news websites (mainly Google and Facebook – but also Twitter and Apple News). On these content aggregators, traditional news competes with other online-only news sources: HuffPost, Independent, BuzzFeed, politico.com. On social media, news also competes with friends’ updates, advertising and other “clickbait”.
This consumption of disaggregated news via social media highlights the power of the gatekeeper, Newsfeed, in “ranking” content and thereby effectively choosing what users see and read. Cairncross (p.29) attempts to describe how Facebook’s algorithm works: it prioritises posts which the company has calculated will “spark conversations” (read: keep us hooked). This is assessed by “signals” which inform “predictions”, including the popularity of a post amongst the user’s close Facebook friends and the type of content to which a user is normally responsive. By contrast, Apple News (used by 10.9 million adults in the UK) employs “human editors” in addition to algorithms to curate content, deciding which article comes top and which of the different publishers offers the “best” story.
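The “signals feeding predictions” model that Cairncross describes can be sketched in outline as a weighted scoring function. To be clear, everything below is hypothetical illustration: the signal names, weights and data are invented for the sketch, and the real algorithm is proprietary and vastly more complex.

```python
# Purely illustrative sketch of signal-based feed ranking, loosely
# following Cairncross's description (p.29). All signal names, weights
# and data are hypothetical; the real algorithm is not public.

def predict_engagement(post, weights):
    """Combine a post's 'signals' into a single predicted-engagement score."""
    return sum(weights[name] * post.get(name, 0.0) for name in weights)

def rank_feed(posts, weights):
    """Order candidate posts by predicted engagement, highest first."""
    return sorted(posts, key=lambda p: predict_engagement(p, weights), reverse=True)

# Hypothetical signals: engagement by close friends, the user's past
# responsiveness to this type of content, predicted comment volume.
weights = {"close_friend_engagement": 0.5,
           "content_type_affinity": 0.3,
           "predicted_comments": 0.2}

posts = [
    {"id": "a", "close_friend_engagement": 0.9,
     "content_type_affinity": 0.2, "predicted_comments": 0.1},
    {"id": "b", "close_friend_engagement": 0.1,
     "content_type_affinity": 0.9, "predicted_comments": 0.9},
]

ranked = rank_feed(posts, weights)
```

Even this toy version shows the point both reports make: whoever chooses the signals and weights effectively chooses what users see.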
These explanations throw up more questions than answers. How much personal information do the companies use in putting together their feeds? One “signal” Facebook uses is whether content is likely to be clickbait/a hoax/“fake news” – but if the commercial incentive is to maximise use of Facebook, does Facebook Newsfeed filter these in or out? What about Facebook’s algorithm for adverts? On that, the DCMS Report accuses Facebook of actively blocking the work of organisations (such as Who Targets Me) seeking to improve transparency around targeted advertising (p85). Neither report mentions Twitter’s “notifications” feed but the same applies there.
Both reports recommended that tech giants should be forced to include a “quality mark” for their content as a regulatory obligation (DCMS Report; Cairncross, p.95). Cairncross sensibly rejects the proposal that online platforms should be obliged to give prominence to “high quality” news because of the difficulties around defining that standard. It does not mention the difficulties around auditing it too – but on this, the political history of regulating the mainstream press should make any policymaker highly wary indeed. Instead, Cairncross recommends a lesser obligation on the larger online platforms to “improve how their users understand the origin of an article of news and the trustworthiness of its source” (p95), to be audited by a regulator. The DCMS Report goes further and recommends regulatory obligations of transparency around how the algorithms themselves work, including the source and purpose behind Newsfeed content (pp.85-88).
Neither Report is prescriptive about how transparency is to be achieved, leaving it to the tech giants to develop their own tools. This is telling because (particularly where fake news is concerned) that practical task is by far the hardest. Can an algorithm be programmed to detect disinformation? Effective tools to date have required human input. In 2011, for example, the Media Standards Trust developed a site called “churnalism”, which allowed its users to monitor what percentage of a news story was copy-pasted from a press release (the site was later closed for lack of funds). More recently, NewsGuard developed news “nutrition labels” which explain each website’s history, ownership, financing and transparency. It has been integrated by Microsoft into its Edge browser – but NewsGuard’s use of humans, not algorithms, suggests that a scale-up of its work to cover all dodgy websites would be very costly.
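The kind of check churnalism performed – how much of a story overlaps with a press release – is one of the few parts of this that does lend itself to simple automation. A minimal sketch, assuming a crude word-trigram overlap measure (this is an illustration of the general technique, not the Media Standards Trust’s actual code):

```python
# Illustrative sketch of a churnalism-style overlap check: what share
# of a story's word trigrams also appear in a press release. Invented
# example text; not the Media Standards Trust's implementation.

def ngrams(text, n=3):
    """Set of consecutive n-word tuples in the text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def copied_share(story, press_release, n=3):
    """Fraction of the story's n-grams that also occur in the release."""
    story_grams = ngrams(story, n)
    if not story_grams:
        return 0.0
    return len(story_grams & ngrams(press_release, n)) / len(story_grams)

release = "the company announced record profits for the third quarter today"
story = "in a statement the company announced record profits for the third quarter"

share = copied_share(story, release)  # 7 of the story's 10 trigrams match: 0.7
```

Measuring verbatim reuse is easy precisely because it is mechanical; judging whether a story is *true* is the part that, as both reports implicitly concede, still needs humans.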
Either way, the two reports push in one direction: greater regulation of tech giants and an end to their open-ended discretion in selecting Newsfeed content. The much bigger subject of regulating tech giants – the trans-border politics of it, its effect on privacy and free speech, and the roles envisaged for the ICO and Ofcom by the DCMS Committee Report – will be discussed in a future post.
Zoe McCallum is a member of Matrix Chambers, practising in the field of media and information law.
This post was originally published on the Matrix Media site and is reproduced with permission and thanks.