Almost a year ago Dipayan Ghosh of the Harvard Kennedy School of Government and I argued that the business models of Facebook and other Big Tech corporations unwittingly aligned them with the goals of bad social actors, and needed an urgent reset. Why? Because they prioritize platform traffic and ad revenue over and above any social costs.
Yet, in spite of a damning report by the US House of Representatives Judiciary Committee last October and multiple lawsuits and regulatory challenges in the US and Europe, the world is no nearer a solution. In the case of Facebook, whistleblower Frances Haugen’s shocking Senate testimony last week confirmed exactly what we argued: that this large US-based corporation is “buying its profits with our safety”, because it consistently prioritizes its business model over correcting the significant social harms it knows it causes.
As Robert Reich notes, it would be naïve to believe that accountability will follow the public outcry. That is not how the US works anymore, nor, indeed, how many other democracies work. Meanwhile Mark Zuckerberg’s response to the new revelations rang hollow. Of course, he is right that levels and forms of political polarization vary across the countries where Facebook is used. But no one ever claimed that Facebook caused the forces of political polarization, which inevitably are variable, only that for its own benefit it recklessly amplified them.
Nor, as Zuckerberg rightly protests, does Facebook “set out to build products that make people angry or depressed”: why would they? But the charge is more specific: that Facebook configured its products to maximize the measurable “engagement” that drives its advertising profits. Facebook’s 2018 newsfeed algorithm adjustment, cited by Haugen, was a key example. Yet we know from independent research that falsehoods travel faster, more deeply and more widely than truths. In other words, falsehoods generate more “engagement”. So, optimizing for “engagement” automatically optimizes for falsehoods too.
It is not good enough for Facebook now, under huge pressure, to claim credit for the “reforms” and “research” it conducted in earlier attempts to mollify an increasingly hostile public. Facebook can say, as Mark Zuckerberg just did, that “when it comes to young people’s health or well-being, every negative experience matters”, but its business model says otherwise, and on a planetary scale. It is time for that business model to be examined in the harsh light of day.
The problem with the underlying business model
In a report published a year ago, Dipayan Ghosh and I called this model the “business internet”. Its core dynamics are by no means unique to Facebook, but let’s concentrate there. The business internet is what results when the vast space of online interaction becomes managed principally for profit. It has three sides: data collection on the user to generate behavioral profiles; sophisticated algorithms that curate the content targeted at each user; and the encouragement of engaging – even addictive – content on platforms that holds the user’s attention to the exclusion of rivals. A business model such as Facebook’s is designed to maximize the profitable flow of content across its platforms.
If this sounds fine on the face of it, remember that the model treats all content producers and content the same, regardless of their moral worth. So, as Facebook’s engineers focus on maximizing content traffic by whatever means, disinformation operators – wherever they are, provided they want to maximize their traffic – find their goals magically aligned with those of Facebook. All they have to do is circulate more falsehoods.
Facebook will no doubt say it is doing what it can to fix those falsehoods: many platforms have tried the same, even at the cost of damping down the traffic that is their lifeblood. But the problem is the underlying business model, not the remedial measures, even if (which many doubt) they are well-intentioned. It is the business model that determines it will never be in Facebook’s interests to adequately control the toxic social and political content that flows across its platforms.
The scale of the problem is staggering. As recent Wall Street Journal articles detail, Facebook’s business model (and obsession with controlling short-term PR costs) push it to look the other way when celebrities post content that even Facebook’s rules normally ban, to discount the impacts on teen girls’ self-esteem from Instagram’s image culture, to misunderstand the consequences for political information when it tweaks its newsfeed algorithm, and to fail in its own drive to encourage Covid vaccine take-up.
Some Facebook staff seem to believe that the Facebook information machine has become too large to control.
Yet even so, we can easily underestimate the scale of the problem. We may dub Instagram the ‘online equivalent of the high-school cafeteria’, as the Wall Street Journal does, but what school cafeteria ever came with a continuously updated and universally accessible archive of everything anyone said there? The problem is that societies have delegated to Facebook and other Big Tech companies the right to re-engineer how social interaction operates – in accordance with their own economic interests and without restrictions on scale or depth. And now we are counting the cost.
A turning point?
But thanks to Frances Haugen, through her Senate testimony and her role in the Wall Street Journal revelations, society’s decision-point has become startlingly clear. Regulators and governments, civil society and individual citizens could consign the problem to the too-hard-to-solve pile, accept Facebook will never fully fix it, and allow the residual toxic waste (inevitable by-product of Facebook’s production process) to do whatever harm it can to society’s and democracy’s fabric. Or key actors in various nations could decide that the time for coordinated action has come.
Assuming things proceed down the latter, less passive path, three things require urgent action.
- Facebook should be compelled by regulators and governments to reveal the full workings of its business model, and everything it knows about its consequences for social and political life. Faced with clear evidence of major social pollution, the public cannot be expected to rely on the self-motivated revelations of Facebook’s management and their engineers working under the hood.
- Based on the results of that fuller information, regulators should consider the means they have to require fundamental change in that business model, on the basis that its toxicity is endemic and not merely accidental. If they currently lack adequate means to intervene, regulators should demand extended powers.
- Equally urgent action is needed to reduce the scale on which Facebook is able to engineer social life, and so wreak havoc according to its whim. At the very least, the demerger of WhatsApp and Instagram must be put on the table by the US FTC. But a wider debate is also needed about whether societies really need platforms on the scale of Facebook to provide the connections on which social life undoubtedly depends. The time has passed when citizens should accept being lectured by Mark Zuckerberg on why they need Facebook to “stay in touch”. More comprehensive breakup proposals may follow from that debate. Meanwhile, analogous versions of the “business internet”, in Google and elsewhere, also need to be examined closely for their social externalities.
Some fear that the medicine of regulatory reform will be worse than the disease. As if the poisoning of democratic debate, the corrupting of public health knowledge in a global pandemic, and the corrosion of young people’s self-esteem, to name just some of the harms, were minor issues whose risks could simply be hedged.
Something like these risks was noted at the beginnings of the computer age, when in 1948 one of its founders, Norbert Wiener, argued that with “the modern ultra-rapid computing machine . . . we were in the presence of [a] social potentiality of unheard-of importance for good and for evil”.
Nearly 75 years later, Wiener’s predictions are starting to be realized in plain sight. Are we really prepared to go on turning a blind eye?
Nick Couldry is Professor of Media, Communications and Social Theory in the Department of Media and Communications at LSE.
This post originally appeared on the LSE Media Policy Project blog and is reproduced with permission and thanks.