The Cambridge Analytica scandal of March 2018 changed the status of Facebook forever. The revelation that a political consultancy had illicitly gained access to the data of millions of Facebook users forced the company to change its approach to privacy, including its rules and algorithms.
The scandal also started a debate about the influence of social media on democracy. This led to several measures to increase the transparency of political campaigning on Facebook and to limit foreign interference in elections.
Eighteen months on, Facebook is under fire again for refusing to police political ads more strictly, or even to ban them outright. Campaigners warn the site will have a negative effect on upcoming elections in the UK and Sri Lanka. Yet while many of the changes Facebook has made since 2016 are superficial, there’s good reason to believe the actual threat to democratic processes is limited.
Facebook’s “pivot to privacy” has included increased user control over their data and algorithmic changes that prioritise posts from users’ personal connections over news and public pages. There are also tougher rules for external developers who want to plug their applications into the platform and researchers who want to evaluate collected data.
But several investigations have shown the site still gathers extensive information about its users, which continues to leak out through third parties. There is also a lack of transparency and scrutiny over exactly what data the company holds. This makes a mockery of Mark Zuckerberg’s new motto, “the future is private”.
One of the most publicised changes was the creation of a searchable political advertisement database, recording every ad posted, who funded it and some data on who it was shown to. While this Ad Library is a laudable idea, the information provided is limited or unreliable.
Some political issue ads aren’t included and some organisations have found ways to obscure their ads’ origins. The library also doesn’t tell you what groups an ad was trying to reach or how it may have been targeted at and viewed by people in different electoral constituencies.
The company has also ramped up its use of fact-checkers, now partnering with over 50 organisations around the world. But these only review news stories posted to the site, not political ads, and the process has been criticised for not being transparent or detailed enough to really be useful.
However, when it comes to problems that lend themselves to technical solutions, Facebook has been effective. Investigations have shown how Russia used fake and automated (bot) accounts to influence American public opinion during the 2016 US election campaign. In response, Facebook developed a system that uses artificial intelligence to remove 99% of fake accounts as soon as they are created.
More generally, research has found that fears of foreign interference have been overblown. In the few proven instances of foreign interference, there isn’t sufficient evidence that they swayed voters one way or the other. It’s not even been shown that the Cambridge Analytica scandal, which involved companies in the US, Canada and the UK, made a decisive impact on the 2016 US election or the Brexit vote, despite alleged links to both.
Of course, political manipulation can still come from within a country, which is why people are calling on Facebook to ban or at least fact-check political ads, worried that they will spread false information. But voters are often already saturated with political information of varying accuracy from other sources.
In the UK, much of the media is highly partisan, especially on the matter of European integration, and many voters are already consuming “propaganda” from TV, radio, newspapers and news websites. The willingness of journalists to report politicians’ lies, even while highlighting inaccuracies, means it’s still just as easy and cheap to spread misleading or fake news through press releases, news conferences and public statements as it is through Facebook ads.
One thing Facebook does provide is a way to send out highly targeted ads that allow campaigns to say one thing to some voters and another, perhaps even the opposite, to other voters. One of the claims about Cambridge Analytica’s work was that it used psychological profiling and microtargeted ads to manipulate a relatively small number of crucial voters. (Although, again, it’s unclear that this can make a decisive difference to political campaigns.)
To combat the negative effects of microtargeting, several watchdog organisations, such as the campaign Who Targets Me, track and expose online political ads and potential manipulations. This reduces the incentive for campaigns to use such tactics, because it risks exposing parties that send different, perhaps even contradictory, messages to different groups of voters. These parties would lose authenticity, the ultimate currency in digital political communication. On top of this, Facebook is now considering limiting political microtargeting.
There’s still a debate to be had about Facebook’s responsibility to police political messages and fake news. But the actual threat of interference and manipulation in campaigns, such as the upcoming UK election, is rather limited. We should be more concerned about the continued lack of privacy and transparency from Facebook’s data-gathering activities. Without significant new regulation or a mass exodus of users, neither of which look likely any time soon, Facebook’s data-based business model means it’s unlikely to radically reform itself in these areas.