Perhaps the most critical question for Google’s lawyers when receiving a deluge of new take-down/blocking requests will be when the data processing complained of is unlawful under the EU data protection regime and when Google has the requisite knowledge of that unlawful processing.
The processing of the data by Google search (not necessarily the original publisher) must be contrary to the data protection principles established by the Data Protection Directive (“DPD”), or more accurately the legislation that implements the DPD in the relevant member state of the European Union.
So, for example, a UK citizen would need to demonstrate a breach of the Data Protection Act 1998 by reference to the data protection principles set out in schedule 1 of the Act. Each European Member State has implemented its own data protection laws based on the DPD.
But what exactly is that threshold for the level of sensitivity of the personal data before its processing by a search engine is unlawful? Many observers, particularly those steeped in First Amendment law in the United States, would be forgiven for thinking there is barely any threshold at all. But a closer look at the CJEU’s reasoning is required.
The key phrase in the judgment is at paragraph 92, which provides the rather limited guidance (largely parroting the DPD) that the data will be unlawfully processed when it is:
- “inadequate, irrelevant or excessive in relation to the purposes of the processing”;
- “not kept up to date”; or
- “kept for longer than is necessary unless they are required to be kept for historical, statistical or scientific purposes”.
For a landmark judgment, it is hard to imagine a more opaque set of guiding principles. In the context of search engines, a whole barrage of questions emerges:
1. When is data irrelevant and by what standard is relevance judged? Is this some kind of public interest test, or can data be relevant only to a small group of people? Presumably, as per the judgment, relevance fades over time, but how long does it take for historical records to become irrelevant?
When is the data “no longer” relevant? So far we have one data point: 16 years for a repossession notice is too old. But how can we apply these vague principles to other situations?
2. What does ‘excessive’ mean? Does it mean that ten Google search results containing the same personal data should be treated differently from a single search result? If data was true at the time of publication (for example, that an individual has a serious illness), does it become ‘out of date’ when the facts change (for example, the illness is cured)? Does its position in the Google search rankings matter? It is interesting that the CJEU pointed out the impact that a search result could have on an individual’s privacy, noting that it can often give the information much more prominence than if it were merely left on the third-party website on which it is published (see below). One might infer from this that a result’s position in the search rankings would matter, but the court has not quite said as much.
3. When is data required for statistical purposes? Google does not exist for statistical purposes; it simply indexes data and statistics stored on other websites, so would this exception be relevant to Google at all? And how will such statistics be found if not through search engines?
We could go on and on.
Google’s ability to collate data is also key. The CJEU stated at para 80:
“It must be pointed out at the outset that … processing of personal data, such as that at issue in the main proceedings, carried out by the operator of a search engine is liable to affect significantly the fundamental rights to privacy and to the protection of personal data when the search by means of that engine is carried out on the basis of an individual’s name, since that processing enables any internet user to obtain through the list of results a structured overview of the information relating to that individual that can be found on the internet – information which potentially concerns a vast number of aspects of his private life and which, without the search engine, could not have been interconnected or could have been only with great difficulty – and thereby to establish a more or less detailed profile of him. Furthermore, the effect of the interference with those rights of the data subject is heightened on account of the important role played by the internet and search engines in modern society, which render the information contained in such a list of results ubiquitous.”
Whilst unhelpful to Google, this passage will at least be seized upon by other website operators and platforms (and by Google’s other services) to argue that the decision applies only to search engines, with their unique indexing capability and coverage across the web. But we will have to wait and see just how narrowly it will be interpreted, and whether websites with search functions will also be caught by these broad principles.
A more fundamental question being asked is whether it is right at all that Google and other search engines should have to act as judge and jury in answering these questions.
In this respect, it is important to consider how the threshold for ‘unlawful data processing’ sits alongside the safe-harbour ‘mere conduit’, ‘caching’ and ‘hosting’ defences provided for by Articles 12, 13, and 14 of the E-Commerce Directive.
In the English case of Metropolitan International Schools v Google ([2011] 1 WLR 1743), Google was classified as a ‘mere conduit’ in relation to its search results, even having been notified of unlawful (libellous) content being returned, and so was not liable for it. This decision was not too far behind the protection that would be afforded to Google in the US by virtue of section 230 of the Communications Decency Act 1996.
However, following Google Spain, and regardless of that decision on Google’s status as an intermediary search engine, Google can now be liable as a data controller – effectively as a primary publisher.
What we can be sure of is that individuals will be looking at their Google take-down requests based on grounds of defamation, malicious falsehood, copyright, passing off, breach of confidence, and breach of privacy to see whether they can be recast as claims for unlawful data processing. And that will be happening in 28 European jurisdictions.
Ashley Hurst is a partner at Olswang specialising in particular in media and internet-related disputes.
This is an extract from “Google Spain and the right to be forgotten: what next?”, a collection of articles about the Google Spain decision published by Olswang.
Surprisingly, perhaps, for a lawyer, Ashley Hurst does not seem to be familiar with the phrase “all the circumstances of the case”.
The supposedly difficult questions he poses are all to be approached with that well-established principle in mind.
The vast majority of cases will be easy. Private information revealing the dishonesty of an elected politician, at one extreme, or pictures of a couple “making out” in private, at the other, present no real difficulty.
The same applies to the “relevance” of information: it all depends on the circumstances and usually the answer will be self-evident.
It is only when the arguments for privacy and those for publication are evenly balanced that the exercise becomes at all problematic. The ECJ then allows an individual to approach his or her DPA for a ruling if the search engine refuses to remove the link.
No doubt Google will soon have an algorithm to deal with all but the most difficult cases. And not before time, any fair-minded person would say.
Once the search engines apply the principles laid down by the ECJ, what appears on some obscure site no longer matters. And a major site could be required to observe the ruling.
Finally, the First Amendment is not unlimited as the US Supreme Court has repeatedly made clear. And it is primarily concerned with the free expression of opinion, not protecting the endless repetition of hurtful and irrelevant information. Sooner or later, the USA may well follow the ECJ in bringing the law into the internet age.