This two-part post looks at why the Data Protection Act 1998 (the “DPA”), as derived from the Data Protection Directive 1995 (the “DP Directive”), has suddenly, more than 15 years after its enactment, become the weapon of choice for reputation managers, and in doing so has created a thorny new set of problems, particularly for internet intermediaries.

As the European Data Protection Regulation looms, my central premise is that some clarity and simplicity is desperately needed to avoid a chilling effect on freedom of expression, a stifling of innovation, and an online world where there is no quick and easy take-down solution for the individuals who genuinely need it.

A new world of reputation management

The background to this post is that the market for reputation management services has suddenly woken up to the power of the DPA. The inch-thick hard copy version sits on a shelf above my desk and still fills me with a sense of dread every time I take it down. Few clauses in it can be read without numerous cross-references to definitions and schedules, and seeing the wood for the trees is extremely difficult for the untrained eye. It is therefore little wonder that for so many years the DPA has often been ignored as an internet take-down tool in favour of more exciting causes of action such as libel and Article 8 privacy rights.

Until recently, there was also little appetite from judges to engage with claims under the DPA when there was a perfectly good claim under another cause of action.  For example, in Quinton v Peirce & Cooper ([2009] EWHC 912 (QB)), where a DPA claim was added to libel and malicious falsehood claims, Eady J said:

I must now turn to the Data Protection Act. I am by no means persuaded that it is necessary or proportionate to interpret the scope of this statute so as to afford a set of parallel remedies when damaging information has been published about someone, but which is neither defamatory nor malicious. Nothing was cited to support such a far ranging proposition, whether from debate in the legislature or from subsequent judicial dicta.[87]

But then along came Costeja v Google Inc and Google Spain ([2014] 1 QB 1022) (“Costeja”) and the scales started to fall from the eyes of claimant reputation management lawyers and others providing online reputation services in the UK and across Europe. It has suddenly dawned on them that publishing words on the internet amounts to “data processing” and that any person (including a corporate one based in California) who can exert control over that process (such as having an ability to remove the words from the website in question) could be classified as a “data controller”, provided that person falls within the wide ambit of European and domestic implementing legislation by virtue of having some kind of physical presence or equipment in the relevant jurisdiction (thereby falling within Article 4 of the DP Directive).

Provided these criteria are satisfied (not a straightforward matter for data controllers based outside Europe), data protection actually becomes relatively simple compared to Article 8 privacy and defamation laws. It does not require lengthy debate about such terms as “reasonable expectation of privacy”, “public domain”, “honest opinion”, “serious harm” and so on, particularly if the data being processed is simply inaccurate as opposed to being out of date or irrelevant. Indeed, the DPA becomes so simple that according to one legal commentator in an Independent article headed “‘Ambulance chasers’ cashing in on internet’s ‘right to be forgotten’”, there is no need to instruct lawyers at all to make take-down requests of Google.

The result is that individuals who are being defamed online now have a potential short cut.  Instead of sending a take-down request to the website operator (or a rare “section 5 notice” under the Defamation Act 2013) they can instead simply ensure that the article in question is more difficult to find by asking Google to remove links to the article whenever the individual’s name is searched.  In an age where most of us use search engines to find internet material, whether on our desktops or mobile devices, this is an effective remedy, even if the article can still be found by searching the website directly or following a hyperlink.

However, when these so-called “right to be forgotten” or “right to erasure” remedies are compared with libel and Article 8 remedies, all is not quite as simple as it may seem.  Aside from the difficulties that Google and other internet intermediaries are having in determining what is inaccurate, out of date and irrelevant, the reported English cases since Costeja, most notably Daniel Hegglin v Google Inc ([2014] EWHC 2808 (QB)) and Max Mosley v Google Inc and Google Limited ([2015] EWHC 59 (QB)), have highlighted a very significant tension between data protection law and the law of defamation and other content laws (including Article 8 privacy and copyright laws). In particular, the remedies available and the jurisdictional scope of data protection law need to be carefully examined against the so-called “safe harbour” defences available to internet intermediaries (or more accurately “information society service providers” as defined by the E-Commerce Directive 2000 (2000/31/EC) (the “E-Commerce Directive”)).

Specifically, there is considerable confusion as to when an internet intermediary may be liable in damages for the publication of unlawful content that it facilitates and what steps it can be ordered to take by way of an injunction to prevent such publication.  In this post I will consider damages claims, comparing libel and privacy laws against the newly revived data protection laws.  In part two I will look at injunctions.

Damages claims against internet intermediaries

The first hurdle for a claimant to clear in establishing liability against an internet intermediary under English law is to show that it is a “common law publisher”.  This is an archaic concept that has developed over many years of libel law and is now unnecessarily complicated.  The courts have got themselves in a real muddle trying to work out when an internet intermediary is a “mere conduit” and therefore not a common law publisher at all (see Bunt v Tilley [2006] EWHC 407 (QB); this applies to broadband providers, for example), and when it becomes more than a mere conduit on the basis that it becomes aware that it is facilitating the publication of unlawful material but refuses to prevent further publication despite being able to do so without blocking an entire website or internet connection.

Most of the cases on this point involve Google, whose liability varies depending on whether it is acting in its capacity as a host or a search engine.  Google successfully argued in Metropolitan Schools v Designtechnica and Google ([2011] 1 WLR 1743) that its services as a search engine are passive in nature and therefore within the “mere conduit” definition and outside the scope of a “common law publisher”. However, contrary to a common misconception, the Metropolitan Schools case did not decide that Google could never be liable as a search engine for defamatory material appearing in its search results. It simply found that Google was not liable for the automated generation of snippets in that particular case.

It is worth recalling exactly what Eady J said:

A search engine, however, is a different kind of Internet intermediary. It is not possible to draw a complete analogy with a website host. One cannot merely press a button to ensure that the offending words will never reappear on a Google search snippet: there is no control over the search terms typed in by future users. If the words are thrown up in response to a future search, it would by no means follow that the Third Defendant has authorised or acquiesced in that process [55].

There are some steps that the Third Defendant can take and they have been explored in evidence in the context of what has been described as its “take down” policy. There is a degree of international recognition that the operators of search engines should put in place such a system (which could obviously either be on a voluntary basis or put upon a statutory footing) to take account of legitimate complaints about legally objectionable material. It is by no means easy to arrive at an overall conclusion that is satisfactory from all points of view. In particular, the material may be objectionable under the domestic law of one jurisdiction while being regarded as legitimate in others [56].

In this case, the evidence shows that Google has taken steps to ensure that certain identified URLs are blocked, in the sense that when web-crawling takes place, the content of such URLs will not be displayed in response to Google searches carried out. This has now happened in relation to the “scam” material on many occasions. But I am told that the Third Defendant needs to have specific URLs identified and is not in a position to put in place a more effective block on the specific words complained of without, at the same time, blocking a huge amount of other material which might contain some of the individual words comprising the offending snippet [57].

It may well be that the Third Defendant’s “notice and take down” procedure has not operated as rapidly as Mr Browne and his client would wish, but it does not follow as a matter of law that between notification and “take down” the Third Defendant becomes or remains liable as a publisher of the offending material. While efforts are being made to achieve a “take down” in relation to a particular URL, it is hardly possible to fix the Third Defendant with liability on the basis of authorisation, approval or acquiescence [58].

Mr Justice Eady’s judgment left open the possibility of fixing Google with liability for damages in circumstances where it makes no effort to assist with the blocking of URLs from its search results.

Indeed, it can be argued that Google’s position as a search engine is not too dissimilar to when it is performing its role as a host, as it does for YouTube and Blogger, in relation to which the Court of Appeal decided in Tamiz v Google ([2013] EWCA Civ 68) that whilst Google was not a common law publisher at the time of publication, it could become one once on notice of the unlawful material.

These cases have left unanswered questions as to how long an internet intermediary has to respond to a take-down request and what efforts it needs to expend in relation to the blocking of a specific URL before it can be said to acquiesce in the publication and therefore be liable in damages.  However, the Northern Irish case of CG v Facebook Ireland Limited & McCloskey ([2015] NIQB 11), in which Facebook was held liable in damages for breach of privacy for failure to remove a Facebook page which contained abusive and identifying information about a convicted sex offender, may give some indication of the direction of travel.  It should be noted that a claim against Facebook under the DPA failed on the basis that on the evidence the Claimant could not establish that Facebook Ireland was established in the UK by virtue of its connection with its UK subsidiary, but this point is the subject of an appeal.

In relation to Article 8 claims, it has not yet been determined whether the “common law publisher” tag is a prerequisite for a claim for damages against an internet intermediary, as it is with libel claims, but it is very likely to be the case (see CG v Facebook).

The safe harbour provisions

Once a libel claimant has established that the internet intermediary is a common law publisher and the other main ingredients of the claim are met, consideration then turns to the defences available.

Given the virtual immunity provided to internet intermediaries by section 230 of the Communications Decency Act 1996, it is baffling to many First Amendment lawyers in the US that an internet service provider (whether a search engine, blog platform, hosting services company or social media platform) can ever be liable in damages for material posted on the internet by third parties.  However, despite the early success of Laurence Godfrey against Demon Internet (Godfrey v Demon Internet [2001] QB 201), successful claims for damages against internet intermediaries have been few and far between.  The reason for that is the robust set of defences provided by the E-Commerce Directive.

The safe harbour provisions in Articles 12, 13 and 14 of the E-Commerce Directive 2000 provide protection against damages claims (not injunctions) for:

(1) “Mere conduit” – where services provided by the internet intermediary consist of the transmission in a communication network of information provided by a recipient of the service, or the provision of access to a communication network (Article 12);

(2) “Caching” – where the internet intermediary’s service consists of the automatic, intermediate and temporary storage of that information, performed for the sole purpose of making more efficient the information’s onward transmission to other recipients of the service upon their request (Article 13); and

(3) “Hosting” – where an information society service is provided that consists of the storage of information provided by a recipient of the service (Article 14).

These defences were designed to protect internet intermediaries who perform technical, automatic and passive roles. This is made clear in Recital 42 which states that the E-Commerce Directive offers protection where:

“…the activity of the information society service provider is limited to the technical process of operating and giving access to a communication network … for the sole purpose of making the transmission more efficient; this activity is of a mere technical, automatic and passive nature, which implies that the information society service provider has neither knowledge of nor control over the information which is transmitted or stored.”

Article 12 provides “mere conduits” with virtual immunity from damages. In practice, it is only really relevant to intellectual property claims, as “mere conduits” are not “common law publishers” in libel and Article 8 privacy claims (see British Telecom plc and The Culture Secretary [2011] 3 CMLR 5 at [102] and the Court of Appeal judgment at [2012] EWCA Civ 232, at [53]).

Article 13, which applies to search engines such as Google, provides a wider defence to damages claims.  Assuming various other criteria are satisfied (for example, that the cache provider does not modify the information), the defence is not lost when the search engine receives notice of unlawful material to which it is linking, but only when it receives notice that the material to which it is linking has been removed from the internet (Article 13(1)(e)).

Article 14 provides a defence to the intermediary hosting provider against a claim for damages in circumstances where “(a) the provider does not have actual knowledge of illegal activity or information and, as regards claims for damages, is not aware of facts or circumstances from which the illegal activity or information is apparent; or (b) the provider, upon obtaining such knowledge or awareness, acts expeditiously to remove or to disable access to the information.” Google’s blogging service would benefit from this defence in circumstances where it acted quickly to remove unlawful material upon notice of it.

Each of these safe harbour provisions has made its way into the national law of 28 member states in slightly different ways. The English equivalents are Regulations 17, 18, and 19 of the E-Commerce (EC Directive) Regulations 2002 (the “E-Commerce Regulations”).

Notwithstanding certain regional variations across Europe, the key to the applicability of the safe harbour defences in all European jurisdictions is establishing the neutrality of the service provider. Thus, in Google France, Google Inc v Louis Vuitton Malletier (Joined Cases C-236/08 to C-238/08) the process of returning sponsored links in exchange for purchased AdWords was still a technical and automatic process, and in L’Oreal v eBay (C‑324/09) the Court held that eBay could certainly be an information society service, but only where it confines itself to providing that service neutrally by a merely technical and automatic processing of the data provided by its customers.  The Court acknowledged that a provider will be denied entitlement to the hosting defence if a “diligent economic operator” should have identified the illegality in question and failed to act expeditiously to remove or disable access to it.

It is important to stress that here the courts were concerned with damages.  The question of injunctions (as further explored below) was expressly considered in Recital (45) of the E-Commerce Directive which states:

“(45) The limitations of the liability of intermediary service providers established in this Directive do not affect the possibility of injunctions of different kinds; such injunctions can in particular consist of orders by courts or administrative authorities requiring the termination or prevention of any infringement, including the removal of illegal information or the disabling of access to it.”

The question of whether an internet intermediary is a “common law publisher” is not relevant to data protection claims, which are governed exclusively by the DP Directive and in the UK by the DPA.  Instead, the key question is whether the internet intermediary can be classified as a “data controller”, as Google Inc. was found to be in Costeja in relation to its role as a search engine.

But what about the safe harbour defences? Well, it might come as a surprise to some that whilst these defences apply to a wide variety of claims for defamation, breach of privacy, copyright and trade mark infringement, they do not on a strict reading of the E-Commerce Directive appear to apply to data protection claims. This was very specifically considered in Recital (14) to the E-Commerce Directive:

“(14) The protection of individuals with regard to the processing of personal data is solely governed by Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data and Directive 97/66/EC of the European Parliament and of the Council of 15 December 1997 concerning the processing of personal data and the protection of privacy in the telecommunications sector which are fully applicable to information society services; these Directives already establish a Community legal framework in the field of personal data and therefore it is not necessary to cover this issue in this Directive in order to ensure the smooth functioning of the internal market, in particular the free movement of personal data between Member States; the implementation and application of this Directive should be made in full compliance with the principles relating to the protection of personal data, in particular as regards unsolicited commercial communication and the liability of intermediaries; this Directive cannot prevent the anonymous use of open networks such as the Internet.”

Indeed, in the UK implementing legislation, matters covered by the DP Directive were specifically excluded by Regulation 3(1)(b) of the E-Commerce Regulations.

This leads to the surprising result that, regardless of notice to Google, if the unlawful data processing by Google is sufficient to cause the data subject damage under section 13(1) of the DPA (which, in Vidal-Hall v Google Inc ([2015] EWCA Civ 311), has now been held by the Court of Appeal to include non-financial loss), the data subject can recover damages from Google unless Google can show (under section 13(3)) that it took “such care as in all the circumstances was reasonably required” to comply with the DPA.

There is no express requirement in section 13 that the data controller has actual knowledge of the unlawful nature of the data processing. For example, if the data controller failed to adopt adequate security measures to prevent a hacker publishing unlawful material, it might be liable regardless of whether it had knowledge of the inaccuracy of the data.

And so Google’s new online “Costeja” tool for making requests for personal data to be blocked from its search results may help it to comply with notices to prevent processing made under section 10 of the DPA, but it does not necessarily help Google to avoid claims for damages for having processed the data unlawfully in the first place.

In cases where damages are claimed in addition to a take-down remedy, it may become important to consider what is “reasonably required” of Google for the purposes of the defence under section 13(3) and the extent to which knowledge is relevant to that question.

Ashley Hurst is a partner at Olswang specialising in particular in media and internet-related disputes.