In a recent decision of the English High Court in Metropolitan International Schools Ltd v DesignTechnica Corporation [“Metropolitan International”] (see: case), Justice Eady held that Google, the popular search engine, was not liable for defamatory material that appeared in its search results. More generally, Metropolitan International supports a common law principle that a person cannot be liable for conveying defamatory material to users if the conveyance lacks human intervention.
The claim against Google was brought by Metropolitan International Schools [“Metropolitan”], a company that offers distance learning courses. The litigation arose after an anonymous person posted a message on a website suggesting that Metropolitan’s distance learning courses were a “scam”. To make matters worse, the message began ranking highly in Google search results for searches conducted for Metropolitan. Metropolitan reacted by suing the website operator and Google for publishing the defamatory comments in its search results.
The issue of interest in this case was whether Google was a “publisher” of the defamatory material and therefore liable to Metropolitan. The court began by drawing an interesting distinction between a human compiler of a library catalogue and the automated indexing process employed by Google. The latter could not be a “publisher” – Justice Eady concluded – because the search results were generated automatically and without human involvement. Accordingly, Google lacked the requisite knowledge and involvement sufficient to constitute publication [at paras. 50 and 53]:
When a search is carried out by a web user via the Google search engine it is clear, from what I have said already about its function, that there is no human input from [Google]. None of its officers or employees takes any part in the search. It is performed automatically in accordance with computer programmes.
[W]hereas a compiler of a conventional library catalogue will consciously at some point have chosen the wording of any “snippet” or summary included, that is not so in the case of a search engine. There will have been no intervention on the part of any human agent. It has all been done by the web-crawling “robots”.
The court was careful to note that Google, although not liable as a publisher, might be liable on the basis of acquiescence if it permitted the publication to remain in its search results despite having the power to remove it. On a survey of the facts, the court concluded that Google had acted to restrict access to the message and was therefore not liable on the basis of acquiescence.
This judgment has interesting implications for website operators that rely on human intervention in the process of conveying information to users. The court’s distinction between processes that involve human intervention and those that are fully automated suggests that online entities might decrease their legal exposure by automating the conveyance of information to users. One may question whether this “incentive to automate” might result in an increase in the incidence of internet defamation, accompanied by a decrease in the precision and relevance of information conveyed to users.