As I observed in Part I of this article, no UK court has yet given judgment in a libel or defamation claim concerning AI-generated content, but several legal actions are emerging and the issue is widely expected to reach the courts soon. Proceedings have already been brought in other jurisdictions, including the US (see Part I) and Australia.
Belfast-based libel lawyer Paul Tweed is reportedly preparing a group action in the UK against technology providers (including OpenAI, Meta, Google, and Amazon), alleging that their AI chatbots and other AI-generated content breach defamation and privacy laws. The Defamation Act 2013 provides certain protections for internet intermediaries, most notably the statutory defence in section 5. Under that section, operators of websites hosting user-generated content may enjoy immunity from suit if they comply with the accompanying regulations after being notified of defamatory material. Social media platforms and hosts are generally not liable under UK law unless they have knowledge of or control over the content, or refuse to act upon notice of defamatory material. Claims must typically be directed at the original author, and intermediary platform liability arises mainly where the author is unidentifiable or unreachable.
This proposed group action will argue that generative AI material produced by the likes of ChatGPT is new material falling outside this immunity. Tweed is considering three grounds on which to bring an action: defamation by AI chatbots; unauthorised use of works for training AI models; and the creation by AI of fake biographies that he says are being sold by the likes of Amazon. In his letter to the Northern Ireland Affairs Committee (February 2025), Mr Tweed asserted that there have been several serious examples of false allegations and misinformation appearing on a number of generative AI platforms and chatbots, including “particularly troubling instances” where leading figures from academia and the law have been wrongly accused of serious misconduct.
The industry refers to these incorrect and misleading responses as “hallucinations”. When challenged on the inaccuracies, the AI platforms often maintain their position and attempt to justify the allegations by citing newspaper articles that simply do not exist.
Mr Tweed refers to the case of US Professor Jonathan Turley, who ChatGPT claimed had been accused of sexual harassment after inappropriate conduct with students on a trip to Alaska. ChatGPT cited a 2018 Washington Post article which never existed.
Another case concerned a German journalist, Martin Bernklau, who was described by Microsoft’s AI tool, Copilot, as being involved in several very serious forms of criminality. In fact, the alleged crimes related to court cases on which Mr Bernklau had reported in the course of his work as a journalist: Copilot had conflated his journalistic reporting with the facts of the cases and described him as the perpetrator of the crimes he had covered.
Current UK law does not provide AI-specific rights or remedies for libel, but claimants may rely on the same legal framework that applies to traditional defamation, so long as they can prove publication, reference, and serious harm to reputation. In this country, publishers of defamatory AI-generated statements, such as the companies operating these platforms, can in principle be sued under the Defamation Act 2013 if their output causes serious harm to a person’s reputation.
Both AI text and deepfake image/video content are treated as potentially actionable if the claimant can satisfy the threshold of serious harm and identify the publisher.
Another route available to potential claimants is provided by data protection law (the UK GDPR and the Data Protection Act 2018). These regimes allow individuals to challenge false or inaccurate personal data produced by AI platforms, which may lead to the material being rectified or removed.
At EU level, the Digital Services Act (Regulation (EU) 2022/2065) is directly concerned with addressing the risks and challenges associated with content moderation, freedom of expression and the spread of disinformation and other forms of harmful speech online. The Regulation became fully applicable across the EU in early 2024 and, whilst not part of UK law, any company that is active in both UK and EU jurisdictions will fall within its scope. In other words, the DSA directly affects any UK company with an EU-facing digital presence and demands compliance as a condition of continued EU market access. However, it has not yet been tested in court for its application to defamatory harms caused by AI Large Language Models.
English defamation law sets a high threshold for serious harm and makes certain intermediaries less likely to be liable, but operators and publishers of AI are not protected in the same way as mere hosts of user content.
The Defamation Act 2013 applies to AI publishers in the UK much like it does to human authors, but with some important nuances regarding liability and defences:
If an AI system publishes defamatory content that causes or is likely to cause serious harm to someone’s reputation, the operator or publisher of that content could be liable under section 1 of the Defamation Act 2013, provided the legal requirements for defamation are met.
The claimant must show:
- The publication refers to them (i.e. they must be identifiable)
- The statement was communicated to a third party
- Serious harm to reputation resulted or is likely to result (following the standard in Lachaux v Independent Print Ltd [2019] UKSC 27)
Section 10 of the Act generally protects intermediaries (such as website hosts or platforms) from liability unless it is not reasonably practicable to sue the author, editor, or publisher.
Section 5 and accompanying regulations provide a notice-and-takedown regime: if a website promptly removes defamatory content after notification, it is shielded from liability as an operator.
However, if the AI publisher is the actual operator (e.g., the platform deploying the AI system for content generation), and it exercises editorial or publishing control, it may be treated as a primary publisher and could face direct liability.
AI publishers may invoke statutory defences under the Act, such as truth, honest opinion, or publication on a matter of public interest. But for the reasons set out in Part I of this post, where the AI output is a fabricated or false statement (like a deepfake), these defences are unlikely to succeed. This is because a non-human AI cannot prove subjective belief in truth or honest opinion.
In summary, the Defamation Act 2013 provides only limited immunity for companies facing libel suits over AI-generated content. Liability may be focused on those who have editorial control and knowledge, and a company running an AI system trained as a Large Language Model may disclaim such knowledge, but courts are unlikely to leave a claimant without a remedy if he or she can otherwise establish a meritorious case under section 1 of the Defamation Act. As Mr Tweed, the Belfast libel lawyer mentioned above, stated in his letter to Parliament:
“in my view, specific and immediate legislative intervention is required in order to pre-empt the potential for widespread damage to reputations.”
This post originally appeared on the UK Human Rights Blog and on its Substack and is reproduced with permission and thanks.

