The issue of AI-generated fake case citations, commonly referred to as ‘AI hallucinations,’ gained notable attention through the combined High Court cases of Ayinde v London Borough of Haringey and Al-Haroun v Qatar National Bank [2025] EWHC 1383 (Admin). However, it may be less widely appreciated that AI hallucinations have already been causing significant issues internationally for some time, particularly in the US. But it’s not just fake cases; AI hallucinations can take various forms. The Natural and Artificial Intelligence in Law Blog (‘NAIL Blog’), which explores the intersection between AI and law, has identified eight of the most common types of AI hallucination affecting the legal profession, and lawyers should ensure they are acquainted with each one.

Early cases and the international spread

The problem first came to prominence in the US case of Mata v Avianca 22-cv-1461 (PKC) SDNY 2023, which was extensively discussed among legal professionals. Approximately six months later, the UK’s First-tier Tribunal Tax Chamber encountered a similar incident in Harber v HMRC [2023] UKFTT 1007 (TC), where a litigant provided nine fictitious AI-generated decisions. The tribunal endorsed Judge Castel’s warnings in Mata, emphasising the ‘many harms that flow’ from AI hallucinations, including wasted resources, damage to judicial reputation, the undermining of authentic judicial precedents and increased cynicism towards the legal system.

Since then, international incidents involving AI hallucinations in court proceedings have increasingly appeared on blogs and social media, attracting significant interest from legal professionals observing the rapid spread across jurisdictions.

It was against this backdrop that the High Court in Ayinde emphasised the need for lawyers to clearly understand their responsibilities in the evolving context of AI. While the facts of Ayinde are well known and need not be repeated here, the court specifically considered suspected instances of generative AI producing fictitious case law, fabricated citations, and inaccurate statements of law. The court acknowledged AI’s powerful benefits in litigation (both criminal and civil) but stressed the significant associated risks. The administration of justice, the court explained, depends absolutely on the integrity of practitioners. The court emphasised lawyers’ duty to verify all materials, equivalent to checking research conducted by trainees. Surprisingly for some, this duty extends further, placing responsibility on those in leadership positions (such as heads of chambers) and regulatory bodies to implement practical and effective oversight measures. The court stated:

‘For the future, in Hamid hearings such as these, the profession can expect the court to inquire whether those leadership responsibilities have been fulfilled.’

Regrettably, AI hallucinations have continued to increase internationally, with over 50 reported incidents in July 2025 alone. There have also been at least two reported instances of judges relying on false cases or principles in their judgments (suspected, though not confirmed, to be AI-generated).

In the UK, at the time of writing, there have been two notable incidents of AI hallucinations following the decision of Ayinde:

  1. A UKIPO trade mark appeal (BL O/0559/25) on 20 June 2025, where both a litigant-in-person (LIP) and a trade mark attorney relied on AI-generated inaccuracies. The tribunal issued warnings to both, highlighting the attorney’s breach as particularly severe.
  2. HMRC v Gunnarsson [2025] UKUT 00247 (TCC) on 23 July 2025, involving an LIP who unintentionally relied on AI-generated submissions. The tribunal acknowledged lower culpability due to the LIP’s lack of legal expertise, unfamiliarity with professional standards, and competing pressures. Consequently, no sanction was imposed.

Judicial guidance and responsibilities

The senior judiciary in England and Wales has been addressing AI use for some time. In February 2025, the Master of the Rolls emphasised that:

‘[lawyers and judges] will have to [embrace AI] – and there are very good reasons why they should do so – albeit cautiously and responsibly…’

He noted that the England and Wales approach is more permissive than that of jurisdictions such as New South Wales, Australia, where procedural rules explicitly prohibit certain uses of AI, including the generation of evidential content.

The principal UK guidance is Artificial Intelligence (AI): Guidance for Judicial Office Holders (14 April 2025), published by the Courts and Tribunals Judiciary. This document distinguishes the responsibilities of lawyers from those of LIPs.

Lawyers have a professional duty to ensure submissions are accurate. They generally do not need to disclose AI use ‘provided AI is used responsibly’, but until fully familiar with AI tools, lawyers may need reminders to ‘independently verify the accuracy’ of AI-generated research or citations. For LIPs, the guidance acknowledges that AI chatbots may be the ‘only source of legal advice’ for some litigants who often lack the skills to verify their accuracy. Judges should therefore enquire if AI has been used, confirm what ‘accuracy checks (if any)’ were done and remind LIPs they remain ‘responsible for all material’ submitted. Judges should also be alert to the use of AI to produce ‘fake materials’, including ‘deepfake technology’.

Recently, Lord Justice Birss indicated in a keynote speech that similar guidance might soon be necessary for expert witnesses. Speaking about ongoing work by the Civil Justice Council, he stated:

‘we also have a separate strand of work now under way looking whether there need to be rules relating to the use of AI in the preparation of court documents – like pleadings, witness statements and expert’s reports…’

Amendments to procedural rules may follow shortly.

Guidance from professional regulators

In January 2024, the Bar Council published Considerations when using ChatGPT and generative artificial intelligence software based on large language models. This guidance highlights significant ethical concerns posed by convincing yet false AI-generated content, warning barristers explicitly not to trust such systems blindly. Inadvertently misleading the court constitutes incompetence and gross negligence, breaches core duties, and may result in disciplinary proceedings, professional negligence claims, and defamation or data protection issues.

Similar warnings were reiterated by the Law Society in its 20 May 2025 document, Generative AI: the essentials, which includes a detailed checklist for practitioners contemplating AI usage.

Ongoing risks and future of AI in the legal system

At present, in the UK at least, the consistent message is clear: lawyers must embrace AI cautiously and responsibly. While AI can be a valuable litigation tool, careless adoption poses severe threats to the justice system, warranting strict penalties.

However, upon continued examination of the international position through the legal trackers on the NAIL Blog,* several serious concerns arise:

  1. Despite increased awareness among legal professionals regarding the issue, AI-generated hallucinations continue to infiltrate court cases through submissions made by both qualified lawyers and LIPs at an alarming rate.
  2. With some judges now adopting AI assistance, judicial decisions have sometimes displayed inaccuracies indicative of AI-induced hallucinations. The dangers here parallel those posed when lawyers erroneously introduce false citations, threatening the reliability and integrity of legal decisions.
  3. Reported cases provide only partial insight into the problem. Concerns persist regarding courts such as County and Magistrates’ Courts, where decisions are infrequently reported and anecdotal evidence suggests the occurrence of AI hallucinations is significantly higher.
  4. Well-intentioned judges often cite hallucinated cases and their erroneous legal principles in full within official judgments, in order to show readers the extent of the problem. In doing so, however, they may inadvertently exacerbate the issue, because those AI-generated inaccuracies are indirectly integrated into the established legal canon.
  5. Given that AI usage within the legal profession will continue to grow, heightened vigilance is essential to safeguard the integrity of the justice system. Should this troubling trend persist, a fundamental reassessment of the legal framework may become necessary. The traditional system, which we strive diligently to preserve, may ultimately become unworkable.


References and further reading

Professional guidance

The Courts and Tribunals Judiciary, Artificial Intelligence (AI) – Judicial Guidance.

The Bar Council, Considerations when using ChatGPT and generative artificial intelligence software based on large language models.

The Law Society, Generative AI: the essentials.

Natural and Artificial Intelligence in Law Blog (‘NAIL Blog’)

‘The 8 Most Common Types of AI Hallucinations in Case Law’

‘7 Key Insights: Lord Justice Birss considers AI in civil justice – what expert witnesses and housing lawyers must know’

‘AI Case Tracker: legal trackers on AI worldwide’.

‘Judicial guidance on artificial intelligence: England & Wales v New South Wales (Australia)’.

‘Well-trained, experienced attorneys who work at a large, high-functioning, well-regarded law firm rely on fabricated legal authority – Johnson v Dunn’.

* AI case trackers

The NAIL Blog hosts live case trackers documenting how AI is reshaping legal systems around the world, including: the AI Hallucination Case Tracker, which monitors publicly reported cases involving AI hallucinations presented in court; the Judicial AI Use Tracker, which documents how judges utilise AI; and the Government AI Hallucination Tracker, which tracks instances where governments may have relied upon AI-generated hallucinations. See the trackers here.