The issue of AI-generated fake case citations, commonly referred to as ‘AI hallucinations’, gained notable attention through the combined High Court cases of Ayinde v London Borough of Haringey and Al-Haroun v Qatar National Bank [2025] EWHC 1383 (Admin). What may be less widely appreciated is that AI hallucinations have been causing significant problems internationally for some time, particularly in the US, and that they are not limited to fake cases: AI hallucinations can take various forms. The Natural and Artificial Intelligence in Law Blog (‘NAIL Blog’), which explores the intersection between AI and law, has identified the eight most common types of AI hallucination affecting the legal profession, and lawyers should ensure they are acquainted with each one.
The problem first came to prominence in the US case of Mata v Avianca 22-cv-1461 (PKC) (SDNY 2023), which was extensively discussed among legal professionals. Approximately six months later, the UK’s First-tier Tribunal (Tax Chamber) encountered a similar incident in Harber v HMRC [2023] UKFTT 1007 (TC), where a litigant provided nine fictitious AI-generated decisions. The tribunal endorsed Judge Castel’s warnings from Mata, emphasising the ‘many harms that flow’ from AI hallucinations, including wasted resources, damage to judicial reputation, the undermining of authentic judicial precedents and increased cynicism towards the legal system.
Since then, international incidents involving AI hallucinations in court proceedings have increasingly appeared on blogs and social media, attracting significant interest from legal professionals observing the rapid spread across jurisdictions.
It was against this backdrop that the High Court in Ayinde emphasised the need for lawyers to clearly understand their responsibilities in the evolving context of AI. While the facts of Ayinde are well known and need not be repeated here, the court specifically considered suspected instances of generative AI producing fictitious case law, fabricated citations, and inaccurate statements of law. The court acknowledged AI’s powerful benefits in litigation (both criminal and civil) but stressed the significant associated risks. The administration of justice, the court explained, depends absolutely on the integrity of practitioners. The court emphasised lawyers’ duty to verify all materials, equivalent to checking research conducted by trainees. Surprisingly for some, this duty extends further, placing responsibility on those in leadership positions (such as heads of chambers) and regulatory bodies to implement practical and effective oversight measures. The court stated:
‘For the future, in Hamid hearings such as these, the profession can expect the court to inquire whether those leadership responsibilities have been fulfilled.’
Regrettably, AI hallucinations have continued to increase internationally, with over 50 reported incidents in July 2025 alone. There have also been at least two reported instances of judges relying on false cases or principles in their judgments (suspected, though not confirmed, to have been AI-generated).
In the UK, at the time of writing, there have been two notable incidents of AI hallucinations following the decision of Ayinde:
The senior judiciary in England and Wales has been addressing AI use for some time. In February 2025, the Master of the Rolls emphasised that:
‘[lawyers and judges] will have to [embrace AI] – and there are very good reasons why they should do so – albeit cautiously and responsibly…’
He noted that the approach in England and Wales is more permissive than that of jurisdictions such as New South Wales, Australia, where procedural rules explicitly prohibit certain AI uses, including the generation of evidential content.
The principal UK guidance is Artificial Intelligence (AI): Guidance for Judicial Office Holders (14 April 2025), published by the Courts and Tribunals Judiciary. It differentiates the responsibilities of lawyers and litigants in person (LIPs).
Lawyers have a professional duty to ensure submissions are accurate. They generally do not need to disclose AI use ‘provided AI is used responsibly’, but until fully familiar with AI tools, lawyers may need reminders to ‘independently verify the accuracy’ of AI-generated research or citations. For LIPs, the guidance acknowledges that AI chatbots may be the ‘only source of legal advice’ for some litigants, who often lack the skills to verify its accuracy. Judges should therefore enquire whether AI has been used, confirm what ‘accuracy checks (if any)’ were done and remind LIPs that they remain ‘responsible for all material’ submitted. Judges should also be alert to the use of AI to produce ‘fake materials’, including ‘deepfake technology’.
Recently, Lord Justice Birss indicated in a keynote speech that similar guidance might soon be necessary for expert witnesses. Speaking about ongoing work by the Civil Justice Council, he stated:
‘we also have a separate strand of work now under way looking whether there need to be rules relating to the use of AI in the preparation of court documents – like pleadings, witness statements and expert’s reports…’
Amendments to procedural rules may follow shortly.
In January 2024, the Bar Council published Considerations when using ChatGPT and generative artificial intelligence software based on large language models. This guidance highlights the significant ethical concerns posed by convincing yet false AI-generated content and explicitly warns barristers not to trust such systems blindly. Inadvertently misleading the court could amount to incompetence and gross negligence, breach core duties and result in disciplinary proceedings, professional negligence claims, or defamation and data protection issues.
Similar warnings were reiterated by the Law Society in its 20 May 2025 document, Generative AI: the essentials, which includes a detailed checklist for practitioners contemplating AI use.
At present, in the UK at least, the consistent message is clear: lawyers must embrace AI cautiously and responsibly. While AI can be a valuable litigation tool, careless adoption poses severe threats to the justice system, warranting strict penalties.
However, upon continued examination of the international position through the legal trackers on the NAIL Blog,* several serious concerns arise:
The Courts and Tribunals Judiciary, Artificial Intelligence (AI) – Judicial Guidance.
The Bar Council, Considerations when using ChatGPT and generative artificial intelligence software based on large language models.
The Law Society, Generative AI: the essentials.
‘The 8 Most Common Types of AI Hallucinations in Case Law’.
‘AI Case Tracker: legal trackers on AI worldwide’.
‘Judicial guidance on artificial intelligence: England & Wales v New South Wales (Australia)’.
The NAIL Blog hosts live case trackers documenting how AI is reshaping legal systems around the world, including: the AI Hallucination Case Tracker, which monitors publicly reported cases involving AI hallucinations presented in court; the Judicial AI Use Tracker, which documents how judges utilise AI; and the Government AI Hallucination Tracker, which tracks instances where governments may have relied upon AI-generated hallucinations. See the trackers here.