Discussions around artificial intelligence (AI) in legal practice are often polarised. Some lawyers believe AI is the future of the profession, set to transform everything from legal research to drafting and case strategy. Others see it as unpredictable, risky and dangerous.
As with most technological shifts, the truth lies somewhere in between. AI is neither a magic wand nor an existential threat – it is a tool. Whether that tool is helpful or harmful depends on how it is used.
AI is evolving rapidly, with new tools emerging daily – each bringing fresh opportunities and concerns. Among the many issues, though, two stand out as particularly pressing for legal practitioners: confidentiality and accuracy.
A key concern in legal settings is the potential for confidential or privileged information to be inadvertently shared with an AI tool. Uploading documents or entering prompts without understanding where data is stored or how it might be used can lead to serious breaches.
Just as your professional obligations (and the General Data Protection Regulation) should make you think twice before uploading a confidential document to an online filesharing service without due diligence, you should be cautious about what data you share with any AI tool. In particular, you need to understand its data handling practices, storage locations and privacy terms.
A recent matter underscores this issue. New Scientist, via a Freedom of Information (FOI) request, obtained ChatGPT records linked to Peter Kyle, the UK Secretary of State for Science, Innovation and Technology. Among the (benign) queries – such as requests to define ‘antimatter’ and ‘quantum’ – lay a broader lesson: even public officials’ interactions with AI can become part of the public record. However, the response from the Department for Science, Innovation and Technology was limited somewhat by the FOI Act, which only requires the government to disclose information it holds.
For lawyers, the burden is arguably higher. Under the GDPR, clients can make Data Subject Access Requests, which may compel lawyers to disclose not only what information they hold, but also how and where AI was used to process it and who else it may have been disclosed to.
While you might, subject to your records and/or an AI tool’s history settings, be able to advise a client what information of theirs you processed with an AI tool, other questions may prove more challenging, such as: where, geographically, was the data stored and processed? Was it retained, or used to train the provider’s models? And to whom, beyond the provider itself, might it have been disclosed?
These aren’t merely theoretical questions – they strike at the heart of client trust and regulatory compliance.
Another core issue is accuracy. Generative AI tools can produce fabricated but plausible-sounding outputs – a phenomenon known as hallucination.
Two recent UK cases illustrate the risks: Ayinde v London Borough of Haringey and Bandla v Solicitors Regulation Authority.
In both, legal representatives relied on fictitious case citations generated by AI in their submissions. In Ayinde, Mr Justice Ritchie reminded practitioners that ensuring the accuracy of pleadings is their professional responsibility. In Bandla, Mr Justice Fordham imposed a wasted costs order on both the barrister and the solicitors involved.
These rulings reaffirm that while AI can assist, it cannot be relied upon blindly.
AI is not explicitly regulated for legal practice, but existing professional rules remain highly relevant.
The Bar Council has issued guidance on the use of generative AI at the Bar (due for an update but still relevant) that highlights the need for barristers to understand the technology they are using, safeguard client confidentiality and legal professional privilege, and verify any output before relying on it.
The Bar Standards Board (BSB) echoes these points, recommending that barristers think critically about how they leverage AI tools (‘ChatGPT in the Courts: Safely and Effectively Navigating AI in Legal Practice’, BSB, Blog, October 2023).
Addressing confidentiality concerns may mean restricting yourself to in-house tools or those with contractual guarantees over data processing.
Even then, developments aimed at reducing hallucinations (such as ‘Retrieval Augmented Generation’) could result in elements of your input spilling into unanticipated domains via background searches conducted to improve output accuracy. You should therefore check all the settings in the tool and speak to your IT advisers to find out how confidentiality concerns are addressed.
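To make that leakage risk concrete, here is a minimal sketch, in Python, of how a RAG-style pipeline can pass parts of your prompt to an external search service before the model ever answers. The search URL and the helper functions are illustrative placeholders under assumed behaviour, not any real tool’s API:

import requests

def extract_key_terms(prompt: str) -> str:
    # Crude stand-in for the tool's query-builder: in practice this is
    # where client names, facts or case details can be lifted from your prompt.
    return " ".join(word for word in prompt.split() if len(word) > 4)

def call_model(augmented_prompt: str) -> str:
    # Placeholder for the underlying language-model call.
    return f"[answer generated from {len(augmented_prompt)} characters of input]"

def rag_answer(prompt: str) -> str:
    terms = extract_key_terms(prompt)
    # Background search: prompt-derived terms leave your environment here,
    # travelling to a third-party index you may never have vetted.
    response = requests.get("https://search.example.com", params={"q": terms})
    return call_model(prompt + "\n\nRetrieved context:\n" + response.text)

The point of the sketch is the middle step: even if the model provider’s own terms are acceptable, the retrieval step may involve a different party with different terms.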
Thereafter, there is a lot you can do to integrate AI into legal workflows more safely and effectively.
I use a structured five-step approach – one I liken to ordering a pizza. This focuses on specifying what I need with precision. (After all, only a brave soul walks into a pizza restaurant and says: ‘One pizza, please.’ Besides a blank or sarcastic look, who knows what you’d get!)
1. Set expectations: ‘I need a draft termination clause for a services contract. But let me give you some specific requirements first.’
(Or: ‘I want a pizza. But let me give you some specifics first.’)
2. Clarify the scope of the task: for example, that the clause should allow termination for convenience and be governed by English law.
(Or: ‘The pizza is for two people.’)
3. Add detail that might shape the output, such as it being a rolling monthly contract and that the term should favour the service provider.
(Or: ‘I love olives.’)
4. Critically assess whether the output meets your requirements and professional standards.
(Or check: ‘Is it a pizza?’)
5. Elaborate each step as needed and reflect on how the tool responds. If at any time it becomes apparent that the tool has misunderstood, correct it before proceeding.
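The same five steps translate directly if you ever drive a tool through an API rather than a chat window. A minimal sketch, assuming a hypothetical client.ask() interface (no real product’s API is implied):

class HypotheticalClient:
    # Stand-in for whatever chat interface your AI tool actually exposes.
    def ask(self, message: str) -> str:
        return f"[tool's response to: {message[:50]}...]"

client = HypotheticalClient()

steps = [
    # 1. Set expectations.
    "I need a draft termination clause for a services contract. "
    "Let me give you some specific requirements first.",
    # 2. Clarify the scope.
    "The clause should allow termination for convenience and be governed by English law.",
    # 3. Add detail that shapes the output.
    "It is a rolling monthly contract; the terms should favour the service provider.",
]

for step in steps:
    reply = client.ask(step)
    print(reply)  # 4. Critically assess each reply, and
                  # 5. correct the tool before moving on if it has misunderstood.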
Steps 4 and 5 require real engagement. To support this process, keep probing and validating the tool’s output: at each step, see what new authorities and responses are returned and use these to inform your own judgement, keeping in mind that you cannot delegate legal responsibility to an algorithm.
As highlighted in the BSB’s recent report Technology and Innovation at the Bar, emerging tools like automated drafting, document review, AI-based research and blockchain offer opportunities for transforming legal services – but also carry risks. Lawyers must therefore engage with AI – not by blindly adopting or rejecting it, but by approaching it critically, intelligently and responsibly.
Guidance on generative AI for the Bar, Bar Council, January 2024.
‘ChatGPT in the Courts: Safely and Effectively Navigating AI in Legal Practice’, Bar Standards Board, Blog, October 2023.
Technology and Innovation at the Bar, Research Report for the Bar Standards Board, Spinnaker Research & Consulting, March 2025.