What if…? AI in legal workplaces
Date: 13 April 2026
If lawyers using AI forget it is just a tool, and stop being ethical, responsible lawyers, it will cause mayhem – Flora Page KC explores the legal and ethical pitfalls and offers essential guidance on developing good, safe AI practices in-house
As soon as lawyers start to play with artificial intelligence, they realise how powerful it is. If all you’ve done is have a go on ChatGPT, you won’t get it, because AI tools can be so much better than that.
The AI-assisted tools on the big legal research platforms get you to the cases or regulations you need in a flash. ‘Walled garden’ tools only search the documents you upload, excluding all the nonsense out there on the web, so you can really trust the answers you receive, because the footnotes will show you exactly where they came from. AI-generated drafts of documents are so much easier to adapt than old-fashioned precedents. The list goes on.
That’s why use is spreading rapidly, and that’s why it is vitally important for legal workplaces to adopt good, safe practices. In the absence of good, safe practices, individuals will still be using AI on their phones, or via the free AI tools that are now all over common workplace software, and this ad hoc use is risky.
The crucial point is this: AI is just a productivity tool. Human lawyers must remain responsible for the final output of any AI-assisted legal process. Ethics and emotional intelligence remain vitally important to being a good lawyer, and although AI tools can simulate these human attributes, only humans actually possess them.
So how do we ensure that AI helps our productivity as lawyers, while embedding good practice into every AI-assisted finished product?
Uses for in-house teams
AI is quicker and more consistent than humans when analysing high-volume data. Examples include:
- Contract lifecycle management (CLM): AI can automate the drafting, reviewing and tracking of contracts. It can flag non-standard clauses, identify potential risks and ensure compliance with company policies (a toy clause-flagging sketch follows this list).
- Disclosure, due diligence and legal research: Advanced search algorithms can sift through vast databases, identifying the needles in the haystack far more quickly than a lawyer.
- Predictive analytics: By analysing past litigation outcomes or regulatory trends, AI can help in-house counsel make more informed decisions about strategy and resource allocation.
- Running cases: Some legal teams are using agentic AI to run cases, by sending and responding to routine communications and generating standard documents in accordance with those communications.
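For readers who want to see the mechanics, here is a purely illustrative Python sketch of the kind of non-standard-clause flagging a CLM tool performs. It is not any vendor’s actual method: real products use trained language models, whereas this uses simple text similarity from the standard library, and the playbook and contract wording are invented.

```python
# Purely illustrative: flag contract clauses that stray from a playbook
# of approved standard wording. Real CLM tools use trained language
# models; this sketch uses basic text similarity to show the idea.
# All clause text below is invented.
from difflib import SequenceMatcher

PLAYBOOK = [
    "Either party may terminate this agreement on 30 days' written notice.",
    "Each party shall keep the other's confidential information confidential.",
]

def flag_non_standard(clauses, threshold=0.6):
    """Return (clause, score) pairs whose best playbook match falls below threshold."""
    flagged = []
    for clause in clauses:
        best = max(
            SequenceMatcher(None, clause.lower(), standard.lower()).ratio()
            for standard in PLAYBOOK
        )
        if best < threshold:
            flagged.append((clause, round(best, 2)))
    return flagged

contract = [
    "Either party may terminate this agreement on 30 days' written notice.",
    "The supplier may assign this agreement to any third party without consent.",
]
for clause, score in flag_non_standard(contract):
    print(f"REVIEW (best match {score}): {clause}")
```

The first clause matches the playbook exactly and passes silently; the unusual assignment clause scores poorly against every approved clause and is surfaced for human review.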
The pitfalls
If lawyers using AI forget it is just a tool, and stop being ethical, responsible lawyers, it will cause mayhem. These are just some of the problems we are already aware of:
- Hallucinations and accuracy: No doubt we have all read about lawyers who have relied on AI-generated citations or legal summaries without verification. There is no excuse for doing your legal research exclusively on free-to-use AI that scrapes the web. If you start off there, you must use more orthodox sources to verify your output. If you have the budget, the best approach is to use the AI functionality on the big legal research platforms, which only search verified data (a minimal citation-check sketch follows this list).
- Data privacy and confidentiality: Inputting sensitive company or client data into free-to-use public AI tools is an absolute non-starter. This is probably the most important thing to get a grip on if you work in an in-house team that has not yet created safe AI practices. The chances are that a junior member of your team has already revealed something confidential about a client or a case by asking ChatGPT a question.
- Quality standards: If an AI tool scours the web for answers, for every good example of a freely available legal document, it may find ten bad examples. That means the AI output will be bad. Even if you use a more bespoke AI tool, the quality of the output is dependent on the quality of the input, so the models used to train the AI are important. (Think about those terrible AI-generated poems and how much better they would be if the AI were only learning from the best poets.)
- Bias and fairness: AI models are trained on historical data, which may contain inherent biases. This can lead to skewed results in areas like employment law or risk assessment.
- The ‘black box’ problem: AI algorithms are complex and opaque, making it difficult to understand how they reached a particular conclusion. This means it is absolutely vital that lawyers do not use AI to reach conclusions as an independent, unsupervised process. If a lawyer has relied on a ‘black box’ process when reaching a legal decision, that reliance must be transparent, particularly in legal proceedings, so that all parties can consider its validity and fairness.
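Part of the answer to the first pitfall is to make verification routine rather than optional. The following is a minimal, hypothetical Python sketch of that habit: it assumes you hold a set of citations exported from a trusted source and lets nothing through unchecked. The citations and the verified set are invented for illustration.

```python
# Purely illustrative: check AI-suggested citations against a set of
# citations drawn from a trusted source before any reach a draft.
# VERIFIED_CITATIONS stands in for an export from a verified research
# platform; every citation below is invented.
VERIFIED_CITATIONS = {
    "[2021] UKSC 10",
    "[2022] EWCA Civ 456",
}

ai_suggested = ["[2021] UKSC 10", "[2024] EWHC 9999 (Ch)"]

for citation in ai_suggested:
    if citation in VERIFIED_CITATIONS:
        print(f"{citation}: verified")
    else:
        print(f"{citation}: NOT FOUND - a human must check this before use")
```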
How to develop good, safe AI practices
If you are integrating AI into a team, bear these pointers in mind:
- Focus on the purpose: Think about the work of the team, and the areas where AI could help to crunch through high-volume data or repetitive workflows. Identify where it can help with productivity. Ask the team where they are already using it, and what for, with a view to replacing ad hoc use of free-to-use public tools with products that fit the work your team does.
- Conduct rigorous due diligence on vendors: Ask about their data security measures, the source of their training data, and how they address bias and hallucinations.
- Establish clear policies and training: Develop internal guidelines on the acceptable use of AI, emphasising the need for ethics and emotional intelligence, and the importance of lawyers being transparent about the AI tools they use and not allowing AI to replace legal decision-making.
- Take it steady: Diarise times to review how the AI tools are bedding in, and dip-sample the output to make sure things are working as they should. If you are thinking of implementing an AI process that does not involve human oversight of the finished product, you need to do much more than read this article!
- Think ‘what if?’: This kind of counter-factual question remains an important human skill which AI has not yet cracked (some say it never will). When taking decisions based on AI conclusions, it is valuable to critically analyse them by asking counter-factual questions like: ‘Would the case still have the same chance of success if this assumption were changed?’ Counter-factual questions are also a valuable way of probing for possible bias in AI-generated conclusions (a toy illustration follows).
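To make the ‘what if?’ discipline concrete, here is a toy sketch in Python. The weighted scoring function stands in for whatever predictive tool is in use (a real one would be a trained model behind a vendor’s interface, not a three-line formula), and the features and weights are entirely invented. The point is the habit: change one assumption at a time and watch how the conclusion moves.

```python
# Purely illustrative: probe an AI-style prediction with counter-factual
# questions by changing one assumption at a time and re-scoring.
# The weights and case features are invented for illustration.
WEIGHTS = {
    "documentary_evidence": 0.4,
    "witness_credibility": 0.3,
    "favourable_precedent": 0.3,
}

def predicted_success(case):
    """Toy 'chance of success' score: weighted sum of 0-1 features."""
    return sum(WEIGHTS[f] * case[f] for f in WEIGHTS)

base_case = {
    "documentary_evidence": 0.9,
    "witness_credibility": 0.7,
    "favourable_precedent": 0.8,
}

baseline = predicted_success(base_case)
print(f"baseline chance of success: {baseline:.2f}")

# 'What if?' - knock out each assumption in turn and compare.
for feature in WEIGHTS:
    altered = dict(base_case, **{feature: 0.0})
    score = predicted_success(altered)
    print(f"if {feature} fell away: {score:.2f} ({score - baseline:+.2f})")
```

A conclusion that collapses whenever one assumption is removed deserves much closer scrutiny than one that degrades gracefully, and the same one-change-at-a-time probing helps expose bias hiding in a single input.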
The Bar Council updated its guidance on the use of ChatGPT and other generative AI large language model (LLM) systems in November 2025. Read the guidance on the Bar Council Ethics and Practice hub. See also the Bar Standards Board research on technology adoption at the Bar.