Lawyers are embracing artificial intelligence (AI) at an unprecedented rate, driven by its promise of efficiency in tasks like contract drafting and legal research. However, this enthusiasm has given rise to a dangerous trend known as ‘shadow AI’: the use of personal or unapproved AI tools for work tasks without oversight. According to Axiom’s 2024 View from the Inside Report, 83% of in-house counsel use AI tools not provided by their organisations – and 47% do so without any governance policy in place. Stanford University’s 2025 AI Index Report highlights a 56% rise in AI-related incidents globally, with data leaks a primary concern. In the UK, Kiteworks research found that 47% of organisations admit they cannot track sensitive data exchanges involving AI, amplifying the risk of breaches. Alarmingly, one-third of European organisations estimate that 6% to 15% of their sensitive data could be exposed through AI interactions, yet only 22% have implemented technical controls to block unauthorised AI tool access.
All this comes at a time when the regulatory landscape continues to shift. The UK General Data Protection Regulation (UK GDPR), aligned with the EU’s GDPR, imposes stringent obligations on data processing, storage and cross-border transfers, with fines of up to £17.5 million or 4% of global annual turnover, whichever is higher, for violations. The UK’s forthcoming Cyber Security and Resilience Bill, which builds on the EU’s Network and Information Systems (NIS2) Directive and work carried out by the previous government, signals increased scrutiny of AI governance and greater regulation to come.
The legal and compliance risks of ungoverned AI use are profound. Data protection violations top the list. The UK GDPR requires organisations to establish a lawful basis for processing personal data, adhere to data minimisation principles and ensure security by design. Yet, when lawyers upload client data to consumer AI tools like ChatGPT or Claude, they relinquish control over that information: inputs may be retained by the provider, used to train future models or transferred to servers outside the UK, often without the safeguards the UK GDPR demands.
Confidentiality and privilege concerns are equally grave. Legal professional privilege, a bedrock of legal practice, can be waived when communications are shared with third-party AI providers. Trade secrets, merger strategies and intellectual property face similar risks, as AI platforms may inadvertently expose proprietary information through model outputs or data breaches, violating confidentiality agreements.
To mitigate these risks, organisations must establish a robust AI governance framework. Comprehensive AI usage policies should outline acceptable tools, data handling protocols and consequences for non-compliance, addressing confidentiality, privilege and data security. Regular risk assessments are vital to identify vulnerabilities, while a formal approval process ensures only secure, compliant AI platforms are used.
Technical controls are critical. Data classification systems should identify sensitive information before it is processed by AI tools. Access controls, such as role-based permissions and monitoring, can prevent unauthorised use of consumer AI platforms. An approved list of enterprise-grade AI tools, designed with legal and compliance requirements in mind, ensures efficiency without sacrificing security. These tools must integrate with existing cybersecurity infrastructure and incorporate data loss prevention measures to protect sensitive information.
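To illustrate how the classification step might work in practice, the sketch below shows a simple pre-submission gate that scans a prompt for sensitive identifiers before it leaves the organisation. It is a minimal sketch only: the patterns, the matter-reference format and the approved-tool list are hypothetical, and a real deployment would rely on an enterprise DLP product rather than hand-rolled rules.

```python
import re

# Minimal sketch of a pre-submission classification gate.
# The patterns, matter-reference format and approved-tool list below are
# illustrative assumptions, not any particular product's implementation.

SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    # UK National Insurance number (approximate format)
    "national_insurance": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.I),
    # Hypothetical in-house client/matter reference format
    "client_matter_ref": re.compile(r"\b[A-Z]{3}-\d{4,6}\b"),
}

APPROVED_TOOLS = {"enterprise-ai.example.internal"}  # hypothetical allow-list


def classify(text: str) -> list[str]:
    """Return the labels of any sensitive patterns found in the text."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]


def gate_submission(text: str, destination: str) -> bool:
    """Block prompts to unapproved tools; flag sensitive data everywhere."""
    hits = classify(text)
    if destination not in APPROVED_TOOLS:
        # Default for an unapproved destination is to block outright
        print(f"BLOCKED: '{destination}' is not an approved tool"
              + (f" (sensitive data detected: {', '.join(hits)})" if hits else ""))
        return False
    if hits:
        # Approved tool, but sensitive content still gets logged for review
        print(f"FLAGGED for review: {', '.join(hits)} detected")
    return True


if __name__ == "__main__":
    prompt = ("Please summarise the merger terms for client ABC-12345 "
              "(contact j.smith@example.com).")
    gate_submission(prompt, "chat.consumer-ai.example.com")  # blocked
    gate_submission(prompt, "enterprise-ai.example.internal")  # allowed, flagged
```

Even a gate this crude enforces the policy point made above: classification happens before the data reaches the tool, and the default for an unapproved destination is to block rather than to trust.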
Training and awareness underpin effective governance. Mandatory training for all lawyers and staff should cover the technical and legal risks of AI, including GDPR obligations and professional regulatory requirements.
The urgency is undeniable. Organisations must act now to audit AI usage, implement controls and educate their lawyers and staff. By balancing innovation with risk management, lawyers can protect sensitive data, uphold client trust and navigate a complex regulatory landscape. The legal profession is built on trust and diligence. In the AI era, these principles demand proactive governance to ensure technology serves as a tool for progress, not a source of peril.