Lawyers are embracing artificial intelligence (AI) at an unprecedented rate, driven by its promise of efficiency in tasks like contract drafting and legal research. However, this enthusiasm has birthed a dangerous trend known as ‘shadow AI’, where personal or unapproved AI tools are used for work tasks without oversight. According to Axiom’s 2024 View from the Inside Report, 83% of in-house counsel use AI tools not provided by their organisations – and nearly half (47%) do so without any governance policy in place. Stanford University’s 2025 AI Index Report highlights a 56% rise in AI-related incidents globally, with data leaks a primary concern. In the UK, Kiteworks research found that 47% of organisations admit they cannot track sensitive data exchanges involving AI, amplifying the risk of breaches. Alarmingly, one-third of European organisations estimate that 6% to 15% of their sensitive data could be exposed through AI interactions, yet only 22% have implemented technical controls to block unauthorised AI tool access.
All this comes at a time when the regulatory landscape continues to shift. The UK GDPR, aligned with the EU’s General Data Protection Regulation, imposes stringent obligations on data processing, storage and cross-border transfers, with fines of up to £17.5 million or 4% of global annual turnover for violations. The UK’s forthcoming Cyber Security and Resilience Bill, building on the EU’s Network and Information Systems (NIS2) Directive and work carried out by the previous government, signals increased scrutiny of AI governance and greater regulation to come.
The legal and compliance risks of ungoverned AI use are profound. Data protection violations top the list. The UK GDPR requires organisations to establish a lawful basis for processing personal data, adhere to data minimisation principles and ensure security by design. Yet, when lawyers upload client data to consumer AI tools like ChatGPT or Claude, they relinquish control over that information.
Confidentiality and privilege concerns are equally grave. Legal professional privilege, a bedrock of legal practice, can be waived when communications are shared with third-party AI providers. Trade secrets, merger strategies and intellectual property face similar risks, as AI platforms may inadvertently expose proprietary information through model outputs or data breaches, violating confidentiality agreements.
To mitigate these risks, organisations must establish a robust AI governance framework. Comprehensive AI usage policies should outline acceptable tools, data handling protocols and consequences for non-compliance, addressing confidentiality, privilege and data security. Regular risk assessments are vital to identify vulnerabilities, while a formal approval process ensures only secure, compliant AI platforms are used.
Technical controls are critical. Data classification systems should identify sensitive information before it is processed by AI tools. Access controls, such as role-based permissions and monitoring, can prevent unauthorised use of consumer AI platforms. An approved list of enterprise-grade AI tools, designed with legal and compliance requirements in mind, ensures efficiency without sacrificing security. These tools must integrate with existing cybersecurity infrastructure and incorporate data loss prevention measures to protect sensitive information.
Training and awareness underpin effective governance. Mandatory training for all lawyers and staff should cover the technical and legal risks of AI, including GDPR obligations and professional regulatory requirements.
The urgency is undeniable. Organisations must act now to audit AI usage, implement controls and educate. By balancing innovation with risk management, lawyers can protect sensitive data, uphold client trust and navigate a complex regulatory landscape. The legal profession is built on trust and diligence. In the AI era, these principles demand proactive governance to ensure technology serves as a tool for progress, not a source of peril.