*/
Do chatbot providers owe a duty of care for negligent misstatements? Jasper Wong suggests that the principles applicable to humans should apply equally to machines
If I release a chatbot and market it to users as an intelligent assistant, do I owe a duty of care to customers in respect of any negligent misstatements it makes? The (pre-ChatGPT) leading UK Supreme Court authorities, Manchester Building Society v Grant Thornton UK LLP [2021] UKSC 20 and Khan v Meadows [2021] UKSC 21, suggest the answer is yes, even though the issue awaits authoritative judicial ruling.
Chatbots have become part of our everyday lives. Few of us can afford to avoid them entirely. Yet, so far, this basic question appears to be entirely open.
The only authority directly on point seems to be Moffatt v Air Canada, 2024 BCCRT 149, a Canadian Civil Resolution Tribunal decision regarding a negligent statement by an airline’s chatbot, where the tribunal member held there was a duty of care without further reference to authority.
The lack of authority, in itself, is not altogether surprising. In the common law tradition, some seemingly basic questions remain unresolved. Suppose a thief steals my bike, to which I have no particular emotional attachment. Suppose (for example, because of a technical error in the prosecution) the criminal charge is dismissed and I successfully sue the thief in the civil courts. Am I entitled to have the specific bike back, or am I only entitled to monetary compensation?
While subject to frequent academic debate, the question remains unresolved, not least because in practice the issue is resolved otherwise than by litigation.
The issue of liability for a chatbot’s negligent misstatement may be similarly impractical to litigate. In the absence of organised strategic litigation, it is hard to see how an individual who suffered a negligent misstatement by (say) ChatGPT would find it worthwhile to take OpenAI to court.
But if the AI optimists are correct, this situation would not remain for long. Businesspeople and politicians constantly claim that chatbots will take jobs away and replace human employees.
For such chatbots, it stands to reason that the relationship between users and providers will be governed by contract law (including statutory controls on unfair terms). But it is well established that, in these situations, the adviser/information provider also owes a parallel duty of care in tort.
Assuming no specific legislation steps in to address the issue (for example, due to political disagreement), would the common law impose a duty of care on chatbot providers for negligent misstatements?
The leading cases on the principles behind duty of care in negligent misstatements are the UK Supreme Court cases of Manchester Building Society v Grant Thornton UK LLP [2021] UKSC 20 (MBS v Grant Thornton) and Khan v Meadows [2021] UKSC 21.
MBS v Grant Thornton sets down the authoritative six-step analysis for the court to establish whether a duty of care exists in a novel situation, while Khan v Meadows confirms that the six-step test is applicable beyond negligent financial advice (and covers, among others, negligent medical advice, as in that case).
The analysis below therefore focuses on the six-step test:
Among these steps, it is likely that the second step is the most difficult to grapple with.
For the first step, there are established categories of loss or damage that are actionable in negligence. Relevant to chatbots, this would include liability for negligent misstatements, the classic examples being negligent valuations, legal advice and medical advice. On the other hand, there are also established categories where the adviser owes no duty of care in negligence, for example, auditors to potential buyers of a company.
In most cases, the third, fourth, fifth and sixth questions are likely to be fact-specific and dependent on the precise facts involved.
By contrast, what exactly falls within a chatbot’s scope of duty is likely to be a mixed question of fact and law. It will depend heavily on the representations made by the provider of the chatbot as to what it can do, and what the court holds to be the purpose of the representations.
If marketed as an ‘intelligent assistant’, there is an argument that the purpose of the chatbot is to provide a level of service comparable to a human secretary, and that information and advice to that level fall within the provider’s scope of duty.
There is an argument that this reflects the policy rationale to allocate responsibility for any loss incurred by the user in a way that fairly reflects the assumption of risk implicit in the service the provider agreed to supply.
But which representations made by the chatbot are relevant to defining the scope of duty?
At one extreme, it may be that only contractual terms and express, non-modifiable warnings displayed by the provider define the scope of duty. But this is too limited. If a human being offers legal or medical advice in detail and for a fee, it would be odd for her scope of duty in tort to be reduced to nothing simply by a general disclaimer.
At the other extreme, it may be that any representations made by chatbots are capable of defining the scope of duty. Under this analysis, the moment a chatbot makes representations amounting to legal or medical advice, any harms arising from such advice fall within its scope of duty.
But, as is now well known, chatbots sometimes ‘say’ surprising things when deliberately manipulated. There is a discipline called ‘prompt injection’ which expressly aims to ‘jailbreak’ and elicit such behaviour.
While much will depend on the facts of the specific case, it is suggested that the court will avoid either extreme and apply a purposive approach in considering all representations that could define the scope of duty.
In a lecture to the Professional Negligence Bar Association in 2024, the Master of the Rolls, Sir Geoffrey Vos, mused on AI’s potential impact on professional negligence, both in the (then) current stage of development and when the ‘machine age’ is further advanced.
If the occasion arises, it will be of interest to see how the English courts wrestle with the scope of duty issues involved in the unmediated use of chatbots for the provision of information and advice.

In Moffatt v Air Canada, 2024 BCCRT 149, the British Columbia Civil Resolution Tribunal found in favour of the consumer’s claim of negligent misrepresentation. It held that Air Canada was responsible for all the information on its website, whether a static webpage or via a chatbot, and awarded damages of approximately $650 CAD. Read more on the American Bar Association website.
Manchester Building Society v Grant Thornton UK LLP [2021] UKSC 20
Khan v Meadows [2021] UKSC 21
Moffatt v Air Canada, 2024 BCCRT 149
‘Damned if you do and damned if you don’t: is using AI a brave new world for professional negligence?’, Speech by the Master of the Rolls, Sir Geoffrey Vos, to the Professional Negligence Bar Association on 22 May 2024:
If I release a chatbot and market it to users as an intelligent assistant, do I owe a duty of care to customers in respect of any negligent misstatements it makes? The (pre-ChatGPT) leading UK Supreme Court authorities, Manchester Building Society v Grant Thornton UK LLP [2021] UKSC 20 and Khan v Meadows [2021] UKSC 21, suggest the answer is yes, even though the issue awaits authoritative judicial ruling.
Chatbots have become part of our everyday lives. Few of us can afford to avoid them entirely. Yet, so far, this basic question appears to be entirely open.
The only authority directly on point seems to be Moffatt v Air Canada, 2024 BCCRT 149, a Canadian Civil Resolution Tribunal decision regarding a negligent statement by an airline’s chatbot, where the tribunal member held there was a duty of care without further reference to authority.
The lack of authority, in itself, is not altogether surprising. In the common law tradition, some seemingly basic questions remain unresolved. Suppose a thief steals my bike, to which I have no particular emotional attachment. Suppose (for example, because of a technical error in the prosecution) the criminal charge is dismissed and I successfully sue the thief in the civil courts. Am I entitled to have the specific bike back, or am I only entitled to monetary compensation?
While subject to frequent academic debate, the question remains unresolved, not least because in practice the issue is resolved otherwise than by litigation.
The issue of liability for a chatbot’s negligent misstatement may be similarly impractical to litigate. In the absence of organised strategic litigation, it is hard to see how an individual who suffered a negligent misstatement by (say) ChatGPT would find it worthwhile to take OpenAI to court.
But if the AI optimists are correct, this situation would not remain for long. Businesspeople and politicians constantly claim that chatbots will take jobs away and replace human employees.
For such chatbots, it stands to reason that the relationship between users and providers will be governed by contract law (including statutory controls on unfair terms). But it is well established that, in these situations, the adviser/information provider also owes a parallel duty of care in tort.
Assuming no specific legislation steps in to address the issue (for example, due to political disagreement), would the common law impose a duty of care on chatbot providers for negligent misstatements?
The leading cases on the principles behind duty of care in negligent misstatements are the UK Supreme Court cases of Manchester Building Society v Grant Thornton UK LLP [2021] UKSC 20 (MBS v Grant Thornton) and Khan v Meadows [2021] UKSC 21.
MBS v Grant Thornton sets down the authoritative six-step analysis for the court to establish whether a duty of care exists in a novel situation, while Khan v Meadows confirms that the six-step test is applicable beyond negligent financial advice (and covers, among others, negligent medical advice, as in that case).
The analysis below therefore focuses on the six-step test:
Among these steps, it is likely that the second step is the most difficult to grapple with.
For the first step, there are established categories of loss or damage that are actionable in negligence. Relevant to chatbots, this would include liability for negligent misstatements, the classic examples being negligent valuations, legal advice and medical advice. On the other hand, there are also established categories where the adviser owes no duty of care in negligence, for example, auditors to potential buyers of a company.
In most cases, the third, fourth, fifth and sixth questions are likely to be fact-specific and dependent on the precise facts involved.
By contrast, what exactly falls within a chatbot’s scope of duty is likely to be a mixed question of fact and law. It will depend heavily on the representations made by the provider of the chatbot as to what it can do, and what the court holds to be the purpose of the representations.
If marketed as an ‘intelligent assistant’, there is an argument that the purpose of the chatbot is to provide a level of service comparable to a human secretary, and that information and advice to that level fall within the provider’s scope of duty.
There is an argument that this reflects the policy rationale to allocate responsibility for any loss incurred by the user in a way that fairly reflects the assumption of risk implicit in the service the provider agreed to supply.
But which representations made by the chatbot are relevant to defining the scope of duty?
At one extreme, it may be that only contractual terms and express, non-modifiable warnings displayed by the provider define the scope of duty. But this is too limited. If a human being offers legal or medical advice in detail and for a fee, it would be odd for her scope of duty in tort to be reduced to nothing simply by a general disclaimer.
At the other extreme, it may be that any representations made by chatbots are capable of defining the scope of duty. Under this analysis, the moment a chatbot makes representations amounting to legal or medical advice, any harms arising from such advice fall within its scope of duty.
But, as is now well known, chatbots sometimes ‘say’ surprising things when deliberately manipulated. There is a discipline called ‘prompt injection’ which expressly aims to ‘jailbreak’ and elicit such behaviour.
While much will depend on the facts of the specific case, it is suggested that the court will avoid either extreme and apply a purposive approach in considering all representations that could define the scope of duty.
In a lecture to the Professional Negligence Bar Association in 2024, the Master of the Rolls, Sir Geoffrey Vos, mused on AI’s potential impact on professional negligence, both in the (then) current stage of development and when the ‘machine age’ is further advanced.
If the occasion arises, it will be of interest to see how the English courts wrestle with the scope of duty issues involved in the unmediated use of chatbots for the provision of information and advice.

In Moffatt v Air Canada, 2024 BCCRT 149, the British Columbia Civil Resolution Tribunal found in favour of the consumer’s claim of negligent misrepresentation. It held that Air Canada was responsible for all the information on its website, whether a static webpage or via a chatbot, and awarded damages of approximately $650 CAD. Read more on the American Bar Association website.
Manchester Building Society v Grant Thornton UK LLP [2021] UKSC 20
Khan v Meadows [2021] UKSC 21
Moffatt v Air Canada, 2024 BCCRT 149
‘Damned if you do and damned if you don’t: is using AI a brave new world for professional negligence?’, Speech by the Master of the Rolls, Sir Geoffrey Vos, to the Professional Negligence Bar Association on 22 May 2024:
Do chatbot providers owe a duty of care for negligent misstatements? Jasper Wong suggests that the principles applicable to humans should apply equally to machines
Chair of the Bar finds common ground on legal services between our two jurisdictions, plus an update on jury trials
A £500 donation from AlphaBiolabs has been made to the leading UK charity tackling international parental child abduction and the movement of children across international borders
Marie Law, Director of Toxicology at AlphaBiolabs, outlines the drug and alcohol testing options available for family law professionals, and how a new, free guide can help identify the most appropriate testing method for each specific case
By Louise Crush of Westgate Wealth Management
Marie Law, Director of Toxicology at AlphaBiolabs, examines the latest ONS data on drug misuse and its implications for toxicology testing in family law cases
An interview with Rob Wagg, CEO of New Park Court Chambers
The deprivation of liberty is the most significant power the state can exercise. Drawing on frontline experience, Chris Henley KC explains why replacing trial by jury with judge-only trials risks undermining justice
Ever wondered what a pupillage is like at the CPS? This Q and A provides an insight into the training, experience and next steps
The appointments of 96 new King’s Counsel (also known as silk) are announced today
Resolution of the criminal justice crisis does not lie in reheating old ideas that have been roundly rejected before, say Ed Vickers KC, Faras Baloch and Katie Bacon
With pupillage application season under way, Laura Wright reflects on her route to ‘tech barrister’ and offers advice for those aiming at a career at the Bar