Artificial intelligence promises to revolutionise the way in which healthcare is provided. In the future, the exercise of clinical judgement will no longer be the exclusive preserve of human beings. Indeed, this trend is already well under way. Among many other uses, AI is already being developed to predict cancer from mammograms, to monitor skin moles for signs of disease and to perform invasive surgery autonomously.

The disruption caused by these innovations is unlikely to be confined to the practice of medicine. It will also affect the practice of medical law. Some of the possible implications for clinical negligence litigation are discussed below.

The duty of care

At what point will healthcare providers have a positive duty to use AI to provide care? It may be argued in future that the advantages of using AI solutions are so stark that it is irresponsible, or even illogical, not to use them. Alternatively, from the perspective of informed consent, it might be argued that patients should (at least) be counselled as to the benefits of deploying AI for diagnostic or surgical purposes. The strength of these arguments will turn upon the speed of uptake of AI in different areas of medicine. Guidance given by organisations like NICE and the Royal Colleges is also likely to be important.

From the perspective of healthcare defendants, the imposition of a duty to use AI might well raise resourcing issues. New technology can be expensive. Difficult judgements will need to be made about the allocation of limited budgets to maximum advantage. It may be argued that decisions whether or not to allocate scarce resources to AI are not justiciable.

The standard of care

An unspoken assumption of the law of negligence is that people are fallible. To be human is to err. Therefore, perfect clinical judgement is not expected from doctors. Liability is imposed only where a doctor has failed to take ‘reasonable care’. On the other hand, the promise of AI is that machines do not suffer from the same imperfections. Their memory and processing speed are far superior. They do not experience workplace fatigue. They can perceive patterns in data that would be invisible to the human eye or mind. This raises some interesting questions about the standard of care to be expected.

What happens if, for example, an AI system makes recommendations that would not be supported by a responsible body of practitioners? At first blush the answer looks simple: guidance from a computer is not a substitute for the exercise of clinical judgement. A doctor should always make decisions that accord with a ‘responsible body’ of practitioners.

However, early adopters might argue that the very purpose of AI is to expose flaws in conventional medical wisdom. For example, by analysing big data, AI might identify concerning patterns in breast imaging that most radiologists would not consider anomalous. Or it might identify drug combinations for cancer that leading oncologists would consider counter-intuitive. One could argue that it would be wrong to deprive patients of the benefits of such insights. Perhaps the answer is that patients should be given the right to choose whether to accept virtual or corporeal advice. At some point, the courts will have to grapple with this tension between conventional wisdom and computing power.

A related question is whether AI will change the way in which the standard of care itself is conceptualised. To be defensible at common law, medical practice must align with a ‘responsible body of medical practitioners’ (Bolam) and be capable of withstanding ‘logical analysis’ (Bolitho). It is not easy to apply that test to guidance provided by a non-human intelligence, which may reach its conclusions for reasons that are not readily intelligible to human beings.

Liability of individual doctors

Should healthcare practitioners be liable where treatment assisted by AI goes wrong? On a conventional analysis, the answer to that question looks straightforward. AI could be viewed as just another technological tool used by doctors to deliver treatment. Alternatively, where AI assists in the decision-making process, one could draw an analogy with a consultant supervising recommendations made by a junior doctor. However, these analogies may not be apposite, for two reasons.

First, AI-enabled systems might in future provide treatment autonomously. For example, a team at Johns Hopkins University has developed a robot that has performed laparoscopic surgery on the tissue of a pig without the guiding hand of a human: the Smart Tissue Autonomous Robot (STAR).

Second, there is the ‘black box’ problem. We see the input and the output, but what happens in between can be a mystery. It may be impossible for a human to understand in real time why an AI system is making any particular decision or recommendation. This may be due to the amount and complexity of the data being processed, the speed of processing, or the fact that AI does not use natural human language to parse data.

Accordingly, the courts might well conclude that it is unfair to make individual healthcare practitioners responsible for the real-time operation of AI. Putting the matter another way, the use of AI may be considered more analogous to making a referral to a specialist than to overseeing a junior doctor.

Liability of healthcare institutions

If individual practitioners are not responsible, the courts may look for ways to make healthcare institutions liable. There are different ways in which this could be achieved. One would be to impose a conventional duty of care to ensure that ‘equipment’ is functioning properly. Such a duty might include obligations to audit, test and maintain AI systems in line with standards imposed by manufacturers and regulatory bodies such as the Medicines and Healthcare products Regulatory Agency (MHRA).

Alternatively, Parliament might consider it necessary to impose a form of strict liability on healthcare providers for harms caused by medical AI. This is because reliance upon purely fault-based liability would place an unreasonable burden upon claimants. Where matters go wrong, the ‘black box’ problem makes it very difficult for a claimant to pinpoint how an error has arisen and who (if anybody) might be responsible for it.

The policy justification for imposing strict liability is arguably similar to that for imposing vicarious liability. Employers already bear the financial risk of harm inherent in the provision of healthcare by their employees. Similarly, healthcare institutions should bear the financial risk of harm from deploying AI to provide medical care. Institutions are also better able to insure against the risk of harm where AI goes wrong.

The practice of clinical negligence

If strict liability is not imposed, the need to establish fault will surely disrupt the way in which lawyers litigate clinical negligence claims. Errors could arise from the conduct of a wide range of actors for a wide range of reasons. Those responsible for mishaps might include software developers, data inputters, manufacturers, maintenance engineers and clinical technicians. The requirements for disclosure and expert evidence in such claims would bear little resemblance to those in a conventional clinical negligence claim.

Such claims are also likely to require solicitors, barristers, and judges with new kinds of expertise. As AI forges new frontiers in healthcare, it is also likely to reshape the contours of clinical negligence law. Like the medical profession, the legal profession and judiciary will need to prepare and adapt.