Microsoft’s ‘Copilot Chat’ is now being rolled out on judicial office holders’ devices. The tool is intended to assist judges, who remain personally responsible for material produced in their name.

Concerns over the potential unfairness, discrimination and opacity of AI are not new. But is enough awareness being raised about how technology-facilitated misogyny and abuse against women and girls may affect the use and performance of these technologies in the justice system?

The AI Guidance for Judicial Office Holders (31 October 2025) warns users to ‘be aware of bias’. The judiciary is already well used to addressing bias, but emerging technologies present new challenges. This article focuses on gender bias, though bias can of course take many forms (and many of the issues raised also apply to AI use in the justice system more generally).

How: machines can regurgitate or amplify gender inequality

Generative Artificial Intelligence (GenAI) is the type of AI that creates content (e.g., Copilot, ChatGPT and other models that respond to written prompts). These models can do so because they have consumed huge amounts of data during their training; they then use that input to generate their own content. However, GenAI models risk regurgitating, or worse amplifying, the harms and inequalities inherent in the material they have consumed.
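To make this concrete, the short Python sketch below (purely illustrative, and not drawn from the judicial guidance or any study cited in this article) shows one simple way learned associations can be probed in an openly available language model. It assumes the Hugging Face ‘transformers’ library and the public ‘bert-base-uncased’ model; exact outputs vary by model and prompt, but heavily skewed pronoun probabilities of this kind are one elementary signal of bias absorbed from training data.

  # Illustrative only: probing a small open-source masked language model for
  # gendered associations. Assumes the Hugging Face 'transformers' library and
  # the public 'bert-base-uncased' model (assumptions for this sketch, not
  # tools referred to in this article).
  from transformers import pipeline

  fill = pipeline("fill-mask", model="bert-base-uncased")

  templates = [
      "The judge said [MASK] would deliver the verdict tomorrow.",
      "The nurse said [MASK] would be late for the shift.",
  ]

  for template in templates:
      print(template)
      for candidate in fill(template, top_k=5):
          # Each candidate carries the predicted token and its probability;
          # skewed pronoun probabilities are one simple signal of learned bias.
          print(f"  {candidate['token_str']:>8}  p={candidate['score']:.3f}")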

A vivid example of machines absorbing what they are fed is ‘Tay’, an experimental chatbot Microsoft released in 2016. Microsoft’s aim was to ‘experiment with and conduct research on conversational understanding’, with Tay able to learn from its conversations and become progressively ‘smarter’. Within hours, the bot started spouting foul racist and sexist abuse (‘I f*cking hate feminists and they should all die and burn in hell’… ‘Hitler was right I hate the jews’). Microsoft promptly took it offline.

Within 24 hours, the experiment had exposed the risk of AI absorbing and regurgitating human prejudice, which in turn risks normalising it online.

In 2024, UNESCO carried out a study of popular generative AI platforms, including OpenAI’s GPT-3.5 and GPT-2, and Meta’s Llama 2. It found ‘unequivocal evidence of bias against women in content generated’, as well as homophobia and racial stereotyping.

Awareness of this needs to grow, because generative AI uses machine learning techniques to keep creating new data with characteristics similar to the data it was trained on. And online harms are getting worse, with a disproportionate impact on women, girls and minority groups.
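The feedback-loop concern can be illustrated with a deliberately simplified, entirely hypothetical simulation (my own sketch, not based on any cited study, and with the function name train_and_generate invented for the purpose): a ‘model’ learns the proportion of stereotyped statements in its training data, generates new text with a slight tilt towards the majority pattern, and then sees its own output fed back in as training data, so the skew compounds over successive generations.

  # Toy simulation only: a hypothetical model that amplifies the skew in its
  # training data, with its output fed back in as new training data.
  import random

  random.seed(1)

  def train_and_generate(corpus, n_outputs, amplification=1.1):
      """Estimate the share of stereotyped items (1s), then sample new items
      with that share slightly amplified (capped at 100%)."""
      stereotyped_share = sum(corpus) / len(corpus)
      generation_share = min(1.0, stereotyped_share * amplification)
      return [1 if random.random() < generation_share else 0
              for _ in range(n_outputs)]

  # Start with a corpus in which 30% of items carry a gender stereotype.
  corpus = [1] * 300 + [0] * 700

  for generation in range(5):
      outputs = train_and_generate(corpus, n_outputs=1000)
      corpus = corpus + outputs  # model output re-enters the training data
      share = sum(corpus) / len(corpus)
      print(f"After generation {generation + 1}: {share:.0%} of the data is stereotyped")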

Why: increasing extreme misogyny online

In August 2024, the Home Office announced that extreme misogyny would be treated as a form of extremism.

As part of ongoing attempts to tackle online harms, the Parliamentary Research Briefing ‘Violence against women and girls in schools and among children and young people’, dated 7 August 2025, summarised recent studies about inequality online as follows:

  • A 2023 Ofcom study found that 79% of young people aged 13-17 used GenAI, compared with 31% of adult users aged 16 and above;
  • GenAI is being used as a tool to perpetrate widespread violence against women and girls. The organisation ‘Internet Matters’ discussed ‘the new face of digital abuse’ in its 2024 report on AI-generated sexual imagery. The report called the issue an ‘epidemic’ and found that 13% of teenagers had reported an experience involving nude deepfakes in British schools;
  • A 2025 Ofcom report noted that the manosphere (a group of online communities and influencers associated with topics of masculinity, relationships and gender) includes extreme misogyny and incel culture, is becoming more mainstream, and can radicalise young men through algorithmic mechanisms;
  • Girlguiding reported that 60% of boys see manosphere content through algorithms;
  • Harmful online behaviours are spilling into schools, with widespread sexual harassment and the normalisation of such conduct. Ofsted found that 9 in 10 girls had experienced sexist name-calling or been sent unsolicited sexual images;
  • Girls in social care and the criminal justice system are among the most vulnerable. Black girls are more likely to experience adultification and online abuse.

Journalist Laura Bates, in her book The New Age of Sexism: How the AI Revolution is Reinventing Misogyny (Simon & Schuster UK, 2025), cites Baroness Helena Kennedy: ‘the law has always trailed behind social change’ and ‘women have always had to struggle to get the law to deliver for them’. Baroness Kennedy gave her the example of the decades-long effort to make the courts recognise coercive control as a form of domestic abuse; Bates compares this with the emotional trauma of experiencing sexual violence in an environment like the metaverse, suggesting that it may take a long time for such harms to be recognised in law.

When: types of cases vulnerable to AI gender bias

Where machines assist judges by summarising evidence or producing a first draft judgment in cases involving controlling or coercive behaviour, online harms or discrimination, the output will require close scrutiny. Those types of cases may also be more vulnerable to fabricated case law or to the machine misunderstanding the law, particularly where the law is complex or developing.

In any instance where the machine is invited to be creative with text, it may not use inclusive language.

There is a clear need to balance the speed, efficiency and social benefits AI has to offer against caution and awareness of concerns such as the potential for bias. For now, at least, AI tools assist the judiciary; they are not a replacement for direct judicial engagement with the evidence. Any user of AI must keep the risk of inherent bias in mind and be ready to correct it, particularly in cases involving data that is vulnerable to bias.

Debate before too late

There is growing global interest in how AI can improve access to justice, including how it can increase court capacity. The government’s policy paper ‘AI action plan for justice’, dated 31 July 2025, promises to ‘embed AI across justice services… while still preserving the right to have any final determination made by a human judge’.

With reference to rapidly developing AI technology, on 15 October 2025, Sir Geoffrey Vos gave a speech calling for a ‘serious debate now, before it is too late’, to consider: (a) what human rights people should have in the light of ever more capable AI, and (b) what decisions should remain in human hands.

How can the debate be helped along by the legal profession?

1. Filling knowledge gaps

Many members of the legal profession are now discussing AI, but there are few opportunities for them to do so in a way that enables collaboration with the technology experts who could fill gaps in their knowledge (for instance, about the likely effectiveness of de-biasing techniques). A lack of transparency in AI technology makes an informed discussion difficult for all of us; a lack of technical expertise in the subject matter makes it even harder. Confidentiality rings could be used, where appropriate, where there are sensitive data issues.

2. Greater diversity among those debating

According to the Alan Turing Institute, only 22% of AI and data professionals are women. The obvious risk is that this lack of diversity in the AI ecosystem results in harmful feedback loops of social bias being built into machine learning systems. Further, developers are unlikely to have expertise in law and human rights, and tools developed by private companies are more likely to be driven by market efficiency and profit than by equality and the rule of law. It is therefore all the more important that the legal profession advocates for everyone.

3. Bridging generation gaps

In May 2025, the Commons Home Affairs Committee opened an inquiry, ‘Combatting new forms of extremism’. In October 2025, the Metropolitan Police Service gave evidence that one of the factors contributing to the rise in misogynistic extremism is the growing divide between an extremely internet-literate younger generation and an older generation who are largely unaware of the risks and/or unable or unwilling to regulate use of the internet.

It is this ‘growing divide’ between the generations that must be bridged: it risks leaving vulnerable women and girls exposed, and boys radicalised, because those best placed to help them are often unaware of the extent of online harms and of how those harms can feed harmful algorithms and AI stereotyping, and generally contaminate the data that models are trained on. Views from the youngest members of the legal profession could be invaluable.

4. Closely scrutinising the ‘automated decision-making’ being rolled out in the justice system

AI, algorithmic and automated tools are increasingly being used across the public sector to make decisions affecting individuals. The Public Law Project is tracking examples of automated decision-making (ADM) in government on its ‘TAG register’.

The Public Law Project states that, as of 17 November 2025, there are 55 automated tools tracked on the register; 83.6% of these tools were only uncovered or more fully understood through the submission of FOIA requests.

On 5 November 2025, Lord Sales explored some of the ways in which administrative law (specifically, judicial review methodology) is and is not yet adequately suited to scrutinising the use of AI and ADM in the administrative sphere (‘AI and Public Law: Automated Decision-Making in Government’). He observed that the opacity of an ADM system can make bias very difficult to detect, at least until the system has produced a substantial number of decisions in which patterns can be identified. Courts and litigants could therefore become more proactive in identifying cases which raise systemic issues and marshalling them together in a composite procedure, using pilot cases or group litigation techniques.

There are already several well-known instances of opaque algorithms being used in administrative decision-making in the UK: the Home Office has used a ‘visa streaming’ tool to grade entry visa applications, assigning risk ratings that significantly affect application outcomes; the Department for Work and Pensions is using an opaque algorithm to determine which claims for universal credit to investigate; and local authorities have been applying algorithms that are difficult to scrutinise to support decisions on transportation, houses in multiple occupation, and children’s social care.

Ongoing analysis of the automated decision-making tools starting to be rolled out across the justice system, and of the adequacy of their transparency, will be crucial to informing the debate about the extended use of AI within the justice system.

5. More communication between the legal profession and the MoJ

Drawing the threads together, there need to be more mechanisms through which the legal profession can bring its experience, queries and concerns to the table for the MoJ to consider. Experience from the legal profession ought to help inform the design, development and intended use of AI in the justice system, ensuring that a modern, inclusive application of the rule of law and human rights is built in. Once bias is embedded in a machine, it is much harder to diagnose and resolve, and may be harder still to challenge.

In February, Sophie is speaking at the Westminster Legal Policy Forum, where stakeholders and policymakers will consider the ‘Next steps for AI use in the justice system and courts modernisation in England and Wales’. She welcomes readers’ input to help further the debate.


 

The Artificial Intelligence (AI) Guidance for Judicial Office Holders (31 October 2025) sets out key risks and issues associated with using AI – and some suggestions for minimising them.