The use of automation and artificial intelligence (AI) in public decision-making has expanded significantly over the past few years, and enthusiasm for its continued growth has not waned. In January this year, the government announced its intention to ‘mainline AI into the veins of this enterprising nation’. However, increased reliance on AI in the public sphere has consistently been accompanied by serious concerns from civil society about a lack of transparency and accountability, as well as threats to equality and fundamental rights.

The DWP is far from exempt. According to official statistics published in February, the DWP administered the claims of some 24 million benefits claimants between August 2023 and August 2024. In March this year, Andrew Western MP, Minister for Transformation at the DWP, stated in Parliament that over an unspecified period:

‘58 automations have been deployed across the DWP, with 38 of them currently active. These automation processes have handled a total of 44.46 million claims and saved 3.4 million staff hours.’

It is imperative that these processes are appropriately scrutinised, particularly in the context of long-standing concerns about societal gender and racial biases being embedded in AI machine-learning models (Caliskan et al, 2017).

Examples of automation and AI at the DWP

There are a range of available definitions of ‘AI’ and ‘automated decision-making’ (ADM). Broadly, AI refers to computer systems designed to operate with some autonomy, using inputs and training data to generate outputs. ADM refers to algorithmic processes used to make decisions, either with input from a human (semi-ADM, or a ‘human-in-the-loop’ system) or without any human input. ‘Automation’ might refer to AI or ADM, but may also refer more generally to any use of technology to replace work previously undertaken by a human.

The DWP has been deploying a variety of digital tools to assist with the administration of claims, decision-making processes and fraud and error detection. Since around 2022, the DWP has used the Advances Model, a prediction-based machine learning tool trained on DWP data about past fraudulent claims for Universal Credit (UC) advance payments, to predict which fresh claims for advance payments may be fraudulent. The algorithm assigns claimants a risk score, flagging those above the relevant threshold for review by DWP staff.
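Although the model’s internals have not been published, the description above implies a familiar score-and-threshold pattern. The Python sketch below illustrates that pattern only: the features, model class and threshold value are illustrative assumptions, not details of the actual Advances Model.

```python
# Illustrative sketch only: the Advances Model's features, model class and
# threshold are not public, so everything below is a hypothetical stand-in.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Stand-in training data: rows are past advance-payment claims, columns are
# hypothetical features; labels mark claims later found to be fraudulent.
X_train = rng.normal(size=(1_000, 4))
y_train = rng.integers(0, 2, size=1_000)

model = GradientBoostingClassifier().fit(X_train, y_train)

RISK_THRESHOLD = 0.8  # hypothetical; the real threshold is not published

def triage(new_claims):
    """Return indices of claims scoring above the risk threshold.

    Flagged claims go to staff for review (human-in-the-loop);
    the model itself decides nothing.
    """
    scores = model.predict_proba(new_claims)[:, 1]  # estimated P(fraud)
    return [i for i, s in enumerate(scores) if s > RISK_THRESHOLD]

flagged = triage(rng.normal(size=(100, 4)))
print(f"{len(flagged)} of 100 claims flagged for manual review")
```

The point of the sketch is the division of labour it encodes: the model only ranks claims, and any consequential decision is reserved for a human reviewer.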

Other tools used include data-matching systems, which highlight discrepancies between data held by the DWP and other government agencies to identify fraud and error. The DWP also operates the Housing Benefit Award Accuracy Initiative, a risk scoring tool which aims to gauge the accuracy of housing benefit awards (DWP, 2025). The Initiative generates a risk score for every housing benefit recipient: claimants are profiled and flagged, and a list of flagged claimants is sent to local authorities for review. Finally, the DWP’s Integrated Risk and Intelligence Service (IRIS), which is assisted by some of the models and tools outlined above, flags cases identified as having a high fraud risk for review by the risk review team.
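The data-matching element can likewise be pictured as a simple comparison of records held by different bodies. The sketch below is illustrative only; the record fields, tolerance and agency sources are invented, and the DWP’s actual matching rules are not public.

```python
# Illustrative sketch only: data matching of the kind described above.
# Hypothetical earnings figures held for the same claimant IDs by two
# different bodies; the fields and tolerance are invented.
dwp_records  = {"A1": 1_200, "A2": 0,     "A3": 850}
hmrc_records = {"A1": 1_200, "A2": 1_450, "A3": 900}

TOLERANCE = 50  # hypothetical: ignore small timing differences

def find_discrepancies(a, b):
    """Return claimant IDs whose figures disagree by more than TOLERANCE."""
    return [cid for cid in a.keys() & b.keys()
            if abs(a[cid] - b[cid]) > TOLERANCE]

# Flagged cases are passed to staff for review, not decided automatically.
print(find_discrepancies(dwp_records, hmrc_records))  # ['A2']
```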

Further tools are being piloted for future use, including machine learning tools to detect other forms of fraud and error in UC around self-employed earnings, housing, living together and capital. However, the full extent of the DWP’s implementation of these technologies and processes is not publicly known: over 30 civil society organisations have recently raised concerns about this lack of transparency (Public Law Project et al, 2023).

Lack of transparency

Opacity remains a key obstacle to understanding how the DWP is operating these tools. As of September 2025, the Public Law Project’s Tracking Automated Government (TAG) register has recorded nine known automated tools used for welfare administration. Six of these tools are classed as ‘low transparency’, with minimal information publicly available about them, and three are classed as ‘medium transparency’. The opacity extends to how the models themselves operate. Child Poverty Action Group’s 2023 report, You Reap What You Code, highlights that the DWP has not made the computer code for the digital UC system publicly available, despite this being a requirement of the UK government Service Standard led by the Government Digital Service. The Algorithmic Transparency Recording Standard is now also mandatory for all government departments (see the Government Digital Service blog, 8 May 2025).

Potential avenues for legal challenge

A range of legal challenges is likely to arise from the use of AI and ADM, particularly where their use conflicts with public law principles, the Equality Act 2010, the Human Rights Act 1998 and data protection laws.

First, an inability to explain a decision may give rise to a number of procedural challenges, such as a failure to provide adequate reasons where required by law or where fairness requires. In addition to the requirement for explainability under the UK GDPR, benefits regulations also provide that claimants are entitled to receive a written explanation of decisions which carry a right of appeal. Further, the Public Sector Equality Duty requires public authorities to have ‘due regard’ to equality considerations in public decision-making processes. Unless this has expressly been taken into account in the formulation of the relevant algorithms, it will be difficult for the DWP to demonstrate due regard, particularly if it cannot explain how a decision was made. Indeed, a lack of transparency about whether a decision has been taken using ADM processes may itself give rise to a challenge.

Second, a lack of accountability and oversight of ADM systems by a human decision-maker may amount to an unlawful fettering of discretion or an unlawful delegation of authority. For instance, the Social Security Act 1998, s 2, permits the use of computers in making decisions (or determinations on the way to a decision), but presumes human responsibility for the operation of those computers. Of course, this Act was passed in 1998, long before the mainstream use of AI and large language models. However, if tools are developed in which decisions arise from fully automated processes with limited human oversight, this may indicate a lack of responsibility contrary to the statute. Claimants may also object to being subjected to solely automated decisions under art 22 UK GDPR, although whether a decision arises from a solely automated process may be disputed.

Finally, the substance of the decision may be flawed by an inherent bias, giving rise to a claim of discrimination under the Equality Act 2010 or under Article 14 of the European Convention on Human Rights, read with Article 1 of Protocol 1, or it may rest on an irrational consideration rendering it unlawful. In principle, there could be systemic challenges concerning the profiling of claimants who are flagged for review or suspected of fraud. For instance, the DWP’s fairness analyses indicate bias in relation to those detected by the Advances Model, with statistically significant referral and outcome disparities based on age, reported illness, nationality and couple status (Big Brother Watch, 2025).
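By way of illustration, a ‘statistically significant referral disparity’ of the kind reported can be demonstrated with a standard chi-squared test on referral counts for two groups. The figures below are invented, and the test shown is not necessarily the methodology used in the DWP’s fairness analyses, which have not been published in full.

```python
# Illustrative sketch only: testing whether referral rates differ
# significantly between two groups. All counts are invented.
from scipy.stats import chi2_contingency

# Rows: group A, group B; columns: referred, not referred.
observed = [[120, 880],   # group A: 12% referral rate
            [ 60, 940]]   # group B:  6% referral rate

chi2, p_value, dof, expected = chi2_contingency(observed)

print(f"referral rates: {120 / 1_000:.1%} vs {60 / 1_000:.1%}")
print(f"chi-squared p-value: {p_value:.1e}")  # well below 0.05 here
```

A result like this says only that the disparity is unlikely to be chance; whether it amounts to unlawful discrimination is a separate legal question.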

Making decisions well

The social security system is an essential lifeline for millions of people. It is imperative for these millions that social security decisions are not just made efficiently but are made well, transparently and in a manner consistent with equality, data protection and human rights laws. 



References and links

Caliskan A, Bryson JJ and Narayanan A, ‘Semantics derived automatically from language corpora contain human-like biases’ (2017) 356(6334) Science 183–186, doi: 10.1126/science.aal4230

DWP, ‘A7/2025: Housing Benefit Award Accuracy Initiative for the financial year ending March 2026’, updated July 2025

Public Law Project et al, Key principles for an alternative AI white paper, June 2023

Public Law Project, Tracking Automated Government (TAG) register

Child Poverty Action Group, You Reap What You Code, 2023

UK Government, Service Standard 12: Make new source code open

Big Brother Watch, Suspicion by Design: What we know about the DWP’s algorithmic black box and what it tries to hide, July 2025