Brief summary

The Online Safety Act 2023 aims to make the UK ‘the safest place to be online’, yet it allows platforms to rely heavily on AI moderation systems to meet their statutory duties. This risks leaving serious harm unrecognised and weakening or delaying regulatory enforcement.

How and why did you pick this topic?

I was struck by the growing reliance on automated systems to assess online harm, often without clear visibility into how those systems reach their decisions.

When the law depends on processes it cannot properly scrutinise, accountability begins to erode. I wanted to examine how this gap arises within the Online Safety Act 2023, and whether greater transparency could be introduced without undermining the Act’s objectives.

The proposal ultimately focused on a simple question with far-reaching consequences: how can the law regulate harm it cannot see?

The full essay is reproduced below.