Recent years have seen the emergence of new technological tools which promise to change the way we work and live. English courts were busy in 2023 applying copyright law to generative AI systems, blockchain formats and graphical user interfaces. This subject matter raises big questions about what copyright protects, what it should protect, and how it should do it.

Copyright law gives authors a long-lasting monopoly over the copying of their works. It does not protect ideas, but protects the identifiable expression of ideas, or ‘works’. Two key characteristics must be present for a ‘work’ to be protectable by copyright as a literary or artistic work: originality and specificity.

A work must be original in the sense of being its author’s ‘own intellectual creation’: Case C-5/08 Infopaq International A/S v Danske Dagblades Forening [2009] ECR I-6569. The author must have been able to express their creative abilities in the production of the work by making free and creative choices so as to stamp the work created with their personal touch.

The work must also be expressed in a manner which makes it identifiable with sufficient precision and objectivity, so that authorities and individuals can identify clearly and precisely what the work is. English law requires that a work be ‘fixed’ by being ‘recorded, in writing or otherwise’ before copyright will subsist in it: s 3(2) of the Copyright, Designs and Patents Act 1988 (the CDPA).

Fixation seems easy to show for digital ‘works’ – after all, a digital asset is ‘recorded’ as soon as it is loaded into a device’s memory. But originality is trickier – is a work truly original if it is ‘created’ not by a human but by a computer, or made from building blocks designed by someone else? Or does that question ignore the fact that all creativity, human or not, builds on things that already exist?

Recent case law

In Wright v BTC Core, Dr Wright, who claims to be Bitcoin’s creator Satoshi Nakamoto, is seeking to enforce his claimed copyright in the Bitcoin File Format as a literary work. At first instance the High Court ([2023] EWHC 222 (Ch)) decided that Wright had not demonstrated the fixation of the Bitcoin File Format and so had no real prospect of establishing that copyright subsisted in it. However, Arnold LJ held on appeal ([2023] EWCA Civ 868) that the High Court had confused the fixation of a work with the work itself. Copyright in a literary work protects the work as an intangible abstraction; it does not protect the particular tangible medium in which that work happens to have been fixed. There is no need to show a causal chain between that fixation and any infringing copying of the work. There is no requirement even that the fixation be permanent. All Wright needs to show is that the Bitcoin File Format has been completely and unambiguously recorded at some time.

This can give odd results. As Arnold LJ pointed out, an author can improvise an original literary work orally before an audience, and if someone else records it on their phone the work is ‘fixed’ for copyright purposes. A member of the audience performing the work from memory could be infringing the author’s copyright, even if they – and the author – have no idea that the work has been recorded. But until there is a fixation, there is no copyright protection.

As to originality, as long as the test is satisfied even a very low level of creativity can be enough to make something original. In THJ Systems v Sheridan [2023] EWCA Civ 1354, a software system produced risk and price charts designed using pre-designed components, like building from Lego bricks. The design of the charts was original enough for copyright protection as a graphic work. The degree of visual creativity was low, but there was some creativity in setting out the design. Low creativity does not rule out copyright protection. It does, however, narrow the scope of protection so that only a close copy would infringe. The test for originality is an objective one, in which artistic merit is not relevant. Nor is the functionality of the software. But while software code, and the layout and design of visual elements, can be protected by copyright, the functionality, ideas and principles underlying them cannot – because a ‘work’ is an expression of an idea, not the idea itself.

Deep learning AI models, which are ‘trained’ by ingesting large quantities of information and generate new content in response to user prompts, are the subject of litigation worldwide for alleged copyright infringements both in their ‘training’ and in their output. One high-profile case concerns the Stable Diffusion image generation platform. Getty Images asserts that Stable Diffusion was trained using Getty’s collection of high-quality images and rich metadata, and that the training infringed Getty’s rights. In Getty Images v Stability AI [2023] EWHC 3090 (Ch), the High Court considered how – and whether – the law applies to infringing copies delivered digitally with no physical medium.

Knowingly selling, letting for hire or importing an ‘article which is … an infringing copy’ into the UK can be ‘secondary’ infringement of copyright. Getty complains that Stable Diffusion is an infringing copy brought into the UK. Stability argues that Stable Diffusion is software made available on a website and not a physical, tangible ‘article’. If it is not an ‘article’, Stability argues, it falls outside the statutory definition of secondary infringement.

Joanna Smith J refused to give summary judgment on this point, but noted that reproduction of a work for the purposes of the CDPA can be in any material form, including ‘storing the work in any medium by electronic means’. If Stability’s argument were right, there would be secondary infringement if a copy of Stable Diffusion were brought into the UK on a memory stick, but no secondary infringement if the same software were delivered via the cloud, because no physical object had been imported. Now that most digital content is delivered online or via the cloud, the question of whether an infringing copy must be tangible makes a very big difference to the risk of infringement in the UK by importing digital content made overseas.

Different jurisdictions

As to the output of generative AI systems, different jurisdictions take different views on whether copyright should protect AI-generated content at all. The U.S. Copyright Office has refused to register a wholly computer-generated image because ‘authorship’ requires human creation, but it allows works created with the assistance of AI to be registered, with only the human-created parts being protectable. The Beijing Internet Court, however, has ruled that AI-generated content can be protected by copyright, because the originality in the prompt provided by the human means that the resulting output is the individual expression of that human author.

Digital tools and concepts move incredibly fast, and the law can struggle to keep up. English courts at least are never slow to address new questions with the tools they have. Originality and specificity remain central to what copyright does and how it works, and for the technical side of the digital world the existing tools are holding up. For the philosophical side – whether human input is needed to make something original – time (and legislation) will tell. 
