Aaron Trebble, IP legal director at Lewis Silkin.
Copyright infringement
The seemingly unstoppable rise of generative AI systems (which can be trained to create new text, images and other content) has sparked extensive legal debate in many areas, including the potential for copyright infringement which might occur:
- during AI model training, where copies of third-party texts or images may be used to develop and test the system; and
- when the AI model is instructed to create a new work, which could reproduce the whole or part of the third-party works on which it was trained.
The practical risk will depend on the technical features of the AI system and how it is configured. For example, training could be limited to public domain and licensed works, and in any case may involve the system processing data relating to the works rather than permanently storing the works themselves. However, even where the works are not stored, any temporary reproduction of them during processing could still infringe copyright (unless they are in the public domain or used under licence).
How could this be avoided?
After training, a system could be configured, for example, to avoid reproducing the style of the works on which it was trained. Given the processing steps involved, it may also be difficult in some cases to identify the part of any original work allegedly copied to create the new one.
If copying occurs, specific defences may be hard to find under UK law. The position may be different in other jurisdictions not considered in this article, such as the US, where a less restrictive doctrine of “fair use” exists.
Sections 28A and 29A of the Copyright, Designs and Patents Act 1988 (CDPA) are most likely to be relevant to generative AI system training. These permit, respectively:
- the making of transient or incidental copies of a work to enable lawful use of it; and
- computational analysis of a work’s contents solely for non-commercial research (so-called text and data mining, or ‘TDM’).
For those training commercial AI systems, however, it may be difficult to rely on either of these exceptions.
Where a trained AI system is used to create outputs that mimic the style of a particular artist, a defence of pastiche under Section 30A of the CDPA could be available, although to date there is limited case law to assist with the interpretation of this defence.
A landmark case
IP specialists are closely following Getty Images (US) Inc v Stability AI Ltd (High Court claim no. IL-2023-000007). If this case reaches trial, the judgment will be the first to consider the infringement of copyright (as well as database right and trade marks) relating to the use of AI in the UK.
A key part of Stability AI’s defence appears to be that the AI system training took place outside the UK and therefore did not infringe UK copyright (which, like all IP rights, is territorial in scope). Notwithstanding this factual point, the judgment is expected to provide valuable guidance on a range of issues, including primary and secondary infringement of copyright and the available defences.
Whilst this case rumbles on, the EU Artificial Intelligence Act (2021/0106(COD)) will soon come into force. Among its wide-ranging measures, the act will oblige those providing AI services in the EU to comply with EU copyright law. The EU takes a more permissive approach to TDM than the UK, permitting it for commercial purposes unless the copyright owner has opted out.
The act will require suppliers to implement measures to respect any such opt-out. This means that UK AI developers will need to respect opt-outs if they later want to release their systems into the EU. AI providers will also need to disclose the data used to train their systems, which could give copyright owners important transparency as to the use being made of their works.
IP lawyers and their clients can hopefully look forward to some valuable judicial guidance on the infringement risks involved in the use of AI. This will help to crystallise some unresolved legal issues in this challenging and interesting field. For now, the general advice remains that any business looking to work with generative AI should establish a written policy designed to identify and manage the risks of inadvertently infringing third-party IP, and seek to allocate risk appropriately in supplier and customer contracts.