Copyright and AI in the UK: the balancing act


It is no secret that copyright-protected creative works (including newspaper articles, novels, music and images) are being used to train generative AI models. The issues are complex, but the battle lines are clearly drawn. Creatives are lobbying governments to protect their rights against what many see as an existential threat to the future of creativity itself. The widely publicized Statement on AI Training, with over 30,000 signatories, including high-profile writers, actors, and academics, has brought growing public attention to creators' perspective on the issue.

On the other side, AI companies are pushing for maximum freedom to let their algorithms train on existing material and 'turbo-charge' innovation. Microsoft CEO Satya Nadella likened AI training to learning a topic from a textbook, arguing that companies should be given free rights over data to train their models. Like many countries, the UK government is in the spotlight as it seeks to reconcile the conflicting interests of the groups hoping to shape the legislation governing this rapidly developing area.

Rajvinder Jagdev

Partner at Powell Gilbert.

What is the current UK position?

To date, the UK has taken a light-touch approach to the intellectual property issues surrounding artificial intelligence. For instance, the Copyright, Designs and Patents Act 1988 (CDPA), drafted well over thirty years ago, remains the primary source of legislation in this area. The CDPA gives copyright owners the right to prevent original creative works from being copied, distributed, or performed without permission. Although the CDPA has been amended over the years, it has not yet been updated to account for the AI age. As it stands, this means that unauthorized copying of protected works to train AI models for commercial ends is not allowed. This contrasts with the EU position, where copying for commercial purposes is allowed unless the rights holder has opted out, and with the US, where AI developers can seek to rely on the "fair use" exemption.

In practice, enforcing this restriction in an AI context is challenging. For a start, it is hard to know whether any particular work has been used without access to the training data set for each system. Even if it is established that copyright-protected works were used to train an AI model, a rights holder must still establish that copying of the work occurred in the jurisdiction. To have a chance of success in such proceedings, legal practitioners must properly understand the technology underlying the allegedly infringing AI model. Although training data is necessarily copied initially (e.g. into RAM), in most cases the AI model does not store a copy of the raw data once it has been fed in. Instead, the AI's neural network evolves in response to the training data. Without access to records, it is difficult to establish the identity of the data set (and any protected works within it) used in training, although in some cases tell-tale features in the output may provide clues.

Both of these issues, copying and jurisdiction, are in dispute in Getty v Stability AI, where Getty asserts that Stability AI has infringed its IP rights, both through the alleged use of its images as training data and through generated image outputs that bear the Getty watermark. The trial is due to take place in June 2025, and it will be interesting to see how the UK Court addresses these issues.

Across the pond, similar cases are pending, including the parallel US proceedings in the Getty case. The New York Times has brought a claim against OpenAI in the federal courts in New York, including a demand for the destruction of AI models that used its content as training data. The outcome of these cases could drastically reshape the relationship between AI companies and news outlets with respect to copyright.

How might UK policy change?

It is expected that the UK government will address some of the contentious topics surrounding AI in the soon-to-be-published Artificial Intelligence Opportunities Action Plan. This is likely to propose changes to the CDPA to address the use of copyright-protected works to train AI models. The UK Prime Minister, Sir Keir Starmer, has indicated in a recent statement that the Action Plan will include rights for publishers to maintain control over, and be paid for, content used in training. These changes are long awaited. The issue of content creators' rights was debated in Parliament in 2021 following a private member's bill initiated by Labour MP Kevin Brennan, which proposed rights to remuneration for creators and a transparency obligation giving authors the right to be informed about how their works are being used.

The Brennan bill was inspired by the EU Directive on Copyright in the Digital Single Market, adopted in April 2019, and in particular its provision on authors' contracts, which specifies that authors are entitled to appropriate remuneration where they have transferred exclusive rights to the exploitation of their works. Although the Brennan bill did not progress, the government committed to examining how to secure revenue for creators, and the issue was considered as part of a consultation on AI and IP.

The previous government's response to the consultation, published in June 2022, stated that a new copyright and database exception would be introduced to permit text and data mining for any purpose, including training AI models. Creators were not to be given a right to opt out but would enjoy certain safeguards, such as the right to remuneration, for example via subscriptions providing lawful access to their works through a platform of their choice. The Prime Minister's comments indicate that this remains the direction of travel for legislation in the UK.

Given the uncertainty surrounding content holders' rights at this stage, some major rights holders are pre-empting legislative change by entering into licensing agreements with AI companies. For example, the Financial Times in the UK, Axel Springer in Germany, and Condé Nast in the US have each entered into commercial agreements with OpenAI, reportedly worth tens of millions of pounds per year. Under these deals, OpenAI is given access to the publications' content in exchange for a flat fee for historic content plus an ongoing annual fee for new content. This trend has emerged over the past year, and other media outlets are expected to follow suit.

Any changes proposed in this area of law will be significant for the UK IP landscape. So far, copyright law has evolved slowly in response to changing technology, with legislators preferring incremental amendments to the existing law even as many legal practitioners call for more holistic reform. The reforms needed to address the challenges raised by AI may well be the trigger, and it will be interesting to see how the upcoming AI Opportunities Action Plan addresses the issue.

In addition to copyright reforms, the EU recently enacted the EU AI Act, which governs, amongst other things, transparency and reporting requirements in relation to AI models. This legislation is the first of its kind in the world. Post-Brexit, stakeholders will be watching to see whether the UK diverges from the EU position and signals a different direction for UK AI and copyright law. Either way, the Action Plan should bring some much-welcomed clarity to this fast-developing area.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://anngonsaigon.site/news/submit-your-story-to-techradar-pro
