On 4 November 2025, the UK High Court handed down its much-anticipated ruling in Getty Images v Stability AI (see here), with Stability AI scoring a notable victory that may have far-reaching ramifications.
Nonetheless, many questions remain unanswered as to how the IP regime should deal with the training and output of AI platforms, and how the interests of AI developers should be balanced against those of rights holders.
Background
Getty Images was founded in 1995 and is a pre-eminent global visual content creator and marketplace. Its business involves licensing millions of visual assets, including photographs, video footage and illustrations, as well as audio assets, to individuals and business users, such as newspapers, magazines, production companies, advertising agencies, banks, airlines, insurance companies and pharmaceutical companies, in more than 200 countries worldwide. Collectively, the content is held in a sophisticated, curated database.
Getty primarily obtains its content by acquiring the copyright or taking licences from the copyright holders, and makes it available through various websites (including gettyimages.com and istock.com). Getty holds several registered trade marks for GETTY IMAGES and ISTOCK.
Stability AI is a company specialising in generative AI tools for media generation and editing. One of Stability AI's platforms is Stable Diffusion, an AI platform incorporating model weights (the numerical parameters adjusted during training) which enable users to generate images from prompts. The platform was developed in Germany and launched in August 2022, with the model weights being made available for download in the UK via a website called 'Hugging Face'.
Getty initiated proceedings on 16 January 2023, alleging that Stability AI had infringed its copyright and trade mark rights via the 'scraping' of Getty's database to create and train the Stable Diffusion platform. Getty further alleged that some of the output produced by Stable Diffusion infringed its trade mark rights, particularly where Getty's 'watermarks' (marks used to protect digital images/content, protected as registered trade marks) had been reproduced.
After significant case management, the issues to be decided at trial were substantially narrowed, not least because the Stable Diffusion platform had been developed outside the UK using non-UK cloud servers.
The key matters to be decided at trial were as follows:
A. Was the Stable Diffusion platform an 'article' that was an 'infringing copy' imported into the UK (that is, one whose making in the UK would have infringed Getty's copyright), thus amounting to secondary infringement?
B. Had Stability infringed the GETTY IMAGES or ISTOCK trade marks under sections 10(1), 10(2) or 10(3) of the Trade Marks Act 1994, due to the presence of watermarks in certain Stable Diffusion outputs identified by Getty?
Secondary Copyright Infringement
Secondary copyright infringement concerns the importation of, possession of, or dealing with an "article" that is an "infringing copy" of a copyright work. Essentially, it is concerned with 'downstream' dealings in infringing works.
Getty argued that the Stable Diffusion model weights were an "article" that was an "infringing copy" of its copyright materials, given that those materials had been used in the training process, and that the weights had subsequently been imported into and distributed in the UK, thereby falling foul of sections 22 and 23 of the Copyright, Designs and Patents Act 1988.
Whilst the judge concluded that an AI model could be an 'article' for the purposes of secondary copyright infringement, it could not be an 'infringing copy' unless the model itself had, at some point, contained a copy of the copyright works used to train it.
In this case, none of the copying involved in training the Stable Diffusion platform took place in the UK. The copyright-protected works were used during training to adjust the model weights, but the trained platform did not store any of the copyright material itself.
Accordingly, Getty's claim for secondary infringement failed.
Trade Mark Infringement Claims
Getty brought claims of trade mark infringement after finding that outputs from the Stable Diffusion platform contained versions of its GETTY IMAGES and ISTOCK watermarks. Getty generated various examples using several categories of prompts.
In response, Stability argued that Getty had been unable to show that any UK user had encountered a generated image bearing a watermark that infringed Getty's registered trade mark rights; nor had Getty demonstrated that consumers would use the prompts that Getty had used to produce the examples of output bearing watermarks.
Whilst the judge concluded that it was impossible to quantify exactly how many UK users had encountered Getty's watermarks, and was not prepared to accept the section 10(3) claim, the judge found that Getty had successfully made out a case for trade mark infringement under sections 10(1) and 10(2) in respect of some of the examples produced. In some instances the watermarks were reproduced identically, whereas in others they appeared with slight alterations.
The criteria for 'post-sale' confusion recently set out in Iconix v Dream Pairs (see discussion in our previous article here) proved pivotal, given that there could be no confusion at the point of sale (i.e., when the Stable Diffusion platform was accessed). Nonetheless, the judge concluded that an average consumer confronted with a Getty watermark on Stable Diffusion output might assume that there was some connection, agreement or arrangement with Getty (when in fact there was not).
Nevertheless, the judge acknowledged that the findings were 'historic' but 'extremely limited in scope', given that it was impossible to quantify the extent of the infringement on the facts.
Concluding Remarks
The decision affirms that the mere training of an AI model using copyright works, without storing or reproducing those works in the model itself, does not amount to secondary copyright infringement under UK law.
Getty may well explore litigation in other territories, given that it could not show that any 'primary' acts of copyright infringement had occurred in the UK (or, at least, was not sufficiently confident to take those claims to trial). The case serves as a reminder of the potential complexity of IP disputes, particularly where the relevant activities span multiple jurisdictions.
The findings in relation to trade mark infringement serve as a warning to AI platforms that they may be liable for infringement where output from their platforms reproduces registered trade marks (even when that output is primarily down to user input). Nonetheless, it is easy to imagine that AI platforms could be trained to steer clear of producing output that infringes registered trade marks (particularly marks incorporating words and logos).
Whilst the decision will no doubt influence future copyright and trade mark cases centred on AI platforms, many questions remain unanswered. Getty ultimately dropped its primary copyright infringement claims before trial and, had the training of the AI model (and therefore the copying of the copyright material) taken place in the UK, the outcome could have been very different. The UK government continues to consult the creative industries and other stakeholders as it grapples with the possibility of reform. Keeping all parties happy in this complex sphere will be extremely difficult.