The Case That Was Supposed to Define AI Copyright — And Didn't
On 4 November 2025, the High Court of England and Wales handed down its long-awaited judgment in Getty Images (US) Inc & Ors v Stability AI Limited [2025] EWHC 2863 (Ch). Within hours, headlines declared a decisive victory for AI developers. Social media posts celebrated. Reddit threads announced the end of the copyright threat to generative AI.
Almost all of it was wrong.
The ruling — 205 pages delivered by Mrs Justice Joanna Smith DBE — is genuinely significant. It is the first UK judgment to directly examine how copyright and trademark law apply to generative AI models. But what most coverage missed, buried, or got entirely backwards is this: the most important question in the entire case was never answered. Getty dropped its headline claim before a verdict could be reached.
This case analysis cuts through the noise. It explains what the court actually decided, what it explicitly left open, and — most critically — what it means if you are a founder building AI products in the UK right now.
Need help? Our tools can help you identify potential IP conflicts before they become costly problems. Try a free scan →
Background: Two Companies, One Fundamental Question
Getty Images
Getty Images is one of the world's largest visual content providers. Founded in 1995 and publicly traded on the NYSE, Getty licenses access to hundreds of millions of photographs, videos, and illustrations contributed by approximately 600,000 creators worldwide. Its images are widely identifiable by their distinctive "GETTY IMAGES" and "ISTOCK" watermarks, both registered as trademarks in the UK.
Getty's business model depends on licensing. It invests significantly in organising, curating, and distributing content, and it generates revenue when that content is used commercially. Before this dispute arose, Getty had already offered and entered into licensing agreements with major technology companies for AI training purposes.
Stability AI
Stability AI is a UK-registered company founded in 2019. It developed Stable Diffusion, an open-source image synthesis model based on latent diffusion techniques. Stable Diffusion generates synthetic images in response to text and image prompts. It was released publicly in August 2022 and became one of the most widely used generative AI image models in the world.
Stable Diffusion was trained on large-scale datasets scraped from the internet, including what the parties agreed was content from Getty's websites. Critically, as the litigation would reveal, that training took place on cloud computing infrastructure located outside the United Kingdom — specifically on AWS servers in the United States.
The January 2023 Filing
Getty filed proceedings in the UK High Court on 16 January 2023. The claim was broad and aggressive: it alleged primary copyright infringement (through the scraping and use of Getty images to train Stable Diffusion and through infringing outputs), secondary copyright infringement (distribution of an allegedly infringing model in the UK), database right infringement, trademark infringement (based on watermarks appearing in AI-generated outputs), and passing off.
At the time of filing, this was one of the most significant IP actions brought against a generative AI company anywhere in the world.
The Dispute: What Getty Claimed and Why It Narrowed
The Original Claims
Getty's case rested on a core narrative: Stability AI had scraped over 12 million Getty-owned or licensed images — the content Getty describes as the "lifeblood" of its business — to train Stable Diffusion, without seeking or obtaining any licence. Stability AI acknowledged that Getty content appeared in its training datasets but denied infringement.
The full claim map looked like this:
- Training & Development Claim (Primary Copyright): Stability infringed by reproducing Getty images during the training process.
- Output Claim (Primary Copyright): Stable Diffusion generated images substantially similar to protected Getty works.
- Secondary Copyright Infringement: Stable Diffusion itself, as distributed in the UK, constituted an "infringing copy" under sections 22 and 23 of the Copyright, Designs and Patents Act 1988 (CDPA).
- Database Right Infringement: Stability's scraping violated Getty's database rights.
- Trademark Infringement: AI-generated outputs containing Getty and iStock watermarks infringed Getty's registered UK trademarks.
- Passing Off: The same watermark outputs misrepresented Getty's endorsement or affiliation.
What Happened Before Trial
As the case progressed, it narrowed dramatically. By the time the trial began in June 2025, significant procedural events had already shaped what would and would not be decided.
Most consequentially: Getty accepted that there was no evidence the training and development of Stable Diffusion took place in the United Kingdom. Because UK copyright law is territorial — meaning it applies to acts that occur on UK soil — this acceptance was fatal to the primary copyright claim. UK copyright law cannot, as a general principle, be enforced in respect of acts of copying that occurred in the United States.
Facing this evidential gap, Getty dropped its primary copyright infringement claim and its database right claim shortly before closing submissions. The Output Claim was also withdrawn after Stability demonstrated it had implemented filters preventing the generation of images closely similar to Getty's works.
By the end of trial, only two substantive questions remained for the court:
- Did Stability's distribution of Stable Diffusion model weights in the UK constitute secondary copyright infringement under the CDPA?
- Did Stable Diffusion's generation of outputs bearing Getty and iStock watermarks constitute trademark infringement under the Trade Marks Act 1994?
Key Issues: What the Court Actually Decided
Issue 1: Secondary Copyright Infringement — "Infringing Copy" and "Article"
Getty's secondary infringement argument was legally creative. Even though training had occurred outside the UK, Getty argued that by making Stable Diffusion available for download in the UK, Stability had imported an "article" that was an "infringing copy" within the meaning of sections 22, 23, and 27 of the CDPA.
The argument rested on section 27(2) of the CDPA: an article is an "infringing copy" if its making in the UK would have constituted copyright infringement. Getty's position was essentially: the model was made using our images, therefore the model is an infringing copy, regardless of where it was made.
The court made two significant findings:
First — and importantly: The court held, for the first time in UK law, that an "article" for the purposes of secondary copyright infringement under the CDPA can be intangible. A model stored in cloud environments or distributed as downloadable weights is capable of being an "article." This was a doctrinal expansion with long-term implications: it confirms that UK secondary copyright claims are not limited to physical media.
Second — and decisive: Despite the above, the court found that Stable Diffusion was not an "infringing copy." The reason: the CDPA requires an infringing copy to be a copy — meaning it must store, reproduce, or contain the original copyright works in some recognisable form. Stable Diffusion's model weights do not store images. They contain mathematical parameters — numerical weights derived through the training process — that encode statistical patterns. The images themselves, as visual content, are not present in the weights.
Mrs Justice Smith held that section 27 requires the article that is made to be a copy. The fact that the model's creation process may have involved infringing acts does not make the model itself a copy. The model had never stored the Getty images, and never would. Therefore, it was not an infringing copy, and secondary copyright infringement failed.
In plain English: the court treated the AI model more like a student who studied millions of textbooks and learned to write than like a photocopier that stored them. Learning from a work is not the same as storing a copy of it.
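The distinction the court drew, between parameters that encode patterns and a copy that stores the work, can be illustrated with a deliberately trivial sketch of my own (this is an illustration, not the court's analysis, and far simpler than a diffusion model): a model fitted to a thousand training examples whose entire stored state is two numbers.

```python
# Illustration only: a "model" whose parameters are statistics aggregated
# over many training examples. The examples themselves are not stored in,
# or recoverable from, the parameters.

def train(examples):
    """Fit y = w*x + b by ordinary least squares over (x, y) pairs."""
    n = len(examples)
    mean_x = sum(x for x, _ in examples) / n
    mean_y = sum(y for _, y in examples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in examples)
    var = sum((x - mean_x) ** 2 for x, _ in examples)
    w = cov / var
    b = mean_y - w * mean_x
    return {"w": w, "b": b}  # two floats, regardless of dataset size

# A thousand training examples drawn from the line y = 2x + 1.
training_data = [(x, 2.0 * x + 1.0) for x in range(1000)]
weights = train(training_data)

# The model generalises: it produces output for an input it never saw ...
prediction = weights["w"] * 5000 + weights["b"]  # close to 10001.0

# ... but its stored state is two numbers near 2.0 and 1.0. The 1,000
# training pairs are not present in the weights in any recoverable form.
print(weights, prediction)
```

The judgment's reasoning turned on an analogous point at vastly greater scale: billions of weights derived from the training set, none of which store the images as visual content.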
Issue 2: Trademark Infringement — The Watermark Problem
Getty's trademark claim was more concrete: certain outputs generated by early versions of Stable Diffusion (specifically v1.x and v2.x) included synthetic watermarks visually similar or identical to Getty's registered "GETTY IMAGES" and "ISTOCK" marks.
The court found limited trademark infringement in respect of these early model versions, when accessed via DreamStudio or Stability's developer platform by UK users.
Three findings of particular significance for founders:
The platform bears liability — not the user. This is the most practically important finding for anyone deploying AI tools commercially. Getty attempted to argue that users who typed prompts were responsible for the infringing outputs. The court rejected this. Because Stability AI controlled the training data, the model architecture, and the output process, it — not the end user — was responsible for the trademark infringement. If you build an AI product that generates outputs containing third-party trademarks, you bear that liability. Your users do not.
Scope was narrow. The infringement was confined to specific early versions of the model, specific access routes, and specific watermark-like outputs. The court found no evidence of infringement by newer versions (SD XL and v1.6), describing the findings as "historic and extremely limited in scope."
Section 10(3) failed. Getty's broader dilution and unfair advantage claim under section 10(3) of the Trade Marks Act was dismissed for insufficient evidence of detriment or consumer behaviour change.
Passing off was not substantively addressed, as the court did not consider it necessary given the trademark findings.
The Outcome: What Both Sides Claimed, and Who Was Right
Stability AI declared the judgment confirmed that "the copyright concerns that were the core issue" had been resolved in its favour. Getty called it "a significant win for intellectual property owners" — citing the trademark finding and the court's confirmation that Getty's works were used to train Stable Diffusion.
Both characterisations are selective.
The accurate summary:
- Secondary copyright infringement: Dismissed (model weights are not infringing copies)
- Primary copyright infringement: Dropped (no evidence training occurred in the UK)
- Database right infringement: Dropped (linked to primary claim)
- Trademark infringement (early models, specific access routes): Upheld (limited)
- Trademark infringement (later models, section 10(3)): Dismissed
- Passing off: Not decided
The headline question — is it legal to train a generative AI model on copyrighted images in the UK? — was never decided. It cannot be treated as settled law. Getty's abandonment of the primary claim was driven by a failure of evidence, not a legal blessing of the practice.
What This Means for Founders
This is where most analysis stops. This is where you need to start.
If You Are Building an AI Model
The primary copyright infringement claim — the one that would have determined whether training on scraped or unlicensed data is lawful in the UK — was dropped because Stability's training happened outside the UK. This creates what some commentators have called a jurisdiction loophole: training your model on infrastructure physically located outside the UK currently insulates you from primary UK copyright claims.
But this is a litigation escape hatch, not a legal safe harbour. The UK court expressly declined to rule on whether UK-based training would be lawful. If you train your model in the UK, you remain fully exposed to a primary copyright claim, and there is currently no case law protecting you.
The practical implication: document your training infrastructure meticulously. Know where your compute runs. If you use global cloud providers, know which regions your training jobs execute in. Getty's primary claim collapsed because the evidence of where training occurred was thin; contemporaneous records of training location are what allow a defendant to dispose of a territorial claim early, rather than fighting it through protracted disclosure.
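A lightweight sketch of what such documentation could look like (field names and values here are illustrative assumptions, not a compliance standard): a per-run record that ties the model version to the cloud region it was trained in and a content hash of the exact dataset manifest.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class TrainingRunRecord:
    run_id: str
    cloud_provider: str           # e.g. "aws"
    region: str                   # e.g. "us-east-1": where the job executed
    dataset_manifest_sha256: str  # proves which inputs this run used
    model_version: str
    started_at_utc: str

def fingerprint_manifest(manifest_bytes: bytes) -> str:
    """Content hash of the dataset manifest, so the exact training
    inputs for a given run can be evidenced later."""
    return hashlib.sha256(manifest_bytes).hexdigest()

manifest = b"dataset-v3: 1,204,551 licensed images\n"  # hypothetical
record = TrainingRunRecord(
    run_id="run-2025-11-04-001",
    cloud_provider="aws",
    region="us-east-1",
    dataset_manifest_sha256=fingerprint_manifest(manifest),
    model_version="v1.6",
    started_at_utc=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(record), indent=2))
```

Emitting one such record per training job, stored immutably alongside your training logs, is cheap insurance against exactly the evidential void this case turned on.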
If You Deploy an AI API or Image Generation Tool
The trademark finding is your most immediate concern. The court confirmed that the platform provider — not the end user — bears liability when AI-generated outputs contain third-party trademarks or branded watermarks.
If your product generates images at scale, you need to ask: what happens if one of those images contains a logo, a watermark, or a branded element?
The answer, post-judgment, is that you — not your user — are the liable party. Stability's partial liability arose because it controlled the training data and had the ability to filter outputs but did not do so consistently across all model versions. Implementing robust watermark and trademark detection pipelines in your output layer is now a documented risk-reduction strategy, not optional.
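A minimal sketch of where such a gate could sit in an output pipeline (illustrative only: production systems would use OCR and robust feature matching rather than exact pixel comparison, and all names here are hypothetical):

```python
def contains_template(image, template, threshold=0.95):
    """Slide `template` over `image` (both 2D lists of 0..255 grayscale
    values) and flag any position where the fraction of near-matching
    pixels exceeds `threshold`. A toy stand-in for real watermark and
    logo detection."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            matches = sum(
                1
                for dy in range(th)
                for dx in range(tw)
                if abs(image[y + dy][x + dx] - template[dy][dx]) <= 8
            )
            if matches / (th * tw) >= threshold:
                return True
    return False

def release_output(generated_image, known_watermarks):
    """Gate at the output layer: the check runs before the image
    reaches the user, because the platform bears the liability."""
    if any(contains_template(generated_image, wm) for wm in known_watermarks):
        return None  # block, or regenerate, instead of shipping the image
    return generated_image

# Demo: a 10x10 image with a 3x3 "watermark" pattern stamped into it.
watermark = [[255, 0, 255], [0, 255, 0], [255, 0, 255]]
clean = [[10] * 10 for _ in range(10)]
stamped = [row[:] for row in clean]
for dy in range(3):
    for dx in range(3):
        stamped[4 + dy][5 + dx] = watermark[dy][dx]

print(contains_template(stamped, watermark))  # True
print(contains_template(clean, watermark))    # False
```

The design point is not the matching algorithm but its placement: the filter belongs on the platform side of the API boundary, applied consistently across every model version you expose.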
If You Are Building on Top of Open-Source or Third-Party Models
This is the downstream risk question that almost no analysis addresses. If you fine-tune, deploy, or build products on top of models that may themselves have been trained on contested data, you carry residual risk exposure. The judgment focused on Stability as the model developer. Future cases may examine distributor liability more closely, particularly if model architecture evolves in ways that change how training data is encoded.
Due diligence on your upstream model providers — their training data provenance, filtering practices, and indemnity arrangements — has become a real component of AI product risk management.
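One way to make that diligence concrete is a simple disclosure checklist applied to every upstream provider; the fields below are illustrative assumptions, not a legal standard, and a schema check is a starting point rather than a substitute for legal review.

```python
# Hypothetical minimum disclosures to collect from an upstream model provider.
REQUIRED_DISCLOSURES = {
    "training_data_sources",  # provenance of the training set
    "training_location",      # which jurisdictions the compute ran in
    "output_filtering",       # watermark/trademark filtering in place?
    "ip_indemnity_terms",     # who carries downstream IP risk
    "license_terms",          # what the model licence actually permits
}

def diligence_gaps(provider_disclosures: dict) -> set:
    """Return the questions a provider has not answered (missing keys
    or empty values)."""
    answered = {k for k, v in provider_disclosures.items() if v}
    return REQUIRED_DISCLOSURES - answered

# A provider that documents sources and licence terms, but not where
# training ran, filtering, or indemnities:
gaps = diligence_gaps({
    "training_data_sources": "published dataset manifest",
    "license_terms": "OpenRAIL-M",
    "training_location": "",
})
print(sorted(gaps))
```

An empty gap set does not mean the provider is safe; a non-empty one means you are carrying risk you cannot yet quantify.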
If You Are Raising a Seed Round or Approaching Investors
Investors conducting IP due diligence on AI startups will increasingly ask: where was your model trained? How was your dataset sourced? Do you have records? What watermark or trademark filtering exists in your output pipeline?
This judgment provides the factual basis for those questions. The evidentiary gaps that doomed Getty's primary claim are now a documented failure mode that sophisticated investors will seek to avoid. Data provenance is now an investable signal, not a compliance checkbox.
The UK Legislative Horizon You Cannot Ignore
The ruling operates against a live legislative backdrop. Under the Data (Use and Access) Act 2025, the UK government has a statutory obligation to publish an economic impact assessment and a full report on AI and copyright by 18 March 2026. That report will address whether to introduce a text and data mining exception for commercial AI training — and if so, under what opt-out or licensing framework.
Of 11,500 respondents to the government's copyright and AI consultation, 88% backed stronger copyright protections — not the government's preferred opt-out model, which received just 3% support. The political landscape has shifted toward creators and rights holders. A legislative framework that changes the rules entirely — potentially making unlicensed training actionable regardless of training location — could arrive in 2026 or 2027.
Founders building AI products today are building on ground that may legally shift within their operating horizon.
The Questions This Case Left Open
For all its 205 pages, the judgment explicitly does not resolve:
- Whether UK-based scraping and training on copyrighted images constitutes copyright infringement under the CDPA
- Whether UK-based scraping and training on proprietary databases constitutes database right infringement
- Whether the "model weights as patterns rather than copies" reasoning extends to other model architectures, modalities, or future systems that may encode content differently
- Whether an exclusive licensee (as opposed to a copyright owner) has the same standing and the same scope of rights in AI litigation
- How secondary copyright liability would apply if a future model is found to actually store or reproduce copyright works through memorisation or overfitting
The US case between Getty and Stability remains live in the District of Delaware, where different legal frameworks, including the fair use doctrine, apply. Getty has stated it will use findings of fact from the UK judgment in that proceeding. The outcome of the US case could create a directly conflicting precedent.
IP-SAM™ Insight: The Signals This Case Reveals
Powered by IP-SAM™ — Real-time IP risk intelligence for founders
The Getty v. Stability AI judgment is a masterclass in how IP risk accumulates before a dispute becomes visible. Several signals — detectable through systematic monitoring — preceded the litigation and characterise the pattern:
Training data provenance gaps. Stability AI's inability to produce clear documentation of where training occurred, which datasets were used for which model versions, and what filtering had been applied created the evidential void that drove the case's outcome. IP-SAM™ data provenance monitoring flags exactly this type of exposure: the absence of documentation is itself a risk signal.
Watermark reproduction in outputs. The trademark finding arose because early model outputs contained Getty watermark artefacts. This is a measurable, detectable output-layer signal. Systematic scanning of AI-generated outputs for branded elements — trademarks, logos, watermarks, trade dress — is the operational equivalent of what Stability was eventually held liable for failing to do adequately.
Brand confusion at scale. The court's trademark analysis turned on whether UK users encountered watermarked outputs in contexts capable of causing confusion. High-volume generative AI systems create brand exposure at a scale that traditional IP monitoring tools were not built to detect. Real-time output monitoring for third-party brand elements is now a legitimate AI product liability concern, not an abstract legal risk.
Jurisdiction and infrastructure exposure. The territorial dimension of this case — entirely dependent on where servers physically ran training jobs — is an infrastructure risk signal. For founders building internationally, the intersection of IP law and compute geography is now a documented risk category that IP-SAM™ can flag where applicable.
Lessons: What Founders Should Take Away
1. "Stability AI won" is not a safe harbour. The primary copyright claim was never decided. You cannot rely on this judgment as legal clearance for training on copyrighted content — especially in the UK.
2. Where you train matters more than what you train on — for now. Jurisdiction is currently the decisive variable in UK copyright claims against AI developers. This may not remain true after legislative reform.
3. You are liable for your outputs, not your users. If your AI product generates trademarked content at scale, the responsibility sits with you as the platform provider. Build output filtering accordingly.
4. Documentation is your first line of defence. Training logs, dataset provenance records, filtering pipelines, and version histories are not just good practice. They are the evidence that determines whether a future claimant can even bring a primary copyright case against you.
5. Watch March 2026. The UK government's statutory report on AI and copyright may reshape the legal landscape within your operating horizon. Build with flexibility.
6. The fight is not over. The US case is live. An appeal of certain findings from the UK case is possible. EU and German courts are reaching different conclusions. This is a rapidly evolving legal environment, not a settled one.
This case analysis is for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All allegations referenced are claims made by the respective parties and have not necessarily been proven or ruled upon by any court or regulatory body. For trademark protection guidance specific to your situation, consult a qualified IP attorney.
Case status and outcomes may change. IPRightsHub may update case analyses where material developments occur.
Powered by IP-SAM™ — Real-time IP risk intelligence for founders.


