Fair Use or Fair Warning? What the Anthropic and Meta Copyright Rulings Really Mean for AI Training

Max Fairuse III, Esq.

Wednesday, June 25, 2025

Two major AI developers recently won courtroom victories, sort of. In rulings that could reshape how copyright law applies to AI, judges sided with Anthropic and Meta on fair use grounds for model training. But the wins come with caveats, and the lawsuits are far from over.

Two rulings in federal court this week have begun to sketch the legal boundaries of training AI models on copyrighted material — and the picture is complex.

In Bartz v. Anthropic, U.S. District Judge William Alsup held that using lawfully obtained books to train a generative AI model constitutes "fair use," so long as the books were digitized properly and used for genuinely transformative purposes. Alsup emphasized that training large language models (LLMs) on books to create something fundamentally new serves copyright's core purpose: promoting creativity.

However, the court declined to dismiss claims over Anthropic's alleged use of pirated books. The plaintiffs, a group of authors including Andrea Bartz, claim the company downloaded millions of unauthorized works to build what it internally dubbed a "central library of all the books in the world." That portion of the case will proceed to trial in December, with the potential for substantial statutory damages.

Just days later, in Kadrey v. Meta, U.S. District Judge Vince Chhabria granted summary judgment in favor of Meta, dismissing a lawsuit brought by Sarah Silverman and other authors over the company's use of their books to train its Llama models. Crucially, however, Chhabria's decision rested not on a finding that Meta's actions were lawful, but on the plaintiffs' failure to present convincing evidence — particularly on the issue of market harm.

“[The] plaintiffs made the wrong arguments and failed to develop a record in support of the right one,” Chhabria wrote. He noted that a strong line of attack — showing how AI-generated content might cannibalize markets for human-created works — was largely neglected. Nonetheless, the judge underscored that copying copyrighted materials to train AI systems will, in many cases, be illegal unless companies obtain permission or pay to license the content.

A Fragmented Legal Framework

Taken together, the rulings create a patchwork precedent. If AI developers obtain books legally and use them in a transformative way, as in Anthropic’s case, courts may view the practice as fair use. However, training on pirated material remains a clear legal risk, as both courts indicated — and as Anthropic will soon face in trial.

Yet the Meta ruling adds another layer: even if a company uses copyrighted content without permission, legal success may hinge more on whether plaintiffs can show market harm than on whether infringement occurred in the conventional sense. The ruling effectively cautions creators and advocacy groups that future copyright claims will require detailed evidence and precise legal framing.

The Stakes Are Rising

The implications are enormous. As the AI industry barrels toward trillion-dollar valuations, companies are under growing pressure to train models faster and at scale — often turning to large corpora of books, code, and media. But these early rulings suggest courts are open to drawing lines: between fair use and theft, between transformative purpose and replication, and between lawful acquisition and piracy.

Perhaps most importantly, both judges rejected the notion that copyright enforcement would stifle AI innovation. If copyrighted material is necessary for model performance, they reasoned, then paying creators is simply a cost of doing business — not a barrier to progress.

As more copyright cases head to court, including those involving OpenAI, Microsoft, and Midjourney, these rulings hint at the future: transformative use might remain protected, but AI companies relying on unlicensed or pirated data should prepare for liability — and the eventual expectation to pay.

TL;DR:

Courts sided with Anthropic and Meta in key copyright rulings, but neither decision fully absolves AI firms. Training on legally obtained content may qualify as fair use if it's transformative, but using pirated material or failing to show market harm remains legally risky. These early cases signal that AI developers will likely need to license copyrighted works in the future.

Copyright © 2025 - EmetX Inc. - All rights reserved