In a closely watched case, a U.S. federal judge sided with Meta Platforms on Wednesday, dismissing a copyright lawsuit brought by a group of authors who claimed the tech giant used their books without consent to train its Llama AI system.
Judge Vince Chhabria of the U.S. District Court in San Francisco ruled that the authors failed to provide sufficient evidence that Meta’s AI training harmed the market for their work, a key factor in the fair-use analysis under U.S. copyright law.
However, the ruling was far from an endorsement of Meta’s actions. Chhabria clarified that his decision was limited in scope:
“This ruling does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful. It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one.”
This nuanced decision comes just days after a separate federal judge, William Alsup, ruled in favor of AI company Anthropic, determining that its use of copyrighted materials qualified as “fair use.” The conflicting judgments highlight growing legal uncertainty around how copyright law applies to generative AI training.
The authors, represented by Boies Schiller Flexner, criticized the ruling. A spokesperson said the firm strongly disagreed with the outcome, especially given the “undisputed record” of what they called Meta’s “historically unprecedented pirating” of copyrighted works.
Meanwhile, Meta welcomed the ruling, with a company spokesperson calling fair use a “vital legal framework” for developing “transformative” AI technologies.
The lawsuit, filed in 2023, accused Meta of leveraging pirated versions of copyrighted books without authorization or compensation to train Llama, its open-source large language model. It’s one of several copyright disputes emerging across the AI industry, with similar cases pending against OpenAI, Microsoft, and Anthropic.
The legal crux of these lawsuits centers on the fair use doctrine, which allows limited use of copyrighted material without permission in specific contexts. Tech companies argue that training AI on such materials results in new, transformative outputs, thus qualifying as fair use. Creators, on the other hand, say their works are being copied and repurposed into content that competes directly with their original creations, threatening their livelihoods.
While Chhabria ultimately ruled against the authors in this case, he acknowledged the broader concern about the impact of generative AI on creative industries.
“By training generative AI models with copyrighted works, companies are creating something that often will dramatically undermine the market for those works, and thus dramatically undermine the incentive for human beings to create things the old-fashioned way,” he said.
With legal precedent still forming and multiple cases underway, the battle over AI and copyright law is far from over, and this ruling may be only the beginning of a larger showdown.