Meta Wins Copyright Battle Over AI Book Training, but Judge Leaves Door Open for Future Challenges

In a closely watched case, a U.S. federal judge sided with Meta Platforms on Wednesday, dismissing a copyright lawsuit brought by a group of authors who claimed the tech giant used their books without consent to train its Llama AI system.

Judge Vince Chhabria of the U.S. District Court in San Francisco ruled that the authors failed to provide sufficient evidence that Meta’s AI usage caused harm to the market for their work—an essential argument under U.S. copyright law.

However, the ruling was far from an endorsement of Meta’s actions. Chhabria clarified that his decision was limited in scope:

“This ruling does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful. It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one.”

This nuanced decision comes just days after a separate federal judge, William Alsup, ruled in favor of AI company Anthropic, determining that its use of copyrighted materials to train AI qualified as “fair use.” The contrasting reasoning in the two rulings highlights growing legal uncertainty around how copyright law applies to generative AI training.

The authors, represented by Boies Schiller Flexner, criticized the ruling. A spokesperson said the firm strongly disagreed with the outcome, especially given the “undisputed record” of what they called Meta’s “historically unprecedented pirating” of copyrighted works.

Meanwhile, Meta welcomed the ruling, with a company spokesperson calling fair use a “vital legal framework” for developing “transformative” AI technologies.

The lawsuit, filed in 2023, accused Meta of leveraging pirated versions of copyrighted books without authorization or compensation to train Llama, its open-source large language model. It’s one of several copyright disputes emerging across the AI industry, with similar cases pending against OpenAI, Microsoft, and Anthropic.

The legal crux of these lawsuits centers on the fair use doctrine, which allows limited use of copyrighted material without permission in specific contexts. Tech companies argue that training AI on such materials results in new, transformative outputs, thus qualifying as fair use. Creators, on the other hand, say their works are being copied and repurposed into content that competes directly with their original creations, threatening their livelihoods.

While Chhabria ultimately ruled against the authors in this case, he acknowledged the broader concern about the impact of generative AI on creative industries.

“By training generative AI models with copyrighted works, companies are creating something that often will dramatically undermine the market for those works, and thus dramatically undermine the incentive for human beings to create things the old-fashioned way,” he said.

With legal precedent still forming and multiple cases underway, the battle over AI and copyright law is far from over—and this ruling may only be the beginning of a larger showdown.
