Anthropic Shifts Data Policy: Claude Users Must Opt Out to Prevent Conversations From Training AI Models

Anthropic is introducing major changes to its data policies, requiring all Claude users to decide by September 28 whether their conversations can be used to train future AI models. The update marks a sharp shift from the company’s previous stance of not using consumer chat data for training purposes.

Under the new policy, conversations and coding sessions from users of Claude Free, Pro, Max, and Claude Code may be retained for up to five years unless individuals explicitly opt out. Previously, Anthropic automatically deleted user prompts and responses within 30 days, except in cases of policy violations, which extended retention to two years. Enterprise customers — including those using Claude Gov, Claude for Work, Claude for Education, and API services — remain exempt, mirroring OpenAI’s approach of protecting business data from training use.

In its blog post explaining the changes, Anthropic emphasized user choice, saying that by allowing data collection, people will “help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations.” The company added that such participation will “also help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users.”

While Anthropic frames the move as a way for users to contribute to safer and smarter models, industry analysts point to a deeper motivation: the need for vast amounts of high-quality conversational data to remain competitive with rivals like OpenAI and Google. With AI development accelerating, real-world chat data has become a critical resource for refining models.

The shift also comes amid wider scrutiny of data retention in the AI sector. OpenAI, for example, is currently fighting a U.S. court order requiring it to preserve all consumer ChatGPT conversations indefinitely, including deleted ones — a move that COO Brad Lightcap criticized as “a sweeping and unnecessary demand” conflicting with user privacy commitments.

Critics warn that confusing consent flows risk leaving users unaware of what they’ve agreed to. For existing Claude users, the policy update appears in a pop-up with a bold “Accept” button, while the toggle to disable data sharing sits in smaller print — and is switched on by default.

Privacy experts caution that this design could mislead users, echoing Federal Trade Commission warnings that burying critical disclosures may violate consumer protection standards. Whether regulators will act on this practice remains uncertain.
