
Anthropic has accused three Chinese artificial intelligence companies—DeepSeek, Moonshot AI, and MiniMax—of conducting coordinated “industrial-scale” distillation campaigns aimed at extracting knowledge from its Claude models.
In a statement issued Monday, the San Francisco-based AI firm alleged that the three companies generated more than 16 million exchanges with Claude using approximately 24,000 fraudulently created accounts. According to Anthropic, the firms relied on commercial proxy services to circumvent access restrictions, operating networks of tens of thousands of Claude accounts simultaneously despite limits on commercial access from China.
The company described the activity as a “distillation attack,” in which large volumes of specially crafted prompts are used to extract specific capabilities from a more advanced model. The outputs are then used to train smaller proprietary models or as input for reinforcement learning systems, allowing competitors to replicate high-level performance without incurring the full cost of foundational model development.
Anthropic said that once access was secured, the firms generated large volumes of prompts designed to elicit targeted responses. Among the three, MiniMax allegedly drove the most traffic, accounting for more than 13 million exchanges.
Distillation is a widely used technique in the AI industry, enabling developers to create smaller, more efficient models that mimic the capabilities of larger systems. Anthropic acknowledged that AI labs routinely distill their own models internally. However, the company expressed concern that external actors using unauthorized access could gain powerful capabilities “in a fraction of the time, and at a fraction of the cost” required for independent development.
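As an illustration only, the sketch below shows what output-based distillation can look like in its simplest form: a smaller “student” model is fine-tuned on text produced by a larger “teacher” model, with the teacher’s responses serving as supervised targets. The TinyStudent model, toy token data, and training loop are hypothetical stand-ins written for this example, not a description of any system named in the allegations, and the sketch glosses over how prompts and responses would be jointly modeled in a real pipeline.

```python
import torch
import torch.nn as nn

class TinyStudent(nn.Module):
    """A deliberately small sequence model standing in for a 'student'."""
    def __init__(self, vocab_size=100, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):
        hidden, _ = self.lstm(self.embed(tokens))
        return self.head(hidden)  # per-position logits over the vocabulary

# (prompt_tokens, teacher_response_tokens) pairs. In output-based distillation,
# the "teacher" responses would be text generated by the larger model; random
# tokens are used here purely to keep the example self-contained and runnable.
pairs = [(torch.randint(0, 100, (1, 8)), torch.randint(0, 100, (1, 8)))
         for _ in range(4)]

student = TinyStudent()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for prompt_tokens, teacher_tokens in pairs:
    logits = student(prompt_tokens)                       # student predictions
    loss = loss_fn(logits.view(-1, 100), teacher_tokens.view(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                      # fit student to teacher outputs
```

The appeal of this approach, and the reason it figures in the allegations, is that only the teacher’s generated text is needed rather than its weights or internal probabilities, so API access alone can in principle be enough to transfer capabilities.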
Anthropic’s allegations follow similar claims from OpenAI. Earlier this month, OpenAI CEO Sam Altman submitted a letter to U.S. lawmakers citing activity “indicative of ongoing attempts” by DeepSeek to distill frontier models from OpenAI and other U.S.-based AI labs, including through obfuscated methods.
Both Anthropic and OpenAI have framed the alleged large-scale distillation efforts as potential national security concerns, highlighting the growing geopolitical tensions surrounding advanced AI development and intellectual property protection.
