In a new escalation of tensions in the AI industry, Anthropic has accused OpenAI of breaching its terms of service and has partially cut off OpenAI's API access to its Claude models. The move comes amid intensifying competition, particularly ahead of OpenAI's anticipated release of GPT-5, which is expected to deliver major improvements in code generation.
OpenAI had been granted API access to Claude models for standard industry practices such as benchmarking and safety evaluations. However, according to a report by Wired, Anthropic alleges that OpenAI’s technical staff went beyond these permitted uses—specifically by interacting with Claude Code, Anthropic’s AI coding assistant, in ways that violated its commercial terms.
“Claude Code has become the go-to choice for coders everywhere, and so it was no surprise to learn OpenAI’s own technical staff were also using our coding tools ahead of the launch of GPT-5. Unfortunately, this is a direct violation of our terms of service,” said Christopher Nulty, Anthropic’s spokesperson.
Anthropic’s terms prohibit users from employing its models to “build a competing product or service, including to train competing AI models,” or to “reverse engineer or duplicate” its technology. The company confirmed it will maintain API access for OpenAI only for benchmarking and safety purposes, which it described as an industry-standard practice.
Anthropic CEO Dario Amodei, who co-founded the company after leaving OpenAI, emphasized the importance of integrity in leadership: “I think trust is really important… Technically if you’re working for someone whose motivations are not sincere… you’re just contributing to something bad,” he said in a recent podcast.
Responding to the allegations, OpenAI’s communications chief Hannah Wong said, “It’s industry standard to evaluate other AI systems to benchmark progress and improve safety. While we respect Anthropic’s decision to cut off our API access, it’s disappointing considering our API remains available to them.”
This isn't the first time Anthropic has acted to safeguard its technology. Last month, the company cut off model access for Windsurf, an AI coding startup that OpenAI was reportedly planning to acquire. That deal ultimately collapsed after Google reportedly hired away Windsurf's top team in a $2.4 billion move.
Anthropic also recently introduced weekly usage limits for Claude Code, citing abuse by users running the tool "continuously in the background 24/7." Earlier this year, OpenAI made a similar accusation against the Chinese firm DeepSeek, alleging that repeated, large-scale queries to its models had been used for what it described as unauthorized model training.