
Microsoft has clarified that the artificial intelligence models developed by Anthropic—including its widely used chatbot Claude—will continue to be available to customers through Microsoft’s platforms, despite the U.S. Department of Defence labelling the AI company a supply-chain risk. The only exception, Microsoft said, will apply to projects directly linked to the Pentagon.
The confirmation follows a recent decision by the U.S. Defence Department to classify Anthropic as a supply-chain risk after a dispute over how its AI systems could be used in military operations. The designation effectively prevents government contractors from using Anthropic’s technology in projects connected to the Department of Defence.
Despite the government’s action, Microsoft stated that its legal team carefully reviewed the designation and concluded that Anthropic’s tools can still be offered to most customers. The company said the models will remain accessible through platforms such as Microsoft 365, GitHub, and Microsoft’s AI development environment, allowing enterprises and developers to continue using the technology for non-defence applications.
This makes Microsoft one of the first major technology companies to publicly confirm it will maintain its relationship with Anthropic following the Pentagon’s decision. The company emphasized that the restriction is narrowly focused and does not prevent businesses, startups, or other organizations from continuing to use Claude through its services.
The dispute between the Pentagon and Anthropic stems from disagreements over the ethical boundaries of artificial intelligence in military contexts. Anthropic has maintained strict policies limiting the use of its AI models for certain applications, including mass domestic surveillance and fully autonomous weapons systems, arguing that such uses pose serious ethical and safety risks.
These safeguards reportedly conflicted with the U.S. military’s expectations for broader access to advanced AI technologies. As negotiations between the company and defence officials failed to resolve the disagreement, the Pentagon moved forward with the supply-chain risk designation, which requires defence contractors to phase out the use of Anthropic’s technology in military programs.
Even with the defence restriction, industry observers note that Anthropic’s broader commercial ecosystem remains largely unaffected. Many businesses rely on Claude for enterprise AI tasks such as coding assistance, document analysis, and workflow automation. Because these applications fall outside military contracts, companies can continue using the technology without interruption.
The situation highlights the growing tension between governments and AI developers over how powerful generative models should be used in national-security contexts. As artificial intelligence becomes increasingly important for defence, intelligence, and cybersecurity operations, the debate over ethical safeguards versus strategic flexibility is expected to intensify.
For now, Microsoft’s decision ensures that the Claude ecosystem remains widely accessible across its cloud and productivity platforms—while keeping it out of projects tied directly to the U.S. defence establishment.