
Anthropic CEO Dario Amodei has publicly declined a request from the Pentagon seeking unrestricted access to the company’s artificial intelligence systems, stating that he “cannot in good conscience accede to [the Pentagon’s] request.”
In a written statement, Amodei said, “Anthropic understands that the Department of War, not private companies, makes military decisions. However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.”
He specified two categories of concern: mass surveillance of Americans and the deployment of fully autonomous weapons systems operating without human oversight. While the Pentagon maintains that it should be able to use Anthropic’s AI models for any lawful purpose and that such decisions should not be constrained by a private company, Anthropic has drawn clear boundaries around these applications.
According to reports, the Department of Defense has signaled that if Anthropic does not comply, it could designate the company as a supply chain risk—a classification typically applied to foreign adversaries—or invoke the Defense Production Act (DPA). The DPA grants the president authority to compel companies to prioritize or expand production in support of national defense.
Amodei highlighted what he described as an inconsistency in these possible actions, stating, “One labels us a security risk; the other labels Claude as essential to national security.”
He emphasized that the Department of Defense has the discretion to select contractors aligned with its strategic priorities but expressed hope that officials would reconsider their stance. “Given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider,” he said.
Anthropic is currently the only leading AI lab with systems cleared for classified military use, though reports suggest the Department of Defense is preparing xAI as an alternative provider.
“Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place,” Amodei stated. “Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions.”
With a Pentagon decision reportedly imminent, the standoff underscores growing tensions between AI developers and government agencies over the ethical and operational boundaries of advanced artificial intelligence in military applications.
