
The U.S. Department of Defense may allow limited continued use of artificial intelligence tools developed by Anthropic beyond a previously announced six-month phase-out period, according to an internal Pentagon memo circulated to senior military leaders. The document suggests that exemptions could be granted in exceptional cases where the technology is considered critical to national security operations.
The memo, dated March 6 and signed by Pentagon Chief Information Officer Kirsten Davies, states that exemptions would be approved only in “rare and extraordinary circumstances.” These exceptions would apply strictly to mission-critical activities directly supporting national security operations where no suitable alternative technology is available. Any Pentagon unit seeking an exemption would be required to submit a detailed risk-mitigation plan for review and approval.
The guidance follows an earlier decision by the Pentagon to designate Anthropic as a supply-chain risk and order the removal of its technology from defense systems and contractor networks. Under that directive, defense contractors were given 30 days to notify relevant partners and up to 180 days to confirm full compliance with the phase-out.
Despite the directive, the memo acknowledges that removing Anthropic’s software completely from defense systems could be challenging, particularly in cases where the company’s technology is embedded within broader software supply chains or open-source components. Experts note that this complexity may lead to multiple exemption requests from organizations that rely on such tools as part of larger technology infrastructures.
The document also instructs officials to prioritize removing Anthropic’s products from highly sensitive systems, including those linked to nuclear weapons infrastructure and ballistic missile defense programs. These systems are considered critical national security assets where the Pentagon aims to eliminate potential supply-chain risks as quickly as possible.
The policy development comes amid an ongoing dispute between Anthropic and the U.S. government over restrictions on how the company’s AI models can be used by the military. The Pentagon’s classification of the company as a supply-chain risk has prompted Anthropic to challenge the designation in court.
The situation reflects growing tensions between governments and artificial intelligence developers regarding the role of AI in national security. As AI technologies become increasingly integrated into defense systems and digital infrastructure, questions related to ethical use, supply-chain security, and operational oversight are likely to play a larger role in shaping future collaborations between technology companies and government agencies.