The United States National Security Agency (NSA) is reportedly continuing to use advanced artificial intelligence tools developed by Anthropic, even after the Pentagon imposed restrictions on the company over national security concerns. The development highlights a rift within the federal government over the role of AI in defense and cybersecurity operations.
At the center of the issue is Anthropic’s “Mythos Preview” model, a highly capable AI system known for its strength in coding and autonomous task execution. According to reports, use of the tool is expanding within the NSA even though the Department of Defense has labeled Anthropic a “supply chain risk” and is pushing to phase its technology out of federal systems.
The Pentagon’s earlier action against Anthropic stems from a broader dispute over how AI tools should be used in military and surveillance contexts. The company resisted allowing unrestricted use of its models, particularly for autonomous weapons and mass surveillance, a stance that led to a breakdown in relations and the subsequent restrictions from defense authorities.
Despite these restrictions, the NSA’s continued use of Anthropic’s technology underscores the critical role such AI systems play in national security. Agencies appear to be prioritizing operational effectiveness, especially as advanced models like Mythos are believed capable of identifying vulnerabilities and strengthening cyber defenses.
The situation also reflects a larger challenge facing governments worldwide: balancing rapid AI adoption with security, ethical, and regulatory concerns. As AI systems become more powerful and integral to defense infrastructure, policymakers are increasingly forced to navigate trade-offs between innovation, control, and risk management in high-stakes environments.