
The United States government is preparing new guidelines that would impose stricter rules on artificial intelligence companies seeking federal contracts. The move comes amid a growing dispute between the Trump administration and AI startup Anthropic over the acceptable use of AI systems by government agencies.
Under the draft guidelines, AI companies hoping to work with the U.S. government would be required to grant authorities an irrevocable license to use their AI systems for any lawful purpose. The proposed rules are part of a broader effort to strengthen how the government procures AI services and technologies.
The guidance is reportedly being developed by the General Services Administration (GSA) and would initially apply to civilian government contracts. However, the framework is expected to mirror similar measures that the Pentagon is considering for military contracts involving artificial intelligence technologies.
The policy effort follows escalating tensions between the U.S. government and Anthropic. The Pentagon recently labeled the company a “supply-chain risk,” a designation that effectively bars government contractors from using Anthropic’s technology in work related to the U.S. military. The move came after months of disagreements over the safety restrictions Anthropic embeds in its AI systems.
Anthropic has insisted on strict safeguards limiting how its AI models can be used, particularly in areas such as surveillance and military operations. U.S. defense officials argued that the restrictions went too far and interfered with potential national security applications of the technology.
According to the draft guidelines, companies would also be required to ensure that their AI systems do not intentionally embed partisan or ideological biases in their outputs. In addition, firms would need to disclose whether their models have been modified to comply with foreign regulatory frameworks or commercial compliance requirements.
Commenting on the dispute, Josh Gruenbaum, commissioner of the Federal Acquisition Service, said it “would be irresponsible to the American people and dangerous to our nation for GSA to maintain a business relationship with Anthropic.” He added that the agency had terminated Anthropic’s OneGov contract, removing the company’s products from pre-negotiated government procurement channels.
The White House did not immediately respond to requests for comment regarding the proposed guidelines.
The development highlights the increasing tension between technology companies advocating strict AI safeguards and governments seeking broader flexibility to deploy the technology for security and operational purposes. As artificial intelligence becomes more central to national infrastructure and defense systems, the debate over how it should be regulated and deployed is expected to intensify.