
Google, Microsoft, and Elon Musk’s xAI have joined a US government-led AI review program that allows federal authorities to evaluate advanced AI models before they are released to the public. The initiative is aimed at assessing potential risks associated with increasingly powerful AI systems, particularly in areas like cybersecurity and national security.
Under the program, the companies will provide early access to their AI models to the Center for AI Standards and Innovation (CAISI), a body under the US Department of Commerce. This enables regulators to conduct “pre-deployment evaluations” and analyze the capabilities, limitations, and potential for misuse of frontier AI technologies before they reach the market.
The move expands an existing framework that already included companies such as OpenAI and Anthropic, signaling a broader push toward industry-wide cooperation on AI safety. Government agencies have conducted dozens of such evaluations, including tests on unreleased models to identify vulnerabilities and security gaps.
A central goal of the initiative is to prevent misuse of AI in high-risk domains such as cyberattacks, biosecurity threats, and disruption of critical infrastructure. By stress-testing models before launch, authorities aim to identify weaknesses, such as the ability to bypass safeguards or generate harmful outputs, and ensure they are addressed early.
Overall, the program reflects a growing shift toward collaborative governance in artificial intelligence. Rather than imposing strict regulations, governments are increasingly working with leading AI companies to balance innovation with safety, ensuring that powerful new technologies are deployed responsibly while minimizing potential risks.
