CAISI expands model reviews with DeepMind, Microsoft, xAI
The U.S. Commerce Department’s Center for AI Standards and Innovation (CAISI) announced new agreements with Google DeepMind, Microsoft, and xAI that let the government evaluate frontier AI models before public release. The program focuses on national-security-related risks, especially cybersecurity, biosecurity, and chemical weapons misuse, and it builds on earlier voluntary arrangements with OpenAI and Anthropic. CAISI says it has already completed more than 40 evaluations, including on unreleased models, and will continue post-deployment assessment and targeted research as part of its broader AI safety mandate.
This is less a product launch than a policy checkpoint: the government is turning frontier model review into a standing workflow, and the biggest signal is that the major labs are now treating pre-release scrutiny as table stakes.
- The agreement effectively normalizes pre-deployment government access to leading AI models, which could become an industry standard.
- The focus is narrow but consequential: cybersecurity, biosecurity, and chemical weapons risk are the main categories being screened.
- It is explicitly voluntary, so the practical impact depends on how many labs opt in and how much weight the feedback carries.
- This builds on prior arrangements with OpenAI and Anthropic, suggesting CAISI is consolidating a broader review regime rather than creating a one-off deal.
- The downside is obvious: more oversight may slow launches and raise questions about who gets to define “national security risk.”
Discovered: 2026-05-06 · Published: 2026-05-05 · Author: Merchant_Lawrence