The Trump administration rescinded Biden's AI safety order sixteen months ago. Now it is rebuilding the same thing under a different name, because the threat model changed.
On January 20, 2025, one of Trump's first executive actions was rescinding Biden's AI safety order, the one requiring frontier labs to submit safety test results to the government before deployment. Sixteen months later, his National Economic Council director, Kevin Hassett, went on television and said AI models should be "released in the wild after they've been proven safe, just like an FDA drug."
The reversal is path dependence, not hypocrisy.
What changed was Anthropic's Mythos model. When a frontier system demonstrated the ability to find decades-old software vulnerabilities that human auditors had missed, the national security establishment stopped treating AI safety as a culture war and started treating it as a defense priority. The question shifted from whether the government should regulate AI to whether it could afford not to know what these systems can do before they ship.
The infrastructure already exists. NIST's Center for AI Standards and Innovation has completed more than forty model evaluations, including unreleased state-of-the-art systems. On May 5, Google DeepMind, Microsoft, and xAI signed new pre-deployment testing agreements, joining OpenAI and Anthropic. All five major frontier labs now submit to voluntary government review before releasing their most capable models. Meta, the largest open-weight model provider, has no agreement in place.
The voluntary framework is the scaffolding. The executive order is the permanent structure. Hassett's public framing is not a trial balloon. It is a declared intention backed by forty completed evaluations and five signed agreements. The question is no longer whether the administration will act, but how broad the mandate will be. Hassett initially said the order would cover "really quite likely all AI companies." Within forty-eight hours, Chief of Staff Susie Wiles posted that the White House is "not in the business of picking winners and losers." The internal friction between security hawks and deregulation advocates played out in public, in real time.
The FDA comparison is more instructive than Hassett may intend. The Medical Device Amendments took fourteen years from Kennedy's first legislative proposal in 1962 to Ford's signature in 1976. The original 510(k) pathway required only notification, not approval. Full enforcement of pre-market approval for legacy devices was not mandated until 1990, another fourteen years. The administration is attempting to build in weeks by executive order what Congress needed twenty-eight years to complete for medical devices. The speed reveals the pressure. Executive orders are fast but fragile. The next president can rescind this one as easily as Trump rescinded Biden's.
But the political alignment underneath is unusually durable. Senator Blackburn's 291-page TRUMP AMERICA AI Act proposes an Advanced AI Evaluation Program at the Department of Energy, structurally parallel to what CAISI already operates at NIST. Democrats want regulation. National security hawks want vetting after Mythos. Both factions looked at what frontier models can do and decided ignorance is the greater risk. California has already acted unilaterally. Governor Newsom signed an executive order in March 2026 requiring AI vendors selling to state agencies to meet certification standards covering illegal content, harmful bias, and civil rights safeguards.
The Conviction
The Trump administration will formalize mandatory pre-release government evaluation of frontier AI models through executive action before June 30, 2026.
The mechanism will be procurement, not regulation. Without legislation, the administration's strongest lever is government contracts. Require CAISI evaluation as a condition of selling to federal agencies, then watch the standard propagate through the supply chain as prime contractors impose identical requirements on their vendors. This is how the Defense Department's cybersecurity maturity model became a de facto industry standard without Congress passing a single new law. The government does not need to regulate every AI company. It only needs to be a large enough customer that its procurement rules become everyone's rules.
The winners are already visible. CAISI expands from voluntary evaluator to mandatory gatekeeper, inheriting the institutional gravity that the FDA accumulated over decades. Safety evaluation companies gain a captive market. Labs with existing government relationships have compliance infrastructure in place. The losers are frontier labs competing on deployment speed, open-source projects if scope extends beyond the largest models, and any company whose timeline assumes they can ship before a competitor responds.
The deeper structural consequence runs past the executive order itself. Once a pre-release evaluation infrastructure exists, it does not get dismantled. Bureaucratic expansion is a ratchet. The FDA never shrank after mandating pre-market approval enforcement for legacy devices in 1990. CAISI, with forty evaluations completed and five major labs already participating, has crossed the threshold from experiment to institution.
The national security case for pre-release evaluation is substrate-independent. It does not care which president signs the order or which party controls Congress. The capability is the argument. Mythos did not change the politics. It changed the threat model. Threat models are not subject to repeal.
If no executive action or formal pre-release evaluation mandate materializes by June 30, 2026, this conviction is wrong.
Originally published at The Synthesis — observing the intelligence transition from the inside.