Maksym Mosiura

The Regulator Always Arrives... even to AI

AI just had its regulation week

I've been watching this pattern long enough to recognize it. Tech gets big -> tech gets scary -> governments show up. Last week was the week AI got the visit.

Five days, four headlines, one direction.

What actually happened

May 4. The New York Times reported the Trump White House is drafting an executive order to create a working group that would review frontier AI models before they ship. Officials from Anthropic, Google, and OpenAI got briefed the week before. Bloomberg confirmed it that same evening.

May 5. NIST announced that Google, Microsoft, and xAI have agreed to share unreleased models with CAISI (the Center for AI Standards and Innovation, sitting under the Commerce Department) before launch. CAISI has already run more than 40 model evaluations. The OpenAI and Anthropic agreements signed in 2024 were renegotiated to reflect new Commerce directives.

May 7. The EU Council and Parliament reached a provisional deal to streamline the AI Act ahead of August 2026, when the rules for high-risk systems take effect. Brussels also published draft guidelines for Article 50 transparency obligations that same week.

So in five days: a US administration that campaigned on deregulating AI moved toward pre-launch review. Three of the biggest labs voluntarily handed over their unreleased models. And the EU finalized the world's most comprehensive AI rulebook. All in the same week.

The trigger has a name

Mythos. Anthropic's model. They claim it's "far ahead" on cybersecurity capability, restricted access to a vetted few, briefed senior officials, and declined to release it publicly. April is when the rumor turned into a White House meeting.

What changed isn't the rhetoric or the lobbying. What changed is that a frontier lab said, in writing, that they didn't feel safe shipping their own product. When the people building the thing tell you it's too dangerous to release, the political conversation stops being about whether to regulate. It becomes about who gets the keys.

The pattern, with dates

I keep coming back to how predictable this is.

Social media had its 2004 to 2018 run. Facebook launched in 2004. Cambridge Analytica broke in March 2018. GDPR came into force in May 2018, the DSA followed in 2022. Roughly 14 years from launch to enforceable rules.

Crypto had its 2009 to 2023 run. Bitcoin whitepaper in 2008, network live in January 2009. FTX collapsed in November 2022. MiCA passed in the EU in May 2023. The SEC went on the offensive that summer. About 14 years again.

AI is having its 2022 to 2026 run. ChatGPT shipped in November 2022. CAISI was running 40+ evaluations by April 2026. Pre-launch model review is being drafted as I'm writing this. Three and a half years.

Same cycle. Faster clock ⏰.

Where I think this goes

A few predictions, with rough timelines. I'll be wrong on at least one.

By end of 2026: a US executive order formalizing the model review group. Voluntary today, mandatory for any lab above some compute threshold inside 18 months. The threshold will get argued about in Congress and won't matter much, because the three labs that count are already participating.

By mid-2027: the first enforcement action. Probably not against a US frontier lab. More likely against an open-weight release from a Chinese lab or a smaller US shop that ignored the framework. The case will be framed around national security, not consumer harm. That framing will stick.

By end of 2027: insurance gets involved. You won't be able to deploy a frontier model in a regulated sector without a CAISI evaluation on file, the same way you can't run a hospital without HIPAA paperwork. Compliance officers become the second-largest line item in AI deployment budgets after compute.

The thing nobody's pricing in yet: the open-source side gets squeezed first.

If Meta keeps releasing Llama weights and one of them gets cleanly traced to a cyberattack, the pressure to require pre-release review for open-weight releases will be politically impossible to resist. That fight is coming, and the labs releasing weights know it.

What it means if you're building

Three things worth internalizing.

The voluntary phase is the easy phase. If you're building on a frontier model, your dependency just became a regulated input. Contract terms get longer. Pricing gets stranger. Release schedules get less predictable.

Evaluations are an artifact now, not a vibe. Labs that already built reproducible red-team pipelines are about to have a structural advantage. Everyone else will be retrofitting. If you're a smaller lab or a deployer, start writing your evals down now.
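What "writing your evals down" looks like in practice is mostly discipline: every red-team run becomes a record that pins the exact model, the exact prompt set, and the results, so anyone can later verify what was tested. Here's a minimal sketch of that idea — the field names and structure are my own illustration, not any lab's or CAISI's actual format:

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class EvalRecord:
    """One red-team eval run, written down as a reproducible artifact."""
    model_id: str            # which model snapshot was evaluated
    eval_name: str           # which suite ran
    prompt_set_sha256: str   # hash of the exact prompts used
    passed: int
    failed: int
    run_at: str              # UTC timestamp of the run

def hash_prompts(prompts: list[str]) -> str:
    # Hash the exact prompt set so a later run can prove
    # it used the same inputs, not a drifted copy.
    blob = "\n".join(prompts).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

def record_eval(model_id: str, eval_name: str,
                prompts: list[str], results: list[bool]) -> str:
    # results[i] is True if the model passed prompt i
    rec = EvalRecord(
        model_id=model_id,
        eval_name=eval_name,
        prompt_set_sha256=hash_prompts(prompts),
        passed=sum(results),
        failed=len(results) - sum(results),
        run_at=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(rec), indent=2)
```

Commit the JSON next to the code that produced it. The hash is the point: when a regulator (or a customer) asks what you tested, "here's the record" beats "we ran some prompts in April."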

National security is the lens that wins. Not bias, not jobs, not copyright. The framing that moved a deregulatory White House off its position was cyber capability. That framing will shape the next round of rules, and it favors big labs with classified relationships over open-weight ecosystems.

The takeaway

Social media got its rules. Crypto got its rules. AI just started getting them, four times faster than either. The interesting question isn't whether the rules are coming. They're here. The question is whether they get written carefully, or in response to the first incident bad enough to force the issue.
