Private Access Decisions Are Writing AI Governance by Default
Anthropic's tiered release of Mythos — first to major cloud platforms, now to UK banks and potentially US federal agencies — is hardening into a governance architecture by commercial default. No jurisdiction has established legally binding criteria for who may access a model assessed as too dangerous for general release, what due diligence is required, or what liability attaches to institutional deployers. The UK's AI Safety Institute has remained publicly silent on Mythos risk. The White House is engaging Anthropic through ad hoc executive branch meetings rather than any formal procurement or oversight channel. The result is that Anthropic itself holds the most consequential governance lever: who gets access and on what terms.
A parallel vulnerability in the EU AI Act's Omnibus trilogue reinforces the pattern. The headline legislation is enacted, but its substantive content is being renegotiated in low-scrutiny technical talks, just as commercial access decisions are outpacing regulatory readiness elsewhere. CDT Europe's warning that both Parliament and Council risk weakening fundamental rights protections signals that even the world's most comprehensive AI governance framework is being reshaped in processes that draw far less public attention than the original drafting did. Across jurisdictions, the gap between what is being deployed and what is formally regulated is widening faster than legislative processes can close it.