Governments Assert Direct Control Over AI as Critical Infrastructure
The Anthropic-Pentagon confrontation and China's OpenClaw restrictions signal a fundamental shift in how major powers treat AI companies: not as regulated commercial entities but as critical infrastructure requiring direct operational control. The Trump administration's use of supply chain risk designations as enforcement mechanisms, combined with China's administrative bans on state use of foreign agentic AI, reveals governments converging on treating AI deployment as a state security matter governed by emergency powers rather than traditional lawmaking. This creates severe regulatory uncertainty for AI firms operating across jurisdictions, as safety commitments designed to be universal now collide with governments issuing conflicting compliance demands.
The pattern extends beyond policy to physical infrastructure. Iran's bombing of Gulf datacenters establishes AI computing capacity as a legitimate military target, forcing nations hosting significant AI infrastructure to treat these facilities as contested assets in interstate conflict. This will drive geographic distribution of workloads, hardening of critical sites with air defense systems, and reconsideration of where AI infrastructure sits relative to potential adversaries, a pressure particularly acute for countries positioning themselves as regional AI hubs through hyperscaler partnerships.