Critical Infrastructure Bottlenecks Reshape AI Deployment Economics
PJM's emergency request for 15 gigawatts of new capacity, roughly the output of 15 large nuclear reactors, crystallises how infrastructure capacity now determines AI competitive position more than capital availability or technical capability does. The grid operator explicitly declared that AI training demand is outpacing infrastructure buildout faster than market mechanisms can close the gap, forcing interventions outside normal procurement channels. This dynamic extends across the stack: Japan's $16 billion Rapidus bet attempts to overcome semiconductor manufacturing's entrenched network effects through state capital alone, while US export controls buckle under 30% BIS staff attrition and 120-day licensing backlogs that incentivise gray-market distribution. CoreWeave's capture of multibillion-dollar commitments from both Anthropic and Meta reflects its position arbitraging credit access and NVIDIA supply-chain priority: a middleman role that exists precisely because hyperscalers remain capacity-constrained despite effectively unlimited capital.
The strategic significance compounds across decision cycles: companies securing dedicated power generation gain multi-year advantages over grid-dependent competitors, while governments pursuing chip sovereignty discover that $16 billion in subsidies cannot shortcut the tacit knowledge and process debugging required for competitive yields. Amazon's public positioning of custom silicon as an NVIDIA substitute rather than a complement signals that vertical integration into hardware becomes preferable when external dependencies create bottlenecks. The pattern suggests that infrastructure availability, not algorithmic capability or capital, increasingly determines who can deploy AI at scale.