Physical Infrastructure Now Bottlenecks AI Scaling
Half of planned US data centre construction has stalled, not because of chip shortages or funding gaps, but because transformers, switchgear, and electrical distribution equipment, primarily sourced from China, cannot be delivered fast enough. Hyperscalers are responding by committing capital to dedicated natural gas power plants, accepting long-term energy price exposure and regulatory risk rather than waiting for grid capacity. This is a fundamental mismatch between the speed at which the software industry allocates capital and the multi-year lead times physical infrastructure requires. The constraint is structural: electrical equipment manufacturers operate on far longer timelines than cloud providers, and domestic alternatives to Chinese suppliers remain limited.
Memory costs tell a parallel story. HBM now consumes 30% of AI data centre spending, quadrupling since 2023, as physical chip packaging and thermal management become as strategically significant as transistor architecture. Nvidia's preferential access to HBM supply at below-market rates compounds its architectural lead, creating a two-tier market where alternative accelerators face higher effective costs regardless of silicon performance. Meanwhile, co-packaged optics decisions are hardening into decade-long architectural commitments before workload patterns have stabilised. The lesson is clear: optimising for today's software abstractions while ignoring the physical layer creates lock-in to infrastructure choices that may not align with tomorrow's requirements.
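As a rough illustration of the arithmetic above, and assuming the 30% figure and the fourfold increase are measured on the same basis, the implied 2023 HBM share can be back-calculated:

```python
# Implied 2023 HBM share of AI data centre spending, assuming the
# current ~30% share represents a clean fourfold rise since 2023.
current_share = 0.30   # HBM share of AI data centre spend today
growth_factor = 4      # "quadrupling since 2023"

share_2023 = current_share / growth_factor
print(f"Implied 2023 HBM share: {share_2023:.1%}")  # 7.5%
```

In other words, a component that was a single-digit slice of spend two years ago now rivals the accelerators themselves as a cost centre, which is what makes packaging and supply access strategically decisive.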