Infrastructure bottlenecks shift from chips to power and cooling as physical constraints bind
Multiple developments signal that energy access and thermal management now constrain AI deployment more than semiconductor supply. SoftBank is building 10GW of dedicated generation capacity alongside its Ohio data centre. Nvidia and Emerald AI are partnering with power companies on flexible facilities that can modulate consumption. OpenAI is negotiating to purchase fusion power output directly from Helion. These moves indicate that AI infrastructure strategy now prioritises securing power before securing chips — a fundamental reordering of deployment dependencies.
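The "flexible facilities" idea amounts to demand response: pausing or slowing deferrable load (such as checkpointable training jobs) when the grid is stressed, while protecting latency-sensitive traffic. The sketch below is purely illustrative; the function name, thresholds, and flexible-load share are assumptions, not Emerald AI's or Nvidia's actual design.

```python
# Hypothetical sketch of grid-flexible data-centre scheduling: shed the
# deferrable share of AI load in proportion to a grid-stress signal.
# All names and parameters are illustrative assumptions, not a real API.

def target_power_mw(baseline_mw: float, grid_stress: float,
                    flexible_fraction: float = 0.25) -> float:
    """Scale back the deferrable share of load as grid stress (0..1) rises.

    baseline_mw: normal facility draw.
    flexible_fraction: assumed share of load (e.g. checkpointable training
        jobs) that can be paused without dropping inference traffic.
    """
    grid_stress = min(max(grid_stress, 0.0), 1.0)   # clamp signal to [0, 1]
    shed = baseline_mw * flexible_fraction * grid_stress
    return baseline_mw - shed

# A 100 MW site with 25% flexible load drops to 75 MW under full grid stress:
print(target_power_mw(100.0, 1.0))   # 75.0
print(target_power_mw(100.0, 0.0))   # 100.0
```

The design point is that only the flexible fraction ever modulates; the facility's firm load stays constant, which is what makes such sites tolerable to grid operators.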
Simultaneously, thermal constraints are becoming the primary limit on semiconductor scaling. Nvidia's next-generation racks will consume 160kW each and require full liquid cooling, a power density at which air cooling is no longer viable. In advanced chips, heat flux exceeding 1,000 W/cm² outpaces current metrology, leaving gaps in the thermal measurements needed to validate performance. The combined effect is that AI clusters face physical density limits independent of Moore's Law, forcing operators to build more facilities rather than pack more compute into existing footprints.
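A back-of-envelope heat-balance calculation shows why 160kW per rack forces the switch to liquid. Using Q = P / (ρ·cp·ΔT) with textbook properties for air and water (the temperature rises assumed below are illustrative, not vendor figures):

```python
# Why a 160 kW rack rules out air cooling: compare the coolant flow each
# medium needs to carry the same heat load. Constants are standard
# textbook values; the delta-T assumptions are illustrative.
RACK_POWER_W = 160_000   # next-generation rack power from the text
AIR_DENSITY = 1.2        # kg/m^3 at roughly 20 C
AIR_CP = 1005.0          # J/(kg*K), specific heat of air
WATER_CP = 4186.0        # J/(kg*K), specific heat of water
AIR_DELTA_T = 15.0       # K, assumed inlet-to-outlet air temperature rise
WATER_DELTA_T = 10.0     # K, assumed coolant temperature rise

def air_flow_m3s(power_w: float, delta_t: float = AIR_DELTA_T) -> float:
    """Volumetric airflow needed to remove power_w: Q = P / (rho * cp * dT)."""
    return power_w / (AIR_DENSITY * AIR_CP * delta_t)

def water_flow_ls(power_w: float, delta_t: float = WATER_DELTA_T) -> float:
    """Water flow in litres/second for the same load (water is ~1 kg/L)."""
    return power_w / (WATER_CP * delta_t)

air = air_flow_m3s(RACK_POWER_W)     # ~8.8 m^3/s (~18,700 CFM) for one rack
water = water_flow_ls(RACK_POWER_W)  # ~3.8 L/s of water carries the same heat
print(f"air: {air:.1f} m^3/s, water: {water:.1f} L/s")
```

Moving roughly nine cubic metres of air per second through a single rack is not practical, while a few litres per second of water is routine plumbing; that three-orders-of-magnitude gap in volumetric heat capacity is the physics behind the liquid-cooling mandate.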