The Hidden Constraints Beneath the $1 Trillion AI Build
The Goldman Sachs $1 trillion projection frames AI infrastructure spending as a durable multi-year obligation, and SoftBank's $40 billion syndicated loan for its OpenAI stake puts that thesis to a live capital-markets test. But two distinct challenges are accumulating beneath this consensus. The first is physical: copper, the essential conductor for data centre power distribution and grid transmission, faces a structural US supply shortfall that domestic projects such as Rio Tinto's Resolution mine cannot bridge on any near-term timeline, given permitting and capital constraints. This creates a non-semiconductor chokepoint absent from most capex models: whoever wins the model race still depends on a commodity supply chain dominated by Chile and China.
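To make the chokepoint concrete, here is a minimal back-of-envelope sketch of how copper intensity would enter a capex model. Every figure in it (build-out size, tonnes of copper per MW, incremental domestic supply) is a hypothetical placeholder, not a sourced estimate; the point is the structure of the calculation, not the numbers.

```python
# Illustrative sketch: copper implied by a data centre build-out.
# ALL inputs below are hypothetical placeholders, not sourced estimates.

def copper_demand_tonnes(new_capacity_mw: float, tonnes_per_mw: float) -> float:
    """Copper implied by new data centre capacity, ignoring grid upgrades."""
    return new_capacity_mw * tonnes_per_mw

build_out_mw = 40_000          # assumed multi-year build-out: 40 GW
intensity_t_per_mw = 30.0      # assumed copper intensity per MW of capacity

demand_t = copper_demand_tonnes(build_out_mw, intensity_t_per_mw)
print(f"Implied copper demand: {demand_t:,.0f} t")  # 1,200,000 t

# Compare against an assumed 300,000 t/yr of incremental domestic supply
# to see why one large mine cannot close the gap on a near-term timeline.
incremental_supply_t_per_yr = 300_000
print(f"Years of new supply consumed: {demand_t / incremental_supply_t_per_yr:.1f}")
```

Whatever inputs one prefers, the structure is the same: demand scales linearly with the build-out, while incremental supply arrives in discrete, permit-gated increments years apart.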
The second challenge is analytical. Scaling sceptics argue that LLMs are approaching a data ceiling and that diminishing returns on additional compute spend will erode the economic case for the current pace of data centre construction. Two datapoints suggest the efficiency frontier is already shifting: EDA revenue increasingly concentrated in advanced-node AI chip design at the expense of legacy tools, and NPU architecture analysis identifying data movement, not raw arithmetic throughput, as the binding inference constraint. If inference is memory-bound, each additional dollar of FLOPs buys progressively less delivered capability, which complicates any straightforward extrapolation of compute demand. These two views, structural capex durability versus physical and architectural limits, are not yet reconciled in market pricing, and the gap between them is the most consequential unresolved question in AI investment strategy.
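The diminishing-returns argument can be stated quantitatively. Below is a minimal sketch using the standard Chinchilla-style parametric loss, L(N, D) = E + A/N^α + B/D^β. The coefficients are illustrative values in the ballpark of published fits, not a claim about any particular model, and the 70B-parameter / 1.4T-token baseline is likewise assumed.

```python
# Sketch of the sceptics' diminishing-returns case under a Chinchilla-style
# parametric loss. Coefficients are illustrative, near published fits.

E, A, B = 1.69, 406.4, 410.7   # irreducible loss and scale terms (assumed)
alpha, beta = 0.34, 0.28       # diminishing-return exponents (assumed)

def loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

# Scale model and data together from an assumed 70B / 1.4T-token baseline;
# training compute grows as ~6*N*D while each step's loss gain shrinks.
prev = None
for k in [1, 2, 4, 8, 16]:
    n, d = 70e9 * k, 1.4e12 * k
    c = 6 * n * d                     # standard training-FLOPs estimate
    l = loss(n, d)
    delta = "" if prev is None else f"  (gain {prev - l:.4f})"
    print(f"compute {c:.2e} FLOPs: loss {l:.4f}{delta}")
    prev = l
```

Each successive quadrupling of compute buys a smaller loss reduction than the one before, which is the arithmetic behind the claim that spend can keep rising while delivered capability per dollar falls.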
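The data-movement claim can be illustrated the same way with a simple roofline calculation: at batch size one, a decode step reads each weight roughly once, so arithmetic intensity sits far below the ridge point of a modern accelerator and memory bandwidth, not FLOPs, sets throughput. The peak-FLOPs and bandwidth figures below are assumed round numbers, not the specs of any real chip.

```python
# Roofline sketch of memory-bound decoding. Hardware numbers are assumed.

peak_flops = 1.0e15            # assumed accelerator peak, FLOP/s
mem_bw = 3.0e12                # assumed HBM bandwidth, bytes/s
ridge = peak_flops / mem_bw    # FLOPs/byte needed to become compute-bound

# GEMV for one decode step of a d x d layer in fp16 (2 bytes per weight):
d = 8192
flops = 2 * d * d              # one multiply-accumulate per weight
bytes_moved = 2 * d * d        # weight traffic dominates at batch size 1
intensity = flops / bytes_moved  # ~1 FLOP/byte

attainable = min(peak_flops, intensity * mem_bw)
print(f"ridge point: {ridge:.0f} FLOPs/byte, GEMV intensity: {intensity:.0f}")
print(f"attainable: {attainable:.2e} FLOP/s "
      f"({attainable / peak_flops:.1%} of peak)")  # memory-bound
```

Under these assumptions the accelerator delivers well under one percent of its paper FLOPs during decode, which is why architecture work targets data movement and why raw compute capacity is a weak proxy for inference supply.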