The Market Draws a Line Between AI Earners and AI Spenders
The Q1 2026 earnings cycle produced an unusually clean verdict. Alphabet, Amazon, and Microsoft were all rewarded by markets after reporting AI infrastructure investment constrained by capacity rather than demand: their returns would have been higher with more hardware online, which validates continued aggressive spend. Meta was penalised despite record revenue growth precisely because its $125–145 billion capex revision, accompanied by an explicit acknowledgement of higher component pricing, could not be anchored to an equivalent enterprise monetisation story. The 20 million paid Microsoft Copilot users and Google Cloud's $20 billion quarterly milestone gave investors concrete throughput metrics; Meta offered a build-out rationale without a comparable demand signal.
The supply-side implication is at least as significant as the investor reaction. Samsung's 48-fold chip profit surge and Murata's data center-driven beat confirm that the component stack, from passive components through HBM, is in a genuine supply-demand imbalance that is extracting rents from every hyperscaler simultaneously. Meta's disclosure names this dynamic explicitly. For infrastructure planners, two parallel constraints are now confirmed: GPU and HBM scarcity at the accelerator layer, and, as evidenced by Meta's multi-billion-dollar Graviton5 deal with AWS, scarcity of high-core-count CPUs for agentic inference workloads. The capex cycle is not decelerating; it is accelerating into a tighter and more expensive component environment.