From Silicon to Orbit: The Race to Own the AI Infrastructure Stack
Three distinct vertical integration moves converged this week. NVIDIA committed $300 million to Corning to secure domestic optical fibre supply, extending its stack from GPU silicon through networking ASICs and proprietary transport protocols to the physical cable plant. SpaceX filed permits for the $119 billion Terafab facility, articulating an ambition to manufacture chips for its own Starshield, Starlink, and xAI operations, eliminating dependence on both TSMC and NVIDIA at once. And xAI's Colossus 1 data centre, originally built for Grok training, is now generating external revenue by renting its full 300 MW to Anthropic, validating the thesis that xAI's most defensible business may be neocloud infrastructure rather than model competition.
The cumulative effect is a market structure in which the largest AI infrastructure participants are racing to own their critical inputs rather than procure them commercially. For hyperscalers and independent AI labs, the trend cuts both ways: it creates more supply options in the near term as new entrants build capacity, but it progressively erodes the commodity nature of compute, interconnect, and chip supply as those inputs are locked into proprietary ecosystems. AMD's bifurcated results, with record data centre CPU revenues alongside declining consumer and gaming guidance, confirm that capital and capacity are flowing toward the vertically integrated AI infrastructure segment at the direct expense of standardised commercial markets.