Compute & Infrastructure
Top Line
Anthropic has signed a deal to take the full compute capacity of SpaceX-xAI's Colossus 1 data centre and has expressed interest in multiple gigawatts of orbital AI compute, marking a structural shift in how frontier AI labs source capacity outside traditional hyperscaler arrangements.
SpaceX has filed permits for a $55 billion semiconductor fab in rural Texas — branded Terafab — with total investment potentially reaching $119 billion, nearly six times the $20 billion figure disclosed at Musk's March announcement, signalling either a dramatic scope expansion or speculative capital planning.
Denmark's grid operator Energinet has halted new data centre connection requests after accumulated applications on its grid reached 60 GW, making Denmark the latest European market to impose hard infrastructure limits on AI buildout.
Global semiconductor sales hit $298.5 billion in Q1 2026, putting the industry on track to exceed $1 trillion for the full year — a threshold that would confirm AI-driven demand has structurally re-rated the entire sector.
NVIDIA is investing $300 million in Corning to build three new US optical fibre plants, boosting domestic production capacity by over 50% and extending NVIDIA's vertical integration strategy beyond silicon into the physical interconnect layer.
Key Developments
Anthropic-SpaceX Colossus Deal Reshapes Frontier Compute Sourcing
Anthropic has entered a compute agreement with SpaceX to access the full capacity of the Colossus 1 data centre — originally built by xAI for Grok model training — and has signalled interest in multi-gigawatt orbital compute capacity, according to Bloomberg and Data Center Dynamics. The compute arrangement is a signed agreement; the orbital component remains expressed interest only, with no contracts or hardware confirmed.
The strategic implication is significant: Anthropic, backed by Amazon and Google, is deliberately diversifying compute supply away from its own investors' infrastructure. Colossus 1's scale — originally reported at 100,000 Hopper-class GPUs — gives Anthropic a large, immediately available inference and training block. The orbital compute angle, while speculative, points toward a longer-term thesis that SpaceX's Starlink constellation could serve as a latency-tolerant distributed compute fabric. For the broader market, this deal signals that frontier AI labs are willing to pay a premium for capacity not controlled by their cloud equity partners, which has direct implications for AWS and Google Cloud's strategic leverage over Anthropic.
Terafab Filing Reveals Dramatic Scope Expansion — or Speculative Ambition
SpaceX has filed permits for a semiconductor fabrication facility in rural Texas under the Terafab brand, with capital figures in the filing reaching $55 billion and total potential investment cited at $119 billion — versus the $20 billion announced by Musk in March, according to Tom's Hardware. This is a permit filing, not a confirmed construction contract or funded commitment, and the gap between the March figure and the filing figures is large enough to warrant scepticism about whether the higher numbers represent firm plans or regulatory envelope-maximising.
If even a fraction of the stated investment materialises, Terafab would represent one of the largest single semiconductor infrastructure commitments in US history, rivalling TSMC's Arizona and Intel's Ohio investments combined. The strategic logic is clear: SpaceX's Starshield, Starlink, and xAI GPU clusters all require chips at scale, and domestic production insulates those programs from export control exposure. However, the absence of a disclosed process node partner, equipment supplier agreements, or technology licensing arrangements means this project is firmly in the speculative column. Permitting timelines for projects of this scale typically run 2-4 years before ground breaks.
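The gap between the announced and filed figures is easy to check directly. The sketch below uses only the reported numbers; the ratios are our arithmetic, not figures from the filing:

```python
# Reported Terafab capital figures, $bn
announced_bn = 20.0        # Musk's March announcement
permit_filing_bn = 55.0    # capital figure in the permit filing
total_potential_bn = 119.0 # total potential investment cited

print(f"Filing vs announcement: {permit_filing_bn / announced_bn:.2f}x")             # 2.75x
print(f"Total potential vs announcement: {total_potential_bn / announced_bn:.2f}x")  # 5.95x
```

The 5.95x multiple is where the "nearly six times" characterisation comes from; even the conservative permit figure is almost triple the announced commitment.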
European Grid Constraints Harden as Denmark Joins Capacity Freeze
Denmark's Energinet grid operator has suspended new data centre connection requests after accumulated demand applications reached 60 GW against a national grid capacity base that cannot absorb that volume, according to Tom's Hardware. Denmark joins Ireland, the Netherlands, and parts of the UK in imposing hard grid connection freezes, a pattern that is now systemic across Northern European markets that had previously been preferred for data centre buildout due to cool climates and renewable energy availability.
The 60 GW figure demands context: Denmark's total installed generation capacity is approximately 20 GW, so the pipeline of data centre requests alone represents three times the country's entire generating base. This is not a statement of real near-term demand — much of it reflects speculative land-banking by operators filing connections to secure optionality. The freeze nonetheless creates real delays for operators with genuine build plans. The University of Southern Denmark simultaneously brought an AI supercomputer online in Sønderborg with waste heat recovery feeding district heating — a model of compliant, grid-integrated AI infrastructure, and one likely to receive regulatory preference during connection rationing.
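The scale of the overhang can be quantified from the two reported figures alone:

```python
requested_gw = 60.0   # accumulated data centre connection requests on the Danish grid
installed_gw = 20.0   # approximate Danish installed generation capacity

oversubscription = requested_gw / installed_gw
print(f"Connection requests equal {oversubscription:.0f}x installed capacity")  # 3x
```

Any ratio above roughly 1x would already signal speculative filing, since data centres cannot consume a country's entire generation; a 3x ratio makes the land-banking interpretation hard to avoid.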
NVIDIA Extends Vertical Integration into Optical Fibre with $300M Corning Investment
NVIDIA has committed $300 million to Corning to fund construction of three new US-based optical fibre manufacturing plants, a deal that would increase domestic fibre production capacity by over 50%, according to Tom's Hardware. This is a confirmed investment, not a letter of intent. The strategic framing is explicit: NVIDIA wants its US-based deployment partners — hyperscalers and co-lo operators building NVL-scale clusters — to have reliable domestic fibre supply without exposure to import disruption.
This move complements NVIDIA's Spectrum-X Ethernet MRC protocol, a custom RDMA transport layer already deployed in gigascale AI clusters, which ServeTheHome reports is designed specifically for the latency and bandwidth requirements of multi-thousand GPU training runs. Together, the fibre investment and proprietary network protocol mean NVIDIA is progressively owning the physical and logical interconnect stack surrounding its GPUs — silicon, networking ASICs, transport protocol, and now the cable plant itself. This creates switching costs well beyond the GPU purchase decision.
AMD and the Semiconductor Industry Post Record Numbers Driven Entirely by AI Data Centres
AMD posted record Q1 results driven by data centre CPU demand, with the company already developing Zen 7 architecture and planning a more specialised EPYC portfolio targeting distinct AI and cloud workloads, per Tom's Hardware. AMD simultaneously warned that consumer and gaming revenue will decline in Q2, a divergence that confirms AI infrastructure is now the only structurally growing segment of the semiconductor market. The global semiconductor industry hit $298.5 billion in Q1 sales, according to Semiconductor Industry Association data cited by Tom's Hardware, putting the full year on track to exceed $1 trillion.
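The $1 trillion trajectory follows from a simple run-rate extrapolation of the Q1 figure. The flat-quarter assumption below is ours for illustration, not SIA guidance:

```python
q1_sales_bn = 298.5                  # SIA-reported global semiconductor sales, Q1 2026, $bn
flat_run_rate_bn = q1_sales_bn * 4   # assumes zero sequential growth across the year

print(f"Flat run-rate full year: ${flat_run_rate_bn:,.0f}bn")  # $1,194bn
# Even with no quarter-on-quarter growth the year clears $1tn;
# any AI-driven sequential growth only widens the margin.
```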
Apple's quiet discontinuation of 128GB Mac Studio and earlier 512GB models due to supply constraints — confirmed by Tom's Hardware — illustrates the downstream effect: high-bandwidth memory and advanced packaging capacity is being allocated toward AI accelerators at the expense of consumer products. Arm's guidance simultaneously flagged smartphone market weakness while projecting AI data centre growth as the compensating driver. The pattern is consistent across the supply chain: memory, packaging, advanced nodes, and interconnects are all flowing toward AI infrastructure at the cost of consumer and mobile segments.
Signals & Trends
Crypto-to-AI Infrastructure Conversion Is Accelerating as a Capacity Channel
Two separate transactions this week illustrate the same structural trend: Hut 8 signed a $9.8 billion lease on a Texas AI data centre after pivoting from crypto mining, and Core Scientific acquired a 440MW cryptomine in Muskogee to convert into AI campus capacity. Former Bitcoin mining sites offer three attributes that are scarce in greenfield AI development: existing large grid connections (often 100-500MW), industrial power infrastructure, and rural land with low regulatory friction. The conversion economics are compelling because the grid connection — which can take 3-7 years to secure in constrained markets — already exists. This pipeline of convertible assets represents a meaningful near-term capacity channel that does not appear in traditional data centre construction forecasts, and operators with mining legacies are effectively monetising stranded infrastructure into the highest-value compute use case available.
Sovereign Compute Strategies Are Moving from Policy to Institutional Execution
The UK Semiconductor Centre's appointment of Andy McLean — a veteran of Analog Devices, Texas Instruments, and National Semiconductor — as its first CEO marks the transition from policy framework to operational institution. This follows South Korea's equity market overtaking Canada's, propelled by chip sector valuations, and Alibaba's semiconductor unit driving investor differentiation from Tencent. Across the US, EU, and Asia-Pacific, the pattern is consistent: governments and major corporates are treating domestic semiconductor and compute capacity as strategic infrastructure rather than commercial procurement. The practical implication for infrastructure professionals is that national programmes will increasingly compete with commercial operators for the same constrained inputs — TSMC capacity, ASML tools, power grid connections, and specialised engineering talent — creating allocation conflicts that will not be resolved by price alone.
On-Device AI Model Distribution Is Creating a New Class of Untracked Compute Demand
Google Chrome's silent download of a 4GB Gemini Nano weights file to user devices — flagged by researchers as potentially violating EU law and collectively wasting thousands of kilowatt-hours of energy across the installed base — signals an emerging infrastructure dynamic that sits below enterprise monitoring thresholds. As browser vendors, OS providers, and application developers push inference workloads to edge devices, the aggregate compute, storage, and energy consumption is distributed across hundreds of millions of endpoints rather than centralised in metered data centres, so the true energy and storage footprint of AI deployment is systematically undercounted in standard infrastructure analyses. For enterprise IT and sustainability officers, the implication is that AI energy accounting frameworks built around data centre consumption will increasingly miss a material and growing share of total AI infrastructure cost.
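A back-of-envelope calculation makes the distributed-footprint point concrete. The endpoint count below is a purely illustrative assumption — no installed-base figure was reported — and only the 4GB file size comes from the source:

```python
weights_gb = 4                  # reported size of the Gemini Nano weights file
endpoints = 400_000_000         # ILLUSTRATIVE assumption for affected Chrome installs

total_storage_pb = weights_gb * endpoints / 1_000_000  # GB -> PB
print(f"Aggregate storage consumed: {total_storage_pb:,.0f} PB")  # 1,600 PB (1.6 EB)
```

Under that assumption, a single silent model push occupies storage on the order of a large data centre's entire footprint — yet none of it appears in any operator's capacity metrics.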