
Compute & Infrastructure

19 sources analysed for today's brief

Top Line

Half of planned US data centre builds have been delayed or cancelled owing to shortages of power infrastructure equipment and supply chain constraints on components sourced from China, threatening the $650 billion AI buildout planned for 2026.

Memory will consume 30% of hyperscaler AI data centre spending in 2026 — a fourfold increase since 2023 — with Nvidia securing preferential supply terms below standard market rates, further entrenching its ecosystem advantage.

Arm processors are projected to power 90% of AI servers using custom silicon by 2029 as hyperscalers prioritise efficiency and control over general-purpose x86 architectures.

Iran claims to have struck Oracle and Amazon data centres in Dubai and Bahrain, marking a new phase of geopolitical risk targeting physical AI infrastructure in the Middle East.

Key Developments

US data centre buildout hitting physical infrastructure limits

Half of planned US data centre construction projects have been delayed or cancelled, according to Tom's Hardware, as power infrastructure components become the binding constraint on AI capacity expansion. Cloud giants plan to deploy $650 billion in AI infrastructure in 2026, but availability of transformers, switchgear, and other electrical distribution equipment sourced primarily from China has created a bottleneck that software optimisation cannot solve. The delays reveal a fundamental mismatch between the speed of capital allocation and the multi-year lead times required to manufacture and deploy grid-scale electrical infrastructure.

This constraint is structural rather than temporary. Electrical equipment manufacturers operate on different timelines than cloud providers, and domestic alternatives to Chinese suppliers remain limited. The delays will force hyperscalers to prioritise existing facilities for expansion and potentially shift more workload to regions with available grid capacity, creating geographic imbalances in AI compute distribution.

Why it matters

Physical infrastructure is now the primary constraint on AI scaling, not chip supply or capital availability — a fundamental shift in bottleneck analysis for the industry.

What to watch

Whether hyperscalers begin acquiring electrical equipment manufacturers or signing long-term offtake agreements to secure future supply, similar to how they handle chip capacity.

Memory costs reshape data centre economics and entrench Nvidia's position

Memory will account for 30% of hyperscaler AI data centre spending in calendar year 2026, up from roughly 7.5% in 2023, according to SemiAnalysis estimates reported by Tom's Hardware. The quadrupling of memory's share reflects both the explosive growth in HBM demand for AI accelerators and Nvidia's ability to negotiate preferential pricing well below standard market rates. This pricing advantage compounds Nvidia's architectural lead: customers not only get superior performance but also better total cost of ownership through supply chain leverage competitors cannot match.
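The arithmetic behind these figures can be made concrete. The share growth (7.5% to 30%) comes from the SemiAnalysis estimates above; the pricing example below is purely hypothetical — the discount, per-gigabyte price, and capacity figures are illustrative placeholders, not reported numbers — but it shows how even a modest preferential rate compounds into a per-accelerator cost gap at hyperscale volumes:

```python
# Illustrative arithmetic only: the share figures are from the brief
# (SemiAnalysis via Tom's Hardware); all prices are assumed examples.

def memory_share_growth(share_2023: float, share_2026: float) -> float:
    """Multiple by which memory's share of hyperscaler spend grew."""
    return share_2026 / share_2023

growth = memory_share_growth(0.075, 0.30)
print(f"Memory share grew {growth:.1f}x between 2023 and 2026")  # 4.0x

def effective_hbm_cost(market_price_per_gb: float, discount: float,
                       gb_per_accelerator: int) -> float:
    """Memory bill per accelerator at a negotiated discount to market.

    All three inputs are hypothetical placeholders, not reported figures.
    """
    return market_price_per_gb * (1 - discount) * gb_per_accelerator

# Hypothetical: $15/GB HBM at market, a 20% preferential discount,
# and a 192 GB memory stack per accelerator.
preferred_bill = effective_hbm_cost(15.0, 0.20, 192)
market_bill = effective_hbm_cost(15.0, 0.00, 192)
print(f"Per-accelerator memory cost gap: ${market_bill - preferred_bill:,.0f}")
```

Multiplied across hundreds of thousands of accelerators, a per-unit gap of this kind is the mechanism by which supply chain leverage, not silicon performance, sets the competitive floor.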

The memory concentration creates a two-tier market. Nvidia's preferred access to HBM capacity at below-market rates means alternative accelerator designs face higher effective costs even when claiming comparable silicon performance. Memory suppliers prioritise Nvidia volume, reinforcing a cycle where competing architectures struggle to achieve cost parity at deployment scale.

Why it matters

Memory supply and pricing are now as strategically significant as chip architecture in determining AI infrastructure economics, with implications for which vendors can compete at hyperscale.

What to watch

Whether memory suppliers begin offering similar preferential terms to AMD, custom silicon designers, or cloud providers building their own accelerators to reduce dependence on Nvidia.

Arm dominance in custom AI server silicon solidifies by 2029

Arm-based processors will power 90% of AI servers using custom silicon by 2029, according to industry analysis cited by Tom's Hardware. Hyperscalers building in-house CPU designs are converging on Arm architectures for efficiency and control, marginalising both x86 and RISC-V in the AI data centre segment. Amazon's Graviton, Google's Axion, and Microsoft's Cobalt all use Arm instruction sets, establishing a de facto standard for cloud-native AI workloads. The trend reflects a strategic shift where hyperscalers prioritise workload-specific optimisation over general-purpose compatibility.

Intel's data centre chief has publicly disputed claims that agentic AI requires new CPU architectures, as noted by The Register, arguing that existing cores already meet agentic workload requirements. This disagreement highlights the architectural debate, but deployment decisions by AWS, Google, and Microsoft carry more weight than vendor positioning. The Arm shift in custom silicon is already underway, driven by total cost of ownership rather than theoretical performance debates.

Why it matters

The consolidation around Arm in custom AI server chips reduces architectural diversity and increases dependence on a single instruction set architecture controlled by a UK-based, SoftBank-owned company.

What to watch

Whether geopolitical tensions or supply chain concerns prompt any hyperscaler to hedge with RISC-V implementations, or if Arm's licensing model proves flexible enough to maintain dominance.

Geopolitical attacks target physical data centre infrastructure in Middle East

Iran's Islamic Revolutionary Guard Corps claimed responsibility for attacks on an Oracle data centre in Dubai and an Amazon facility in Bahrain, according to Tom's Hardware. The country has explicitly threatened attacks against Nvidia, Intel, and other technology companies operating in the region. Whether the claims are accurate or inflated for propaganda purposes, the targeting of data centre infrastructure represents an evolution in state-level conflict beyond traditional cyber operations.

Physical attacks on data centres create a new risk category for infrastructure planning. The Middle East has emerged as a significant AI compute hub due to available power and proximity to markets, but this geographic diversification now carries kinetic risk. Separately, Super Micro Computer co-founder Yih-Shyan Liaw pleaded not guilty to charges of smuggling billions of dollars in Nvidia servers to China, as reported by Tom's Hardware, illustrating continued pressure on AI hardware export controls.

Why it matters

Data centre infrastructure is becoming a target for state-level conflict, adding physical security and geopolitical stability to site selection criteria alongside power, cooling, and connectivity.

What to watch

Whether hyperscalers accelerate geographic diversification away from geopolitically contested regions or invest in hardened facilities designed to withstand kinetic attacks.

Nvidia-Marvell $2 billion deal extends NVLink ecosystem control

Nvidia's $2 billion investment in Marvell, reported by The Next Platform, extends beyond NVLink Fusion switches to encompass broader infrastructure control. Marvell supplies custom silicon for hyperscaler networking and storage, making this partnership strategic for maintaining Nvidia's end-to-end influence over AI data centre architecture. The deal ensures NVLink compatibility across multiple infrastructure layers while potentially limiting Marvell's ability to support competing interconnect standards with equal priority.

This follows Nvidia's pattern of using financial relationships and technology licensing to shape the ecosystem around its architectures. By investing in infrastructure component suppliers, Nvidia reduces the likelihood of those companies prioritising alternative interconnect standards or competitive accelerator designs.

Why it matters

Nvidia is using capital deployment to influence infrastructure suppliers, extending architectural lock-in beyond GPUs into networking and system-level components.

What to watch

Whether other accelerator vendors or hyperscalers respond by acquiring or funding alternative infrastructure component suppliers to break Nvidia's ecosystem control.

Signals & Trends

Co-packaged optics becoming an architectural commitment with decade-long implications

Research from the University of Wisconsin and MIT, cited by Semiconductor Engineering, argues that co-packaged optics decisions are not simply component choices but architectural commitments affecting data centre design for a decade or more. AI accelerator workloads are forcing fundamental rethinking of optical interconnect architectures, but co-packaged optics lock facilities into specific thermal, power, and upgrade paths that are difficult to reverse. The paper suggests the industry may be solving the wrong problems by optimising individual components rather than system-level architectures. This matters because data centre operators making co-packaged optics decisions in 2026 are constraining their flexibility through 2036, yet the technology's long-term viability in rapidly evolving AI workloads remains uncertain. The signal is that infrastructure choices are hardening before workload patterns have stabilised.

Industrial and automotive AI driving Raspberry Pi toward semiconductor business model

Raspberry Pi reports that chip shipments have overtaken board and module sales as industrial demand grows, according to The Register, with particularly strong growth in the US and Chinese markets. This shift from hobbyist boards to semiconductor supply for industrial AI applications reveals a broader pattern: edge AI deployment is creating demand for lower-cost, purpose-built silicon distinct from data centre accelerators. Companies originally focused on education and hobbyist markets are finding their chip designs more valuable than their finished products. The trend suggests edge AI silicon may not consolidate around the same vendors dominating cloud infrastructure, creating a parallel supply chain with different chokepoints and geographic dependencies. Watch whether other embedded computing companies follow this trajectory toward becoming semiconductor suppliers rather than system integrators.
