Compute & Infrastructure
Top Line
Arm begins selling its own 136-core AGI CPU silicon for the first time in its history, targeting $15 billion in annual chip sales within five years with Meta as its first deployment customer — a fundamental shift from pure IP licensing to direct competition with its own customers.
Meta's 5-gigawatt Hyperion data centre in Louisiana will cover a footprint comparable to Manhattan when complete, with its first 2-gigawatt phase coming online soon — the largest single data centre project ever undertaken, reflecting AI training's exponentially growing power demands.
US senators demand suspension of all Nvidia AI chip export licenses to China, contradicting CEO Jensen Huang's assurances that no diversion is occurring — intensifying scrutiny of the semiconductor supply chain's most critical chokepoint as Huawei launches competing Atlas 350 accelerators claiming 2.8x H20 performance.
Hyperscale data centres are transitioning from AC to DC power distribution to reduce conversion losses as AI chips draw unprecedented power densities, while Microsoft and Nvidia launch partnership to accelerate nuclear plant permitting using AI simulation tools — infrastructure players racing to solve energy constraints before they become buildout bottlenecks.
Key Developments
Arm abandons pure-play IP licensing model to compete directly in silicon
Arm unveiled its first own-branded chip, the AGI CPU, a 136-core data centre processor targeting AI inference workloads, and announced Meta as its lead deployment partner with installations planned for later this year. The company projects this new silicon business will generate $15 billion in annual revenue within five years, CEO Rene Haas told Bloomberg. The move fundamentally alters Arm's business model from pure IP licensing to direct hardware sales, positioning it to compete with the very companies that license its designs.
The strategic shift reflects Arm's calculation that licensing alone cannot capture enough value from the AI infrastructure buildout, even as its architectures gain ground in data centres. Haas described the new product category as expanding Arm's total addressable market to $1 trillion by decade's end, though The Register notes the CEO offered few details on how that figure was calculated. The AGI CPU targets AI agent workloads specifically, distinguishing it from general-purpose server chips, according to ServeTheHome.
Meta's Hyperion project sets new scale ceiling for single data centre facilities
Meta announced its 5-gigawatt Hyperion data centre in Louisiana in June 2025, with IEEE Spectrum reporting the facility will cover a footprint comparable to Manhattan when complete. The first phase — a 2-gigawatt buildout — is nearing completion and represents the largest single data centre ever constructed. The project's power demand alone exceeds the total electricity consumption of many small nations and requires dedicated transmission infrastructure to connect to the regional grid.
The sheer scale reveals the resource intensity of frontier AI model training and inference at Meta's projected usage levels. Hyperion's power requirements approach those of multiple traditional hyperscale campuses combined, forcing fundamental rethinking of data centre design, cooling systems, and grid interconnection strategies. The project also highlights geographic constraints — Louisiana was selected partly for grid capacity and lower energy costs, but buildout timelines remain years long even with prioritised permitting.
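To put that scale in perspective, a rough back-of-envelope calculation (illustrative arithmetic only, assuming continuous draw at full nameplate capacity, which is an upper bound on real utilisation) converts Hyperion's 5-gigawatt capacity into annual energy terms:

```python
# Back-of-envelope: annual energy draw of a 5 GW facility running
# continuously at nameplate power (upper bound; real load is lower).
POWER_GW = 5.0          # Hyperion's announced total capacity
HOURS_PER_YEAR = 8760   # 24 * 365

annual_twh = POWER_GW * HOURS_PER_YEAR / 1000  # GWh -> TWh
print(f"{annual_twh:.1f} TWh/year")            # 43.8 TWh/year
```

At full load that is on the order of 44 TWh per year, which is indeed more than the annual electricity consumption of many smaller national grids.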
US-China semiconductor competition intensifies as export controls face evasion claims
Senators Elizabeth Warren and Jim Banks sent a bipartisan letter to Commerce Secretary Howard Lutnick demanding suspension of all active Nvidia AI chip export licenses to China, stating that Jensen Huang's congressional testimony denying diversion was "contradicted by reporting available," according to Tom's Hardware. The letter targets not only direct China exports but intermediary jurisdictions being used to circumvent controls.
Simultaneously, Huawei launched its Atlas 350 AI accelerator using the Ascend 950PR chip with 1.56 PFLOPS of FP4 compute and up to 112GB of HBM, claiming 2.8x the performance of Nvidia's export-restricted H20 chips, Tom's Hardware reported. Separately, Alibaba revealed a RISC-V server chip optimised for China's top AI models, though The Register notes it appears years behind Western performance levels. These developments show China's semiconductor ecosystem continuing to advance despite export restrictions, though with gaps remaining in leading-edge capabilities.
Data centre infrastructure transitions to DC power as AI chips strain legacy systems
Major power infrastructure providers including Delta, Vertiv, and Eaton unveiled new DC-based power delivery systems at Nvidia's GTC conference, with IEEE Spectrum reporting that hyperscale facilities are replacing inefficient AC-to-DC conversion stages with direct DC distribution at 800 VDC. The shift eliminates multiple conversion losses that become significant at AI workload power densities. A whitepaper from infrastructure vendors outlines five principles for 800 VDC adoption specifically for AI data centres, suggesting the transition is moving from pilot to production.
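The efficiency argument behind the shift can be sketched by multiplying per-stage conversion efficiencies along each power chain. The specific stages and percentages below are illustrative assumptions for the sketch, not figures from the vendors' whitepaper:

```python
from math import prod

# Illustrative per-stage efficiencies; actual values vary by vendor,
# topology, and load point. These are assumptions, not measured data.
AC_CHAIN = {
    "UPS (double conversion)": 0.94,
    "PDU transformer": 0.98,
    "server PSU (AC->DC rectifier)": 0.94,
}
DC_CHAIN = {
    "facility-edge rectifier (AC -> 800 VDC)": 0.97,
    "rack-level DC-DC step-down": 0.98,
}

ac_eff = prod(AC_CHAIN.values())
dc_eff = prod(DC_CHAIN.values())
print(f"AC chain end-to-end: {ac_eff:.1%}")  # ~86.6%
print(f"DC chain end-to-end: {dc_eff:.1%}")  # ~95.1%
```

Under these assumed numbers, removing repeated conversion stages recovers several percentage points of efficiency; at a 100-megawatt IT load, that difference is multiple megawatts of avoided conversion loss and cooling burden.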
Separately, Microsoft and Nvidia announced a partnership to accelerate nuclear power plant permitting and construction using AI simulation tools and digital twins on Nvidia's Omniverse platform, Tom's Hardware reported. The collaboration aims to compress nuclear project timelines that historically stretch over a decade — critical as data centre operators confront power availability as the primary constraint on AI infrastructure expansion.
SK Hynix pursues US listing to capitalise on AI memory demand surge
SK Hynix announced plans for an American Depositary Receipt listing in the US this year, in what could rank among the largest debuts ever by a foreign company, Bloomberg reported. The move aims to raise capital to expand high-bandwidth memory production capacity as AI accelerators consume HBM at unprecedented volumes. SK Hynix is currently one of only three major HBM suppliers globally alongside Samsung and Micron, giving it pricing power but also requiring massive capital expenditure to keep pace with demand.
The listing reflects both the strategic importance of memory supply chains for AI infrastructure and SK Hynix's calculation that US capital markets offer better valuations for AI-adjacent businesses than South Korean exchanges. HBM remains a critical bottleneck — Nvidia's latest systems require up to 112GB per GPU, and chipmakers are racing to qualify next-generation HBM4 for 2027 deployment.
Signals & Trends
Regional compute sovereignty emerging as second-tier strategic priority for middle powers
Deloitte warned that Australia faces a "sliding doors moment" to establish itself as a regional AI infrastructure hub, with Bloomberg reporting the window for action is narrowing as other Asia-Pacific nations accelerate investments. Separately, a developer proposed an AI data centre in Missoula County, Montana, according to Data Center Dynamics, reflecting how even smaller jurisdictions are positioning for AI infrastructure buildout. The pattern suggests compute capacity is becoming a sovereign capability that mid-tier economies view as strategically essential, similar to energy or telecommunications infrastructure in previous eras. Countries without domestic hyperscale capacity risk becoming entirely dependent on US or Chinese cloud providers for AI capabilities.
Nvidia's rack-scale systems strategy squeezing server integrator margins
Pricing for Nvidia's Vera Rubin NVL72 rack-scale systems has reached $5 million to $8.8 million per rack, but ODM margins are declining even as total system prices rise, Tom's Hardware reported. The trend indicates Nvidia is capturing more value by moving up the stack from GPUs to complete rack-scale systems, compressing the traditional server integrator business model. As AI infrastructure becomes increasingly turnkey at the rack level, the role of Dell, HPE, and Super Micro diminishes to final assembly and deployment rather than system design. This vertical integration by Nvidia mirrors cloud providers' moves to custom silicon — both trends reduce the number of independent players capturing margin in the AI infrastructure stack.
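A hypothetical worked example (all figures below are assumed for illustration, not reported numbers) shows how an integrator's percentage margin can compress even while rack prices and absolute dollars per rack rise, with the difference in value captured upstream:

```python
# Hypothetical ODM economics; every number here is an assumption.
old_price, old_margin_pct = 3_000_000, 0.10  # earlier-generation rack
new_price, new_margin_pct = 8_000_000, 0.05  # Rubin-class rack (assumed)

old_margin = old_price * old_margin_pct  # dollars kept per rack, before
new_margin = new_price * new_margin_pct  # dollars kept per rack, after

# Absolute margin per rack can still grow (300k -> 400k here), but the
# integrator's share of each rack's value halves; the balance of the
# price increase accrues to the component and system-design supplier.
print(old_margin, new_margin)  # 300000.0 400000.0
```

The point of the sketch is that headline rack prices rising does not contradict reports of margin compression: what shrinks is the integrator's fraction of a much larger bill of materials.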
DC power adoption accelerating beyond hyperscale into broader data centre market
The publication of vendor whitepapers on 800 VDC deployment principles and announcements from multiple power infrastructure providers at the same conference suggest DC power distribution is transitioning from experimental deployments at Google and Meta to a standard architecture option for AI-focused facilities. This matters because infrastructure transitions typically lag workload shifts by years — the fact that power delivery vendors are productising DC solutions indicates they expect AI workload concentration to persist rather than revert to diversified compute patterns. If 800 VDC becomes standard, it creates a bifurcated data centre market where AI facilities require fundamentally different electrical infrastructure than traditional enterprise workloads, potentially limiting facility fungibility and increasing stranded asset risk for operators slow to adapt.