Compute & Infrastructure
Top Line
Intel has joined Elon Musk's Terafab project to co-design and manufacture AI chips for Tesla, SpaceX, and xAI — a strategic shift giving Intel a captive high-volume customer to anchor its struggling foundry operations while reducing Musk's dependence on TSMC and, potentially, on NVIDIA architectures.
Broadcom disclosed it will supply Anthropic with 3.5 gigawatts of Google TPU capacity starting in 2027, reflecting massive confirmed expansion in non-NVIDIA inference infrastructure as Anthropic's revenue run rate surpasses $30 billion annually.
Bain Capital's Bridge Data Centres has removed a Southeast Asian company from its Malaysian computing hub following a US investigation into suspected smuggling of NVIDIA chips, highlighting tightening enforcement of export controls and supply chain compliance risks for data centre operators.
The UALink Consortium released version 2.0 specifications for GPU interconnect technology before any version 1.0 silicon has shipped, illustrating the long lead times and coordination challenges facing efforts to build alternatives to NVIDIA's proprietary NVLink networking stack.
Key Developments
Intel anchors foundry strategy with Musk's Terafab vertical integration play
Intel announced it is joining Elon Musk's Terafab project in Austin, Texas, to help design and manufacture AI chips for Tesla, SpaceX, and xAI, according to Bloomberg and The Verge. The project aims to consolidate the entire chip-making lifecycle under one roof, with Intel providing both design expertise and foundry capacity. Intel's stock rose on the announcement, and CEO Lip-Bu Tan said Musk is expected to reimagine the semiconductor industry through this initiative, as reported by Tom's Hardware. Separately, Tom's Hardware reports Intel is in active talks with Google and Amazon to provide advanced chip packaging services for their custom AI ASICs, with major customers potentially accessing EMIB-T technology later this year.
The arrangement addresses critical vulnerabilities for both parties: Intel gains a high-volume anchor customer to justify its struggling foundry investments at a time when it has lost market share to TSMC and Samsung, while Musk secures domestic manufacturing capacity and reduces exposure to geopolitical risks in Taiwan. The vertical integration model — controlling design, fabrication, and packaging — represents a potential challenge to the dominant fabless-foundry separation that has characterised the industry for two decades, particularly if it enables faster iteration cycles or cost advantages at scale.
Broadcom-Anthropic deal confirms multi-gigawatt scale of non-NVIDIA inference buildout
Broadcom disclosed in a securities filing that it will supply Anthropic with approximately 3.5 gigawatts of Google TPU capacity starting in 2027, according to Tom's Hardware. Anthropic stated its annual revenue run rate has surpassed $30 billion, indicating rapid scaling of Claude's commercial adoption. This represents a significant expansion of the existing Broadcom-Google-Anthropic partnership, where Broadcom designs custom ASICs manufactured by TSMC and deployed in Google Cloud infrastructure specifically for Anthropic's use.
The 3.5 gigawatt figure provides rare concrete visibility into the power demands of frontier model inference at scale. For context, 3.5 gigawatts is roughly the output of three large nuclear reactors, spread across multiple data centre campuses, and represents one of the largest single-customer capacity commitments disclosed to date. The deal structure — where Broadcom acts as systems integrator, Google provides cloud infrastructure and TPU architecture, and Anthropic commits to multi-year capacity — demonstrates the emergence of alternative supply chains outside NVIDIA's ecosystem, particularly for inference workloads where custom ASICs can offer better performance-per-watt and total cost of ownership than general-purpose GPUs.
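To make the scale concrete, a back-of-envelope calculation can translate the disclosed 3.5 GW into an accelerator count. The per-chip power, host overhead, and PUE figures below are illustrative assumptions, not numbers from the Broadcom filing:

```python
# Back-of-envelope sketch: how many accelerators 3.5 GW of facility
# power might support. All parameters except the 3.5 GW figure are
# assumptions for illustration only.

FACILITY_POWER_W = 3.5e9   # disclosed capacity: 3.5 gigawatts
PUE = 1.2                  # assumed power usage effectiveness (cooling, overhead)
CHIP_POWER_W = 700         # assumed per-accelerator board power
HOST_OVERHEAD = 1.5        # assumed multiplier for CPUs, memory, networking

# Power actually available to IT equipment after facility overhead
it_power = FACILITY_POWER_W / PUE

# Accelerators supportable once host-system power is included
chips = it_power / (CHIP_POWER_W * HOST_OVERHEAD)

print(f"IT power: {it_power / 1e9:.2f} GW")
print(f"Estimated accelerators: {chips / 1e6:.1f} million")
```

Under these assumptions the capacity corresponds to accelerators numbering in the low millions; the point is the order of magnitude, which is sensitive to the assumed per-chip power and overheads.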
US export control enforcement targets data centre operators over chip smuggling
Bain Capital's Bridge Data Centres removed a Southeast Asian company from its Malaysian computing hub following a US investigation into suspected smuggling of NVIDIA chips, Bloomberg reports. The action demonstrates that US authorities are extending enforcement beyond chip manufacturers and distributors to colocation providers and data centre operators who may host customers engaged in export control violations.
Malaysia has emerged as a significant location for AI infrastructure buildout, positioned between China and Western markets. The enforcement action signals that data centre operators face compliance risks when accepting customers whose chip procurement practices are opaque or potentially circumvent export restrictions. This creates due diligence burdens for infrastructure providers and may slow deployments in jurisdictions where end-customer verification is difficult or where shell companies can easily lease capacity.
UALink Consortium advances specifications but silicon delivery remains distant
The UALink Consortium, a group including AMD, Intel, Google, Microsoft, and other companies working to develop an alternative to NVIDIA's NVLink and NVSwitch interconnect technologies, has released version 2.0 specifications despite not yet shipping any version 1.0 silicon, The Register reports. The consortium is splitting work on physical layer and protocol specifications to accelerate development, but production chips remain months away at minimum.
The progression to version 2.0 before hardware availability illustrates the complexity of building interoperable, high-bandwidth GPU interconnects and the coordination challenges when multiple vendors must align on standards. NVIDIA's vertically integrated approach — controlling GPU architecture, interconnect design, and switch hardware — has allowed it to iterate rapidly and maintain a multi-year lead in networking performance. UALink members face the challenge of achieving competitive bandwidth and latency while ensuring interoperability across different vendors' accelerators, a significantly harder technical and commercial problem.
Signals & Trends
Sovereign chip manufacturing ambitions are reshaping foundry customer dynamics
Intel's participation in Terafab, combined with its outreach to Google and Amazon for packaging services, reflects a broader pattern where governments and strategically sensitive customers are prioritising domestic or allied manufacturing capacity over pure cost optimisation. This is particularly evident in AI infrastructure, where training and inference workloads involve proprietary model weights and architectures that customers increasingly view as national security or competitive assets. South Korea's record current account surplus driven by semiconductor exports, as reported by Bloomberg, underscores the economic significance of chip production and the geopolitical incentives for nations to secure domestic capacity. The willingness of major customers to accept potentially higher costs or lower yields in exchange for supply chain control represents a structural shift away from the efficiency-maximising fabless-foundry model that dominated 2000-2020.
Energy constraints are accelerating architectural innovation toward efficiency over raw scale
Multiple developments point toward energy availability becoming the binding constraint on AI infrastructure expansion. IEEE Spectrum reports on decentralised training approaches aimed at reducing data centre energy concentration, while Broadcom's 3.5 gigawatt Anthropic deal and the industry focus on co-packaged optics for data centres, as covered by Semiconductor Engineering, reflect attempts to improve performance per watt rather than simply adding more capacity. Co-packaged optics promise significant power savings by integrating optical transceivers directly with processors, reducing energy losses in data movement. This shift is visible in custom ASIC investments by Anthropic, Google, and others — inference workloads are moving toward specialised silicon that trades flexibility for efficiency. The constraint is not chip availability but the ability to power and cool them at scale, which is driving architecture choices.
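The power case for co-packaged optics can be sketched with energy-per-bit arithmetic. The pJ/bit values below are commonly cited ballpark figures, assumed here for illustration rather than taken from the cited coverage:

```python
# Rough sketch of the power saving co-packaged optics (CPO) targets.
# Energy-per-bit figures are assumed ballpark values: pluggable optics
# are often cited in the mid-teens of pJ/bit, CPO targets around 5 pJ/bit.

PLUGGABLE_PJ_PER_BIT = 15.0    # assumed: conventional pluggable transceivers
CPO_PJ_PER_BIT = 5.0           # assumed: co-packaged optics target
BANDWIDTH_TBPS = 51.2          # e.g. one 51.2 Tb/s switch ASIC generation

bits_per_s = BANDWIDTH_TBPS * 1e12
pluggable_w = bits_per_s * PLUGGABLE_PJ_PER_BIT * 1e-12  # pJ -> W
cpo_w = bits_per_s * CPO_PJ_PER_BIT * 1e-12

print(f"Pluggable optics: {pluggable_w:.0f} W per switch")
print(f"CPO:              {cpo_w:.0f} W per switch")
print(f"Saving:           {pluggable_w - cpo_w:.0f} W per switch")
```

Multiplied across thousands of switches in a gigawatt-scale campus, savings of this order explain why optical I/O power is treated as a first-order design constraint rather than a back-end detail.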
Advanced packaging is emerging as the critical bottleneck and differentiation point
Intel's packaging discussions with Google and Amazon, combined with Semiconductor Engineering's coverage of process control challenges in MEMS and co-packaged optics, alongside its coverage of new testing requirements for AI accelerators, indicate that advanced packaging — chiplets, 3D stacking, hybrid bonding, and heterogeneous integration — is becoming as strategically important as leading-edge lithography. TSMC's dominance in packaging (CoWoS capacity) has been as significant as its 3nm leadership in securing AI chip business. Intel's EMIB-T technology represents an attempt to compete on packaging independently of process node leadership. The ability to integrate HBM memory, optical interfaces, and custom logic tiles into a single package with acceptable yield and thermal management is now a first-order determinant of AI chip competitiveness, not merely a back-end manufacturing step.