Compute & Infrastructure
Top Line
A Thailand-based company central to that country's national AI strategy is suspected of routing billions of dollars' worth of Supermicro servers laden with restricted Nvidia chips to Chinese firms including Alibaba, exposing a critical enforcement gap in US export controls and implicating sovereign AI programmes as potential smuggling vectors.
Anthropic has confirmed a $1.8 billion compute deal with Akamai, signalling that frontier AI labs are increasingly diversifying beyond hyperscaler dependency and pulling edge/CDN providers into the AI infrastructure stack.
SK Hynix customers are reportedly offering to directly purchase EUV machines and fund new fab lines as available HBM capacity is fully sold out, a level of demand-side desperation that reveals how severely AI training workloads have exhausted advanced memory supply chains.
The number of US jurisdictions banning new data centre construction has reached 69, with four imposing permanent blocks, creating a fragmented regulatory landscape that threatens to constrain AI infrastructure buildout in ways that raw capital cannot easily overcome.
OpenAI and Broadcom are in reported discussions over financing for an $18 billion custom ASIC project, with Microsoft purchase commitments reportedly used as collateral — a structure that would deepen hyperscaler entanglement in frontier model compute even as OpenAI diversifies its infrastructure relationships.
Key Developments
Nvidia Chip Smuggling Via Thailand Exposes Sovereign AI Programs as Export Control Vulnerabilities
US authorities suspect that a company central to Thailand's national AI initiative facilitated the transhipment of Supermicro servers — containing advanced Nvidia GPUs subject to export restrictions — to Chinese end-users including Alibaba, according to Bloomberg. The scale is described as billions of dollars in hardware, making this one of the most significant alleged circumvention operations identified since the expanded October 2023 controls. The use of a sovereign AI programme as a routing mechanism is particularly alarming: it suggests adversarial actors are deliberately co-opting government-endorsed entities to exploit the political sensitivity around disrupting national tech initiatives.
The chokepoint implications are multi-directional. Nvidia faces reputational and regulatory exposure from distribution channel failures it does not directly control. Supermicro — already under prior scrutiny — is again a named vector. TSMC-manufactured silicon ends up in restricted territory regardless of the formal sales chain. The Bureau of Industry and Security will face intensified pressure to impose distributor-level compliance obligations, potentially slowing legitimate chip deployments in Southeast Asia as collateral damage. This incident will also accelerate scrutiny of other Southeast Asian nations — Malaysia, Vietnam, Indonesia — that have signed large AI infrastructure deals with US hyperscalers and chip vendors.
HBM Supply Reaches Breaking Point as SK Hynix Customers Offer to Self-Finance Fab Expansion
SK Hynix customers have moved beyond standard long-term supply agreements and are reportedly offering to directly purchase EUV lithography equipment and co-fund new fab lines, according to Tom's Hardware. This is a structurally significant shift: it means hyperscalers and AI hardware vendors are willing to take on balance-sheet exposure in memory manufacturing — a capital-intensive and technically specialised domain well outside their core competency — because the market cannot deliver supply fast enough. The offers are described as running into the hundreds of millions of dollars per commitment.
The bottleneck is compounded by the ASML dependency: EUV machines have lead times measured in years and ASML's production capacity is itself constrained. Even if customer financing unlocks SK Hynix's willingness to expand, the physical equipment pipeline limits how quickly new HBM capacity can come online. Samsung and Micron face similar constraints. For AI training clusters — where HBM bandwidth is the binding constraint on GPU utilisation — this shortage directly caps the effective deployment rate of new Nvidia Blackwell and successor hardware regardless of chip availability.
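The claim that HBM bandwidth, not raw compute, binds GPU utilisation can be made concrete with a simple roofline calculation. The sketch below uses approximate, publicly quoted Nvidia H100 SXM figures (roughly 989 dense BF16 TFLOPS and 3.35 TB/s of HBM3 bandwidth); these numbers and the example kernel intensity are illustrative assumptions, not figures from the reporting above.

```python
# Roofline sketch: why memory bandwidth caps utilisation on
# bandwidth-heavy AI workloads. Specs are approximate H100 SXM
# values (assumptions for illustration).

peak_flops = 989e12      # ~989 TFLOPS dense BF16
hbm_bandwidth = 3.35e12  # ~3.35 TB/s HBM3

# Ridge point: the arithmetic intensity (FLOPs per byte moved to/from
# HBM) a kernel needs before the compute units are fully fed.
ridge = peak_flops / hbm_bandwidth  # ~295 FLOPs/byte

# A kernel below the ridge point is memory-bound: its attainable
# throughput is bandwidth times intensity, not the FLOPS ceiling.
kernel_intensity = 60  # hypothetical bandwidth-heavy kernel
attainable = min(peak_flops, kernel_intensity * hbm_bandwidth)
utilisation = attainable / peak_flops

print(f"ridge point: {ridge:.0f} FLOPs/byte")
print(f"utilisation at {kernel_intensity} FLOPs/byte: {utilisation:.0%}")
```

Under these assumptions a kernel at 60 FLOPs/byte sustains only about a fifth of peak throughput, which is why extra HBM stacks or bandwidth, not more compute dies, move the needle for such workloads.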
Data Centre Ban Count Hits 69 US Jurisdictions, Compressing the Buildout Geography
As of April 2026, 69 US jurisdictions have enacted bans or moratoriums on new data centre construction, with four classified as permanent, according to Tom's Hardware. The acceleration is notable: this number has grown rapidly from earlier counts tracking fewer than 20 active restrictions. The drivers are primarily local: power grid strain, water consumption for cooling, property tax base concerns, and community opposition to industrial-scale facilities in residential or agricultural zones. These are not federal interventions — they are distributed local zoning and utility decisions that collectively produce a fragmented and unpredictable permitting environment.
The geographic compression effect is underappreciated. As bans proliferate in established markets — Northern Virginia, the Pacific Northwest, parts of the Midwest — remaining viable locations command premium pricing, faster utility interconnection queues, and greater political leverage over hyperscaler operators. This creates a structural advantage for states that have proactively positioned themselves as data centre-friendly, including Texas, Georgia, and Ohio, at the cost of accelerating grid stress in those concentrated locations. The Three Mile Island restart — targeting mid-2027 — is a direct response to this dynamic, with nuclear offering a politically defensible power source in a context where new gas generation also faces permitting opposition, as reported by Bloomberg.
Anthropic's $1.8B Akamai Deal and OpenAI's $18B Broadcom ASIC Project Reshape Frontier AI Compute Architecture
Anthropic has confirmed a $1.8 billion multi-year computing deal with Akamai, pulling a CDN and edge infrastructure provider into the frontier AI inference stack, according to Bloomberg. This is a deliberate diversification away from AWS, which Anthropic has previously relied on as its primary cloud provider and strategic backer. Akamai's distributed edge footprint offers latency advantages for inference at scale and geographic reach without the concentration risk of a single hyperscaler. A commitment of this size will also reshape Akamai's own infrastructure investment priorities.
Simultaneously, OpenAI and Broadcom are in reported discussions to finance an $18 billion custom ASIC project, with the structure reportedly tying Broadcom's investment commitment to Microsoft purchase guarantees, according to Data Center Dynamics. This remains at the negotiation stage and should be treated as speculative rather than a confirmed programme. If executed, it would represent one of the largest custom silicon commitments in history and would deepen the Microsoft-OpenAI infrastructure co-dependency even as OpenAI publicly pursues independent data centre capacity. The tension between OpenAI's sovereign compute ambitions and Microsoft's structural leverage as a balance-sheet backstop will be a defining dynamic in frontier AI infrastructure over the next three to five years.
CoreWeave Growth Uncertainty and Cerebras IPO Repricing Reveal Bifurcated Investor Confidence in AI Infrastructure
CoreWeave shares fell following Q1 earnings after forward guidance triggered concerns about the sustainability of its hyperscale GPU rental growth trajectory, with CEO Michael Intrator defending the company's data centre buildout pace in a Bloomberg interview. Separately, Cerebras Systems is reported to be raising its IPO price range ahead of its public debut, according to Bloomberg, reflecting strong institutional demand for exposure to alternative AI chip architectures. The contrast is instructive: markets are rewarding differentiated hardware bets while scrutinising capital-intensive GPU rental intermediaries whose value proposition depends on sustained Nvidia supply advantages and hyperscaler pricing dynamics.
CoreWeave's situation highlights a structural vulnerability in the GPU cloud model: it is caught between Nvidia on the supply side — as a dependent customer for H100 and Blackwell allocation — and hyperscalers on the demand side, which are simultaneously customers and competitors building internal capacity. As hyperscalers complete their own GPU cluster buildouts, the addressable market for third-party GPU cloud shrinks toward enterprise and mid-market segments that are less profitable at scale.
Signals & Trends
Customers Co-Financing Supplier Capex Is the New Supply Chain Hedge
SK Hynix customers offering to buy EUV machines and fund fab lines is not an isolated incident; it follows a pattern of AI infrastructure buyers moving up the supply chain to secure capacity. Hyperscalers have funded TSMC advanced packaging expansions via long-term commitments, OpenAI is negotiating to co-finance Broadcom ASIC development, and Anthropic is locking in Akamai capacity at the $1.8 billion level. The common thread is that standard procurement mechanisms, from spot purchases to conventional long-term supply agreements, are no longer sufficient to guarantee supply when demand growth is structurally faster than manufacturing lead times. The strategic implication is that the boundary between customer and supplier is dissolving in AI infrastructure: the largest buyers are effectively becoming co-investors in the production capacity they need, which creates long-term lock-in, balance-sheet exposure, and governance complexity that most technology procurement functions are not designed to manage.
Sovereign AI Programmes Are Becoming Export Control Attack Surfaces
The Thailand case is the clearest example to date of a pattern that US export control enforcers must now systematically account for: state-endorsed national AI initiatives provide political cover, legitimate-seeming end-user credentials, and governmental relationships that complicate enforcement actions. As the US, EU, and allied nations have pushed semiconductor vendors to support partner-nation AI development (through programmes like CHIPS Act bilateral partnerships, the AI Diffusion Rule's Tier 2 country framework, and direct diplomatic engagement), they have inadvertently created a set of entities that are both politically protected and technically capable of handling restricted hardware at scale. The enforcement problem is that confronting a sovereign AI initiative is diplomatically costly in ways that sanctioning a private company is not. Expect this dynamic to intensify as more nations formalise national AI programmes and as the hardware volumes involved grow.
Air Cooling's Tactical Relevance Persists Despite Liquid Cooling Dominance Narrative
While industry discourse has been dominated by liquid cooling and direct-to-chip thermal management for high-density AI clusters, a meaningful segment of AI inference and edge deployment continues to rely on air cooling, driven by constraints in existing facility infrastructure, edge deployment environments, and cost sensitivity in second-tier markets. The opening of Duos Edge AI's 450kW facility in Corpus Christi and coverage of air-cooled AI systems in the trade press suggest that the buildout is not monolithic: a large stratum of AI workloads is being served by modest, air-cooled facilities that do not appear in hyperscaler capex announcements. This has implications for Mitsubishi Heavy's gas turbine retooling: much of the incremental power demand growth comes not from individual gigawatt campuses but from thousands of smaller distributed facilities whose aggregate load is harder to plan for on the grid.