Capital & Industrial Strategy
Top Line
Anthropic's revenue run rate surged from $9 billion in December 2025 to over $30 billion now, driven by a multi-hundred-billion-dollar infrastructure agreement with Google and Broadcom for custom TPU chips and computing capacity — signalling that frontier model builders are locking in vertical supply chains to sustain hyper-growth.
Samsung posted an eight-fold profit increase driven by AI memory chip sales, while Nvidia-backed data center builder Firmus raised $505 million in a Coatue-led round — infrastructure capital continues to flow despite Middle East conflict disrupting broader markets, reinforcing that AI is decoupling from general tech cycles.
Nvidia's acquisition of SchedMD, the company behind Slurm workload management software widely used in HPC and AI clusters, has triggered concern among AI specialists about potential software lock-in — a signal that compute providers are moving aggressively to control the full stack and that open access to critical orchestration tools may narrow.
OpenAI, Anthropic, and Google have begun coordinating to prevent Chinese competitors from distilling their models — the first evidence of rivals cooperating on IP protection suggests US frontier labs view model theft as a greater threat than domestic competition.
Key Developments
Anthropic locks in vertical infrastructure with Google-Broadcom deal worth hundreds of billions
Anthropic's revenue run rate crossed $30 billion — more than tripling since the end of 2025 — and the company confirmed a long-term agreement with Google and Broadcom to supply custom TPU chips and computing capacity worth hundreds of billions of dollars, according to reports from Bloomberg, the Financial Times, The Wall Street Journal, and CNBC. Broadcom will design future generations of Google's Tensor Processing Units, while Anthropic gains assured access to the resulting infrastructure. Reuters confirmed the multi-year chip development agreement between Broadcom and Google.
The structure of the deal — vertical integration from chip design through infrastructure provisioning to end-customer access — is notable. Anthropic is effectively pre-committing capacity years in advance, reducing its dependence on spot market availability and locking out competitors from the same supply. Google, meanwhile, deepens its role as both infrastructure provider and indirect investor (having previously put billions into Anthropic). Broadcom gains a strategic foothold in custom AI silicon, moving beyond its traditional networking business.
AI infrastructure capital proves resilient amid geopolitical shocks
Samsung Electronics reported an eight-fold year-over-year profit increase, beating analyst estimates, driven by robust AI memory chip sales even as Middle East conflict disrupted broader markets, according to Bloomberg. Separately, data center builder Firmus Technologies raised $505 million in a round led by Coatue Management, with backing from Nvidia, as part of a global push to finance AI infrastructure, per another Bloomberg report.
The Samsung result is significant because memory — HBM in particular — is a supply-constrained input for AI accelerators. Strong sales indicate that hyperscalers and AI labs are still ordering aggressively, undeterred by macro uncertainty. The Firmus round, meanwhile, shows that infrastructure builders can still attract growth-stage capital even as broader venture markets tighten. Nvidia's participation signals strategic interest in ensuring data center capacity keeps pace with its chip shipments.
Nvidia's SchedMD acquisition raises concerns about software stack control
Nvidia's acquisition of SchedMD, the company behind the widely used Slurm workload manager for high-performance computing and AI clusters, has sparked concern among AI specialists about potential restrictions on software access, according to Reuters. Slurm is critical infrastructure for scheduling jobs across GPU clusters, and many academic and enterprise users fear Nvidia may use control over the software to steer customers toward its hardware or limit interoperability with competing chips.
This follows a pattern of vertical integration by compute providers. Nvidia already dominates GPU hardware and has built a software moat with CUDA. Acquiring the job scheduler layer gives it influence over how workloads are allocated and optimised — potentially making it harder for AMD, Intel, or custom silicon providers to compete on equal footing in multi-vendor clusters.
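For readers unfamiliar with Slurm, the concern is concrete: the scheduler sits between every training job and the hardware. A minimal batch script illustrates how jobs declare their resource needs (partition names, resource counts, and the training script here are illustrative placeholders, not from any specific cluster):

```shell
#!/bin/bash
# Illustrative Slurm batch script: requests GPU nodes and launches a job.
#SBATCH --job-name=train-model      # name shown in the job queue
#SBATCH --nodes=2                   # number of machines to allocate
#SBATCH --gres=gpu:8                # GPUs per node, as a generic resource
#SBATCH --time=24:00:00             # wall-clock limit for the job
#SBATCH --partition=gpu             # placeholder partition name

srun python train.py                # launch the task on the allocated nodes
```

Whoever controls how a directive like `--gres=gpu:8` maps onto physical accelerators also influences how easily non-Nvidia hardware slots into the same cluster, which is the crux of the interoperability worry.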
US frontier labs coordinate for first time to combat Chinese model distillation
OpenAI, Anthropic, and Google have begun working together to prevent Chinese competitors from extracting results from their models to gain an edge in the global AI race, according to Bloomberg. The collaboration marks the first time these rivals have coordinated on defensive measures against intellectual property theft. Chinese labs have been using techniques such as distillation — querying a frontier model repeatedly to train a smaller, cheaper copy — to replicate capabilities without incurring the full training cost.
The move suggests that US labs now view IP leakage as a greater threat than domestic competition. It also indicates that API-based access, once seen as a revenue stream, is increasingly treated as a security surface. The coordination likely involves shared rate-limiting strategies, query fingerprinting, and potentially coordinated refusals to serve certain customers or regions.
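The mechanics of distillation help explain why it is hard to police. A minimal sketch of the soft-target loss a student model minimises against a teacher's outputs follows; the numbers and function names are illustrative, and real pipelines operate on full vocabulary distributions at far larger scale:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T yields softer target distributions."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between teacher and student output distributions.

    The student is trained to minimise this loss, mimicking the teacher's
    behaviour without access to its weights. In the API-based setting,
    the teacher's outputs come from repeated queries to a frontier model.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student whose outputs match the teacher's incurs zero loss;
# a diverging student incurs a positive loss it can learn to reduce.
teacher = [3.0, 1.0, 0.2]
aligned = distillation_loss(teacher, [3.0, 1.0, 0.2])
diverged = distillation_loss(teacher, [0.2, 1.0, 3.0])
```

Because the student needs only the teacher's output distribution, not its weights, ordinary API responses sampled at scale can supply the training signal; hence the defensive focus on rate limiting and query fingerprinting rather than on model security alone.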
Signals & Trends
Fintech AI investment is contracting as VCs demand proof of margin improvement
Fintech investors are becoming more selective about AI investments, according to Axios. The shift reflects broader caution about AI business models outside core infrastructure plays. While frontier labs and chip companies are attracting massive capital, application-layer startups — particularly in fintech — are facing higher bars to prove unit economics. This divergence suggests capital is increasingly concentrated in picks-and-shovels infrastructure rather than end-user applications, where monetisation remains uncertain. The trend indicates that 2026 may be a shakeout year for AI application startups that have not yet demonstrated clear paths to profitability.
Ex-OpenAI employees are launching specialized AI venture funds
Zero Shot, a new venture fund with deep ties to OpenAI, is aiming to raise $100 million and has already written checks, according to TechCrunch. The fund is staffed by OpenAI alumni and represents a pattern of frontier lab employees spinning out to invest in the ecosystem they helped build. This creates a secondary capital network around major labs — former insiders with technical credibility and access to talent pipelines are now allocating capital, likely with implicit coordination or soft alignment with their former employers. It also signals that early OpenAI employees, having accrued wealth from equity, are now acting as a new class of AI-native investors. The dynamic may accelerate as more labs mature and employees exit with capital to deploy.
Advanced packaging is emerging as the next bottleneck and competitive battleground
Intel is betting heavily on advanced chip packaging as the next phase of the AI boom, according to Wired. As transistor scaling slows, performance gains are shifting to how chips are assembled — 3D stacking, chiplet integration, and high-bandwidth interconnects. Intel's foundry strategy hinges on offering advanced packaging services to customers, positioning it as a neutral supplier in a market dominated by TSMC. If packaging becomes the primary performance differentiator, companies that control those capabilities — Intel, TSMC, and a handful of suppliers in Japan and South Korea — will capture outsized value. This also means AI hardware competition is moving downstream from design to manufacturing and assembly, favouring vertically integrated players or those with access to cutting-edge fab partners.