Geopolitics & Sovereign Positioning
Top Line
The US Department of Defense designated Anthropic a supply-chain risk after failed negotiations over model access for military applications, marking the first time this designation has been applied to a domestic AI company rather than adversary-nation firms like Huawei.
The US Commerce Department has drafted regulations requiring permits for AI chip shipments anywhere in the world, extending export control reach beyond current country-specific restrictions in a bid to maintain global leverage over advanced computing infrastructure.
Oracle and OpenAI scrapped plans to expand their flagship Texas AI data center after negotiations collapsed over financing and OpenAI's changing needs, with Meta now in talks to take the capacity—illustrating how infrastructure bottlenecks are reshaping competitive dynamics among frontier AI developers.
China announced it will harness AI to address 12.7 million university graduates entering the labor market this year, signaling Beijing's intent to position AI as a solution to domestic employment pressure rather than viewing automation as a displacement threat.
Key Developments
Pentagon designates Anthropic supply-chain risk in unprecedented domestic AI company action
The Department of Defense officially designated Anthropic a supply-chain risk after negotiations failed over how much control the military would have over its AI models, including their use in autonomous weapons and mass domestic surveillance, according to Bloomberg and TechCrunch. The classification—previously reserved for adversary-nation companies like Huawei—puts Anthropic at risk of losing access to a wide range of US government business beyond the collapsed $200 million Pentagon contract. The DoD turned to OpenAI as an alternative provider, which accepted the terms Anthropic declined, though OpenAI subsequently saw ChatGPT uninstalls surge 295 percent. The Financial Times reports that draft rules now mandate that civilian government contracts make AI models available for any lawful use, formalising the access requirements that triggered the Anthropic dispute.
The designation creates a precedent for government leverage over domestic AI companies through procurement requirements that effectively compel model access for security applications. Heidy Khlaaf, chief AI scientist at the AI Now Institute, said that existing safety guardrails for generative AI in high-stakes decisions are deeply lacking and easily compromised, making it highly doubtful that systems inadequate for benign cases could be secured for complex military and surveillance operations. The episode reveals a fundamental tension: frontier AI companies positioned as safety-focused face pressure to provide government access that may conflict with their stated principles, while competitors willing to accept those terms gain a procurement advantage.
US drafts global permit system for AI chip exports, extending control beyond current restrictions
The US Commerce Department has drafted regulations that would bar AI chip shipments to any destination worldwide without American approval, according to Bloomberg. The proposed rules would require permits for Nvidia and AMD AI chip sales globally, extending beyond current country-specific export controls that primarily target China and other adversary nations. This represents a shift from selective restrictions to universal permit requirements that would give the US government veto power over advanced computing sales to any destination, including allied countries.
The move suggests the US is attempting to maintain leverage over global AI infrastructure development by controlling access to advanced chips regardless of destination. Current export controls have already created pressure for alternative chip development, with South Korean startup Rebellions positioning itself to compete with Nvidia and AMD in AI semiconductors, according to Bloomberg. The effectiveness of expanded controls depends on whether they prevent adversaries from acquiring advanced computing or accelerate development of non-US alternatives that eventually erode American chip dominance.
Oracle-OpenAI infrastructure deal collapses as Meta steps in, exposing data center capacity as strategic chokepoint
Oracle and OpenAI terminated plans to expand their flagship AI data center in Abilene, Texas, after negotiations dragged on over financing and OpenAI's changing needs, according to Bloomberg and the Financial Times. The collapse created an opening for Meta to consider leasing the planned expansion site from developer Crusoe, with Nvidia facilitating Meta's discussions with the developer. Oracle is simultaneously planning thousands of job cuts as it manages a cash crunch driven by massive AI data center expansion spending, according to Bloomberg. The episode shows how infrastructure bottlenecks are reshaping competitive positioning among frontier AI developers, with access to power and data center capacity becoming as strategically important as model capabilities.
The conflict in Iran is underscoring the risks of building data centers in the Gulf region: Sam Winter-Levy of the Carnegie Endowment for International Peace told Bloomberg that such facilities are inevitable targets in conflict. This adds geopolitical risk considerations to data center location decisions that have primarily focused on power availability and cost. South Korea's HD Hyundai Electric is accelerating its US expansion, betting on surging demand for transformers and switchgear driven by AI power consumption, according to Bloomberg, illustrating how infrastructure bottlenecks create opportunities throughout the supply chain.
China positions AI as employment solution for 12.7 million graduates, contrasting with Western displacement concerns
China announced it will harness artificial intelligence to create jobs as 12.7 million university graduates—exceeding Belgium's population—prepare to enter the labour market this year, according to Bloomberg. The framing positions AI as a solution to employment pressure rather than viewing automation as a displacement threat, contrasting sharply with Western discourse focused on AI replacing workers. The announcement suggests Beijing views AI deployment as compatible with maintaining employment levels, potentially through creation of AI-adjacent roles, platform work, or government-supported positions that incorporate AI tools rather than being displaced by them.
This represents a distinct strategic narrative from Western economies, where AI is increasingly linked to job cuts, as Oracle's thousands of planned layoffs driven by AI spending illustrate, according to Bloomberg. However, Martha Gimbel of the Yale Budget Lab notes that data showing AI actually replacing human workers hasn't materialised yet, suggesting current layoffs may be using AI as justification rather than evidence of actual substitution. China's approach may reflect state capacity to direct AI deployment toward employment-compatible applications, or acceptance of lower-productivity AI uses if they maintain social stability through employment.
Guardian editorial identifies Iran conflict as evidence AI warfare paradigm shift has begun
The Guardian's editorial board assessed that the intensified use of artificial intelligence in the Iran conflict demonstrates that a paradigm shift in warfare has already begun rather than remaining theoretical. The editorial argues that the speed of technological development and geopolitical turbulence are collapsing distinctions between theoretical arguments and real-world consequences, echoing UN Secretary-General António Guterres's warning that future development will only accelerate. Tech Policy Press convened experts on technology policy and security to identify key questions as the situation unfolds, though specific details about AI system deployment remain limited.
The significance lies not in confirmed technical details of AI use, which remain sparse in available reporting, but in the assessment by informed observers that operational AI deployment in conflict has crossed a threshold from experimental to routine. This suggests existing international frameworks for weapons control and rules of engagement are being outpaced by deployment reality, creating governance gaps that multilateral institutions have not closed despite years of discussions about autonomous weapons systems.
Signals & Trends
Domestic AI companies face supply-chain designations previously reserved for adversary nations
The Pentagon's application of the supply-chain risk designation to Anthropic—a tool historically used against Chinese firms like Huawei—signals willingness to use adversarial frameworks against domestic companies that resist government access requirements. This blurs the distinction between security measures targeting foreign threats and coercive tools to compel domestic compliance with military and intelligence priorities. The precedent suggests frontier AI companies cannot rely on being American to avoid designations that effectively bar them from government business, fundamentally altering the risk calculus of resisting security agency demands regardless of safety concerns.
Infrastructure capacity emerges as independent constraint on AI geopolitical competition
The Oracle-OpenAI infrastructure collapse, Oracle's cash crunch from data center spending, and Middle East conflict raising data center targeting risks collectively indicate that power, cooling, and physical security constraints are becoming binding factors in AI competition independent of algorithm quality. Countries and companies that secure long-term infrastructure capacity may gain sustained advantage over competitors with superior models but inadequate deployment infrastructure. This shifts strategic emphasis toward supply chain resilience for transformers, switchgear, and power generation rather than just semiconductor access, potentially advantaging nations with domestic manufacturing capacity for energy infrastructure over those focused purely on chip development.
Divergent state strategies emerge on AI labour market management
China's framing of AI as an employment solution for millions of graduates contrasts sharply with Western companies using AI as justification for layoffs, even as actual evidence of AI-driven displacement remains absent, according to academic researchers. This suggests states are adopting fundamentally different approaches: China directing AI toward employment-compatible applications to maintain stability, versus Western economies accepting or encouraging substitution narratives that may facilitate labour cost reduction regardless of technical reality. The divergence may produce distinct AI deployment patterns globally, with state capacity to direct or constrain automation determining whether AI capabilities are used to displace workers or augment them, creating different competitive dynamics in labour-intensive sectors across geopolitical blocs.