Public Policy & Governance
Top Line
The Pentagon's coercion of AI companies has triggered a high-profile resignation at OpenAI and open conflict with Anthropic: OpenAI's robotics chief Caitlin Kalinowski quit over the deployment of models into classified military networks without adequate safeguards against surveillance of Americans or lethal autonomous weapons.
Iran's drone strikes on commercial datacentres in the UAE and Bahrain establish a precedent in asymmetric warfare, creating immediate sovereignty questions for Gulf states positioning themselves as AI superpowers and forcing a recalculation of critical infrastructure protection requirements.
The Pentagon has escalated its dispute with Anthropic by bringing in former Uber executive David Plouffe to negotiate the removal of safety restrictions on Claude AI models for military use, revealing the government's willingness to deploy high-level political operatives to overcome corporate resistance to defence applications.
Key Developments
Pentagon Pressure Forces OpenAI Robotics Chief's Resignation Over Military AI Deployment
OpenAI's head of robotics, Caitlin Kalinowski, resigned on Saturday, explicitly citing the company's Pentagon deal to deploy AI models within classified military networks. According to Politico, Kalinowski stated: 'Surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.' The resignation follows similar tensions at Anthropic, where the Department of Defense has deployed former Uber executive David Plouffe to negotiate the removal of safety restrictions on Claude models, as reported by Bloomberg.
The incidents reveal a pattern of aggressive Pentagon action to secure unrestricted military access to frontier AI systems, even when doing so contradicts companies' stated safety principles. Kalinowski's public statement is notable for its specificity about the two bright lines she believes the deployment crosses: warrantless surveillance and autonomous weapons. The Pentagon's recruitment of Plouffe, known for political campaign expertise rather than technical competence, suggests the government views this as a political negotiation rather than a technical compliance matter. Both companies had previously maintained policies against military applications of their most capable models.
Iranian Drone Strikes on UAE Datacentres Establish New Wartime Targeting Doctrine
Iran executed what is believed to be the first deliberate military strike on a commercial datacentre when a Shahed-136 drone hit an Amazon Web Services facility in the United Arab Emirates at 4:30am on Sunday, according to The Guardian. Additional strikes targeted facilities in Bahrain. The attacks caused severe fires and forced the shutdown of power supplies, and attempts to suppress the fires caused further damage to physical infrastructure.
The targeting represents a calculated escalation in asymmetric warfare doctrine, specifically designed to undermine Gulf states' positioning as AI infrastructure hubs. The UAE and Saudi Arabia have invested tens of billions in datacentre capacity and AI compute infrastructure as part of economic diversification strategies. The strikes create an immediate sovereignty dilemma: accepting foreign missile defence systems to protect commercial infrastructure implies an inability to guarantee security for international tech companies, while attempting indigenous defence requires massive capital reallocation from AI investment to military procurement. The attacks also establish a precedent that commercial AI infrastructure is now a legitimate military target in regional conflicts, fundamentally changing risk calculations for hyperscale operators.
Pentagon Deploys Political Operative to Break Anthropic's Safety Restrictions on Military AI
The Department of Defense has retained David Plouffe, former Uber executive and Obama campaign manager, to negotiate the removal of safety restrictions that Anthropic has placed on military use of its Claude AI models, as reported by Bloomberg. The move signals that the Pentagon views this as a political negotiation requiring campaign-style persuasion tactics rather than a technical compliance discussion. Plouffe's background is in political messaging and regulatory arbitrage, not in AI safety or military procurement; at Uber, he led efforts to circumvent local transportation regulations through public pressure campaigns.
The choice of negotiator reveals the government's strategy: treat AI safety restrictions as political obstacles to be overcome through pressure rather than legitimate technical constraints to be accommodated. This stands in stark contrast to traditional defence procurement, where contractor restrictions on weapon system use are typically negotiated through formal channels with military lawyers and program managers. According to The Guardian, the dispute centres on whether Anthropic can maintain restrictions preventing use of Claude for surveillance without judicial oversight and for autonomous weapons systems without human authorization, precisely the issues that drove Kalinowski's resignation from OpenAI.
Signals & Trends
Critical AI Infrastructure Now Explicitly Within Scope of Kinetic Warfare
The Iranian strikes on UAE datacentres represent more than isolated tactical actions: they establish that adversaries now view commercial AI infrastructure as legitimate military targets equivalent to communications nodes or power generation. This creates a forcing function for governments hosting major AI compute clusters: they must either provide military-grade defence for ostensibly civilian facilities or accept that private sector AI investment will migrate to jurisdictions with established air defence umbrellas. The targeting also suggests that nations without power projection capabilities may view disruption of adversaries' AI infrastructure as an asymmetric equaliser, potentially triggering a new category of infrastructure attacks distinct from traditional cyberwarfare. Policy implications include whether datacentres should be classified as critical infrastructure requiring government protection, and whether international law governing attacks on civilian infrastructure applies to facilities that may serve dual commercial and military purposes.
Government Coercion of AI Companies Shifting from Legal Frameworks to Political Pressure
The Pentagon's approach to both OpenAI and Anthropic reveals a deliberate strategy to bypass formal procurement processes and legal frameworks in favour of direct political and reputational pressure to secure unrestricted access to frontier AI systems. The deployment of David Plouffe, known for regulatory arbitrage rather than defence contracting, signals that the government believes AI companies will respond to the same playbook used against Uber's municipal opponents: create political costs for resistance. This represents a significant departure from traditional defence procurement, where restrictions and use limitations are negotiated through formal channels with legal enforceability. The pattern suggests the Pentagon believes the urgency of deploying AI capabilities outweighs the value of legally binding use restrictions, and that companies' dependence on government goodwill for regulatory approval makes them vulnerable to informal coercion. The question for other jurisdictions is whether to establish clearer legal frameworks governing what restrictions companies can place on government use, or whether to remain in the current grey zone where informal pressure determines outcomes.