The Pentagon's 'AI-First' Doctrine Is Outrunning Every Governance Framework
The Pentagon's 'any lawful use' contracting terms with seven AI firms represent an enacted governance standard, not a temporary procurement decision. The operative constraint on military AI use is now existing law rather than any bespoke ethical framework. The exclusion of Anthropic, the one lab that objected to the breadth of those terms, creates a structural incentive for competitors to accept permissive conditions in order to preserve access to government contracts. Meanwhile, the appointment of the former Pentagon think-tank head to Anthropic's leadership, and the deeper revolving door between national security establishments and frontier labs, are hardening the alignment between commercial AI development priorities and US defence doctrine in ways that will shape capability trajectories for years.
For allied governments, the 'AI-first' doctrine simultaneously creates an interoperability imperative and an architectural dependency risk. For strategists tracking escalation dynamics, AI-accelerated military decision cycles compress crisis windows in scenarios such as Taiwan or the South China Sea. No current international arms control mechanism addresses AI-enabled military capabilities, and the DoD's own responsible AI principles remain non-binding guidance. The precedent being set through procurement is therefore the operative governance standard by default, and it is one that other advanced economies, particularly through NATO and Five Eyes coordination, may feel pressure to replicate.