Back to Daily Brief

Public Policy & Governance

10 sources analyzed to give you today's brief

Top Line

The Pentagon has contracted seven major AI firms — including OpenAI, Google, Nvidia, Microsoft, Amazon Web Services, SpaceX, and Reflection — for classified military applications under 'any lawful use' terms, marking a significant hardening of the U.S. government's posture on military AI adoption and notably excluding Anthropic amid a reported dispute over misuse concerns.

Democratic state lawmakers in Massachusetts are publicly pushing back against federal AI preemption efforts, signaling that bipartisan consensus on a federal AI framework that overrides state law remains structurally out of reach in the near term.

The EU AI Act's deepfake prohibition is being actively shaped through the AI Omnibus procedure, with civil society groups including AlgorithmWatch pressing for accountability frameworks that bind AI companies, platforms, and individual perpetrators simultaneously.

A harmonized risk-reporting standard has been proposed by IAPS to bridge California's SB 53, the federal RAISE Act, and the EU AI Code of Practice, targeting the governance gap around frontier models in pre-release internal deployment.

The first UN Global Dialogue on AI Governance has opened, with civil society organizations using the forum to push linguistic diversity as a structural equity issue in global AI governance architecture.

Key Developments

Pentagon's 'Any Lawful Use' Military AI Contracts Set a Permissive Governance Precedent

The U.S. Department of Defense has signed agreements with SpaceX, OpenAI, Google, Nvidia, Reflection, Microsoft, and Amazon Web Services to deploy AI capabilities for classified military purposes, with contracts explicitly permitting 'any lawful use' of the technology. The breadth of that framing is the critical governance detail: it delegates discretion over use-case boundaries to the companies themselves, subject only to existing law rather than any bespoke DoD ethical AI framework or use-case restrictions. The Guardian reports that Anthropic was excluded, with a public dispute over potential AI misuse cited as the reason — a notable signal that at least one major frontier lab is maintaining a harder line on military applications.

This contrasts sharply with the EU's approach under the AI Act, which classifies certain AI uses in military and law enforcement contexts under high-risk or prohibited categories and requires conformity assessments. The U.S. government has no equivalent statutory framework governing military AI procurement. The Pentagon's own Responsible AI principles from 2020 are non-binding guidance, not enforceable compliance mandates, meaning the 'any lawful use' standard is effectively the operative constraint. The exclusion of Anthropic, and the implicit competitive pressure on other labs to accept permissive terms to maintain government contracts, creates a structural incentive problem that no current U.S. legislation addresses.

Why it matters

The DoD's permissive contracting terms establish a de facto governance standard for military AI that sidesteps the emerging domestic and international regulatory frameworks, setting a precedent that will be difficult to reverse once operationally embedded.

What to watch

Whether Congress moves to impose statutory guardrails on military AI procurement, or whether the Anthropic-DoD dispute resolves in a way that signals where the actual red lines are in practice.

Federal AI Preemption of State Law Faces Democratic Fracture — Massachusetts Lawmakers Lead Pushback

A letter from Massachusetts state legislators opposing federal AI talks — reported by Politico — is the most concrete recent indicator that Democratic coalition support for a federal AI framework that preempts state-level regulation is fracturing along predictable lines. State legislators who have invested political capital in their own AI governance bills view federal preemption not as a floor but as a ceiling, and the Massachusetts letter makes that position explicit. This follows similar dynamics in California, where Governor Newsom's veto of SB 1047 in 2024 did not end the state-level legislative push — it accelerated it, producing SB 53 and a pipeline of successor bills.

The structural problem for federal AI legislation is that the Republican-controlled Congress's primary interest in preemption is deregulatory — removing state-level obligations rather than establishing new federal ones. Democratic state lawmakers understand this trade-off clearly, and their opposition reflects a rational calculation that no federal floor is preferable to a federal ceiling that locks out stricter state action. For policy professionals tracking legislative timelines, this dynamic makes broad federal AI legislation with preemption provisions significantly harder to advance in the current session.

Why it matters

Democratic state-level resistance to federal preemption removes a key political building block for comprehensive federal AI legislation, effectively extending the period of fragmented, state-led AI governance in the United States.

What to watch

Whether federal AI legislation moves forward with narrower preemption scope — limited to specific sectors or use cases — as a compromise to bring Democratic votes, or stalls entirely at committee level.

Fine-Tuning Governance Gap: CDT and MIT Research Flags Safety Drift Risk in Downstream AI Adaptation

A joint report from the Center for Democracy and Technology and MIT's Algorithmic Alignment Group finds that fine-tuning foundation models for specialized applications produces unpredictable safety drift — meaning downstream developers may inadvertently or deliberately degrade safety properties established in the base model. CDT frames this as a critical governance gap: current regulatory frameworks, including the EU AI Act and U.S. state-level proposals, primarily address the foundation model developer, but accountability for fine-tuned variants remains underspecified. The EU AI Act's provisions on high-risk AI systems place obligations on deployers, but the technical mechanism of safety degradation through fine-tuning is not explicitly addressed in the Act's conformity assessment requirements.

This research dovetails with a separate IAPS proposal for a harmonized risk-reporting standard covering pre-release internal model use across California's SB 53, the federal RAISE Act, and the EU Code of Practice. IAPS argues that frontier companies run their most capable models internally for weeks before public release, creating a reporting blind spot in all three frameworks. Together, these two research outputs identify two concrete points in the AI development and deployment chain — internal pre-release use and post-release fine-tuning — where existing governance instruments have no effective reach.

Why it matters

If fine-tuning reliably degrades safety properties in ways current compliance frameworks cannot detect or assign liability for, the entire architecture of foundation-model-level regulation becomes insufficient as a risk management strategy.

What to watch

Whether EU AI Office guidance on the Code of Practice, or U.S. state legislation, moves to explicitly address fine-tuning accountability chains in the next legislative cycle.

EU AI Act Deepfake Ban Implementation Enters Contested Regulatory Terrain

AlgorithmWatch has submitted recommendations through the AI Omnibus procedure to operationalize the EU AI Act's prohibition on AI-generated deepfakes used in sexualized violence contexts. AlgorithmWatch argues that effective implementation requires simultaneous accountability for three actor categories — AI developers, hosting platforms, and individual perpetrators — and that current enforcement proposals focus too narrowly on platform-level takedown obligations at the expense of upstream developer liability and downstream criminal accountability. This is a live regulatory design question: the AI Act's prohibited practices provisions are in effect as of February 2025, but national competent authorities across EU member states have uneven enforcement capacity and divergent interpretations of where the primary compliance burden falls.

The AI Omnibus procedure, which is the vehicle for refining and supplementing the AI Act's implementation, is emerging as the key battleground for these design choices. Civil society organizations are using it to push for stronger victim protection mechanisms, while industry associations are pressing for narrower, more predictable liability standards. The outcome will set precedent for how the EU handles the broader class of AI-enabled harms that cross the developer-platform-user accountability chain.

Why it matters

How the EU resolves the multi-actor accountability question for AI-generated sexual abuse material will establish the liability template for other categories of AI-enabled harm under the AI Act, with direct implications for compliance program design across the industry.

What to watch

The European Commission's formal guidance on Article 5 prohibited practices enforcement, expected through 2026, and whether member state authorities begin coordinated enforcement actions that reveal the operative interpretation.

Signals & Trends

A Governance Accountability Gap Is Opening Between Foundation Model Developers and the Downstream Deployment Chain

Multiple research outputs this week — CDT and MIT on fine-tuning safety drift, IAPS on pre-release internal use reporting — converge on the same structural finding: AI governance frameworks are disproportionately focused on the moment of public model release, leaving both the period before release and the post-release adaptation chain in a regulatory vacuum. The EU AI Act's conformity assessment obligations and U.S. state-level developer duties are both anchored to the foundation model as released, not as modified or internally used. As frontier models become more powerful and more widely fine-tuned for specialized applications, this gap will generate increasingly significant real-world harms that fall outside the reach of existing compliance mandates. Regulators who want to close this gap face a technical challenge — audit and reporting requirements for fine-tuned variants are difficult to operationalize at scale — and a political one, as downstream developers are a much larger and more diffuse constituency than foundation model labs.

U.S. Military AI Contracting Is Outpacing Governance Frameworks and Creating Irreversible Facts on the Ground

The Pentagon's 'any lawful use' contracts with seven AI companies represent a pattern, not a one-off procurement decision. The U.S. government is operationalizing AI in classified military contexts faster than any statutory or regulatory framework can catch up. The exclusion of Anthropic — and the implicit signal that companies willing to accept broader use terms get access to lucrative government contracts — creates a race-to-the-bottom dynamic among frontier labs on military AI ethics. This is occurring in a political environment where Congress has shown no appetite for statutory constraints on military AI procurement, and where the DoD's internal responsible AI principles carry no enforcement weight. International arms control frameworks have no current mechanism to address AI-enabled military capabilities. Policy professionals should track whether this procurement pattern becomes a template for allied governments, particularly through NATO or Five Eyes coordination, as that would embed permissive military AI governance norms across multiple jurisdictions simultaneously.

The U.S. State-Federal AI Governance Tension Is Entering a Decisive Phase With No Clear Resolution Mechanism

The Massachusetts lawmakers' letter is one data point in a broader pattern: state-level AI governance is accelerating precisely as federal preemption efforts intensify, and the political coalitions for a negotiated settlement between state and federal approaches do not currently exist. California's SB 53 is in force; other states have active pipelines. Federal legislation with broad preemption cannot pass without Democratic votes that state-level lawmakers are now explicitly withholding. The result is a period of sustained jurisdictional fragmentation that creates genuine compliance complexity for AI developers operating nationally, and that will increasingly produce direct legal conflicts between state mandates and federal agency positions. The EU, by contrast, has resolved this through the AI Act's explicit preemption of member state AI-specific legislation — a governance architecture the U.S. federal system structurally cannot replicate without a constitutional framework that does not exist.

Explore Other Categories

Read detailed analysis in other strategic domains