Public Policy & Governance

Top Line

The Center for Democracy & Technology's (CDT) one-year audit of the Trump administration's OMB AI guidance finds federal agencies accelerating AI adoption while core accountability safeguards remain unimplemented — the clearest available evidence of an enforcement gap between federal AI policy and agency practice.

Florida's Attorney General has opened a criminal investigation into OpenAI over ChatGPT's alleged role in advising a mass shooter, marking the first state-level criminal probe of an AI company over real-world harm causation — a significant escalation beyond civil or regulatory action.

The Pentagon's FY2027 budget requests $54 billion for its Defense Autonomous Warfare Group — a 24,000% increase — while independent experts warn the military lacks the governance frameworks to manage the associated risks of autonomous lethal systems.

The Trump administration has moved to pause its own appeal in a legal dispute with Anthropic and is reportedly softening its broader stance toward the company, signalling that geopolitical AI competition is reshaping the administration's approach to domestic AI industry relations.

The Metropolitan Police is in active talks to procure Palantir's AI intelligence analysis platform, triggering internal data sovereignty concerns about allowing a US firm with ICE and Israeli military contracts to process highly sensitive UK law enforcement data.

Key Developments

Federal AI Governance: One Year of Accelerated Adoption, Incomplete Accountability

The Center for Democracy & Technology (CDT) has published a one-year retrospective on the Trump administration's updated OMB AI guidance, which was issued in April 2025 to govern federal agency procurement and use of AI. CDT's core finding is that agencies have moved quickly to expand AI use but have lagged on implementing the safeguards the guidance nominally required — including risk assessments, transparency mechanisms, and civil liberties impact reviews. This confirms the pattern CDT flagged at the guidance's release: that implementation fidelity, not the text of the guidance itself, would be the decisive variable. See CDT.

The practical consequence is that federal agencies are deploying AI in high-stakes contexts — benefits adjudication, law enforcement support, immigration processing — without the oversight infrastructure the OMB guidance anticipated. This is not unique to the current administration; implementation gaps have characterised federal technology governance for decades. But the pace of AI deployment raises the stakes. For policy professionals, the relevant question is whether the OMB guidance has any enforcement mechanism, and CDT's analysis suggests the answer is largely no — agencies face no binding consequence for non-compliance with safeguard requirements.

Why it matters

The gap between announced federal AI governance policy and actual agency compliance is now documented by a credible civil society auditor, providing a concrete evidence base for Congressional oversight inquiries or GAO investigations.

What to watch

Whether any Congressional committee uses CDT's findings as the basis for formal agency oversight hearings, and whether OMB issues any compliance-oriented follow-up guidance before the end of FY2026.

Florida's Criminal Investigation into OpenAI: State-Level Enforcement Escalates

Florida Attorney General James Uthmeier has announced a criminal investigation into OpenAI, focused on whether ChatGPT provided material assistance — described as 'significant advice' — to the suspect in a recent campus mass shooting. This is qualitatively different from the FTC civil investigations or state consumer protection inquiries that have characterised AI regulatory enforcement to date. A criminal probe, even if it does not result in charges, forces OpenAI to engage with prosecutorial discovery processes and creates reputational and legal exposure that civil proceedings do not. See The Guardian.

The legal theory is not yet publicly articulated — it is unclear whether the AG is pursuing a products liability angle, a criminal facilitation theory, or an unfair business practices framing with criminal dimensions. Florida's track record on tech enforcement is mixed; the state has pursued high-profile investigations that did not result in significant legal outcomes. However, the action will put pressure on OpenAI's trust and safety practices and may accelerate state-level legislative proposals requiring AI platforms to implement harm-prevention protocols. Other state AGs are likely watching closely.

Why it matters

Criminal-level state enforcement against a frontier AI company over alleged real-world harm represents a significant escalation in AI accountability mechanisms and will shape how other state prosecutors frame their own AI enforcement options.

What to watch

Whether the Florida AG publicly identifies a specific criminal statute under which OpenAI could be charged, and whether other states with pending AI liability legislation use this investigation to accelerate their own legislative timelines.

Pentagon's $54 Billion Autonomous Warfare Budget: Governance Architecture Absent

The Department of Defense's FY2027 budget request includes over $54 billion for the Defense Autonomous Warfare Group — a programme focused on autonomous drone warfare that received a fraction of this funding in the prior year. The scale of the request signals a strategic commitment to AI-enabled lethal autonomy that goes well beyond prior DoD AI investment patterns. Critically, independent experts cited in reporting note that the military lacks the doctrinal and governance frameworks to responsibly manage autonomous weapons systems at this scale. See The Guardian.

From a governance standpoint, the relevant gap is between procurement velocity and policy readiness. DoD's existing AI ethics principles, adopted in 2020, and its Directive 3000.09 on autonomous weapons have not been updated to address the operational context implied by a 24,000% budget increase for autonomous combat systems. International humanitarian law obligations — particularly distinction, proportionality, and meaningful human control — are not addressed in the budget documents. This creates significant exposure for the US at multilateral forums, including ongoing UN Convention on Certain Conventional Weapons discussions where allied governments are pushing for binding autonomous weapons regulation.

Why it matters

The scale of the Pentagon's autonomous warfare investment without a commensurate governance update creates legal, diplomatic, and operational accountability gaps that will become politically salient once these systems are deployed in active conflict.

What to watch

Whether Congress attaches any human-in-the-loop requirements or independent oversight mechanisms to the autonomous warfare appropriation during the FY2027 budget process, and how NATO allies formally respond.

Trump Administration Retreats from Anthropic Dispute: Geopolitics Overrides Legal Position

The Trump administration has asked a federal judge to pause its own appeal in a legal dispute with Anthropic — a case arising from a temporary ruling that constrained certain government actions relating to the company — while simultaneously softening its broader adversarial stance toward the AI firm. Reporting from Politico indicates that lobbyists and policy officials describe a recalibration driven by the administration's wider AI competitiveness agenda — effectively treating Anthropic as a strategically important domestic asset rather than a regulatory target.

This episode illustrates a structural tension in US AI governance: the administration is simultaneously pursuing a deregulatory stance toward the AI industry, a geopolitical competition framing that treats frontier AI firms as national champions, and residual legal disputes that are now being selectively abandoned when they conflict with the competitiveness narrative. The practical implication is that enforcement actions against large frontier AI companies under the current administration face a ceiling imposed by industrial policy considerations. Anthropic's ability to benefit from this dynamic — despite being a company with explicit AI safety positioning that might otherwise attract regulatory scrutiny — reflects how the 'national champion' framing is reshaping enforcement calculus.

Why it matters

The administration's retreat from its Anthropic legal position establishes a de facto precedent that frontier AI companies with strategic national security relevance will receive preferential treatment in enforcement decisions, weakening the credibility of any future regulatory actions.

What to watch

Whether the DOJ formally withdraws its appeal or allows it to lapse, and whether Anthropic's apparent rehabilitation translates into procurement approvals at the Pentagon, which currently maintains formal restrictions on the use of Anthropic technology.

Metropolitan Police and Palantir: UK Law Enforcement AI Procurement Under Scrutiny

The Guardian's exclusive reporting reveals that the Metropolitan Police Service is in active commercial discussions with Palantir over an AI platform for automating criminal intelligence analysis. Internal Met concerns cited in the reporting centre on the data sovereignty implications of allowing a US-headquartered company — whose software is actively used by US Immigration and Customs Enforcement and the Israeli military — to process highly sensitive UK law enforcement data. See The Guardian.

The procurement, if concluded, would face scrutiny under the UK's existing AI in policing governance frameworks, including the College of Policing's algorithmic transparency standards and ICO guidance on law enforcement data processing. The political sensitivity is heightened by Palantir's associations with controversial US government programmes — associations that have previously derailed UK public sector deals, including NHS data contracts. This procurement is proceeding in a context where the UK government has opted against binding AI regulation in favour of sector-specific guidance, meaning there is no statutory pre-market approval mechanism that would apply. Whether existing procurement rules and data protection law are sufficient governance instruments for this type of high-risk AI deployment is the central question.

Why it matters

A Palantir contract with the Met would be the most significant AI procurement in UK policing to date and would test whether existing UK data protection and procurement governance can constrain high-risk AI adoption in law enforcement absent specific AI legislation.

What to watch

Whether the UK's Information Commissioner's Office or the Home Office's Biometrics and Surveillance Camera Commissioner formally engages with the proposed procurement before any contract is signed, and whether the Mayor of London's office — which has oversight of the Met — intervenes.

Signals & Trends

State-Level Criminal Enforcement Is Emerging as AI's Accountability Frontier

With federal regulatory action on AI stalled under an administration committed to deregulation, and Congressional AI legislation still in committee, state attorneys general are filling the enforcement vacuum — and Florida's criminal probe of OpenAI signals that they are willing to use criminal law instruments rather than limiting themselves to civil consumer protection actions. This is a structurally significant development: criminal investigations force discovery, impose reputational costs, and create precedents regardless of whether charges are ultimately filed. The pattern mirrors how state AGs drove tobacco, opioid, and social media accountability when federal agencies were passive. Policy professionals tracking AI governance should expect more state criminal probes, particularly in states with high-profile AI-related harms, and should anticipate that the legal theories being developed at the state level will eventually influence federal legislative drafting.

The 'National Champion' Dynamic Is Replacing Regulatory Logic in US AI Governance

The Trump administration's retreat from its Anthropic legal dispute, combined with the Pentagon's $54 billion autonomous warfare investment, reveals an AI governance model in which competitive geopolitical logic — primarily framed around China — is becoming the dominant filter for enforcement and procurement decisions. Companies perceived as strategically important to US AI leadership are gaining effective immunity from enforcement action, while regulatory agencies are being discouraged from taking actions that could disadvantage domestic AI firms internationally. This is not a formal policy; it is an emergent pattern visible across multiple simultaneous decisions. The governance risk is that accountability mechanisms become selectively applied based on a company's perceived national security value, creating a two-tier enforcement environment that rewards scale and government proximity over safety performance.

AI Deregulation Lobbying Is Migrating from Tech Sector to Healthcare Workforce Policy

The AI Now Institute's report on gig nursing platforms lobbying to rewrite state healthcare staffing laws is a signal that AI governance battles are expanding beyond the technology sector into domains with established professional licensing frameworks. Healthcare staffing regulations exist for patient safety reasons, not to protect incumbent businesses; their erosion via AI-enabled gig platforms raises questions that go beyond labour rights into clinical governance. The lobbying strategy — presenting deregulation as AI-enabled efficiency — is the same playbook used in transportation and delivery. State legislatures, many of which lack specialist technology policy capacity, are the arena where these fights are being decided, often without engagement from health regulators or AI governance bodies. This deserves tracking as a cross-sectoral pattern where AI adoption is being used as a vector for dismantling non-AI-specific regulatory frameworks.
