
Public Policy & Governance

13 sources analysed for today's brief

Top Line

Baltimore has filed a consumer protection lawsuit against xAI over Grok's generation of nonconsensual sexual images, marking the first major municipal legal action targeting AI model outputs rather than just deepfake distribution.

The Trump administration's December executive order attempting to preempt state AI regulation through funding threats and litigation sets up a constitutional federalism battle ahead of the midterms, with no major legislative champions emerging on either side.

Eight months after announcing an OpenAI partnership touted as transformative for public services, the UK government has conducted zero trials of the technology, exemplifying the gap between AI procurement announcements and actual deployment.

Palantir has secured access to Financial Conduct Authority regulatory data for fraud investigation, extending the U.S. company's reach across the NHS, police, military and now financial regulatory intelligence, with minimal public oversight of the data-sharing terms.

Anthropic's legal challenge to its Pentagon supply chain ban creates the first direct judicial test of whether the executive branch can weaponise procurement decisions to compel AI companies to support autonomous weapons development.

Key Developments

Municipal governments pursue novel AI liability theories through consumer protection law

Baltimore's lawsuit against xAI targets the company's marketing of Grok as a general-purpose assistant while allegedly failing to disclose risks of generating nonconsensual sexual content, according to The Guardian. The legal theory relies on consumer protection statutes rather than existing image-based abuse laws, potentially establishing precedent for holding AI model developers liable for foreseeable harmful outputs rather than just their distribution. This represents a shift from targeting individual deepfake creators to pursuing the infrastructure providers, though the case faces significant Section 230 immunity questions that Baltimore will need to overcome.

The timing coincides with increasing reports of AI-generated nonconsensual imagery, but Baltimore's decision to act at the municipal level reflects the vacuum left by federal inaction. The lawsuit may prompt other cities to pursue similar actions, creating a patchwork of local AI liability standards that could pressure Congress to establish federal frameworks — or alternatively, provide ammunition for industry arguments that only federal preemption can prevent regulatory fragmentation.

Why it matters

If successful, this establishes a consumer protection pathway for AI harms that bypasses the need for new AI-specific legislation and could be replicated by hundreds of municipalities.

What to watch

Whether the court accepts that failure to disclose generative risks constitutes deceptive marketing, and whether xAI's Section 230 defences succeed.

Trump's executive order on state AI regulation forces federalism confrontation

The December executive order directing federal agencies to sue states attempting AI regulation and to withhold funding represents the administration's most aggressive move to establish federal preemption, as reported by The Guardian. The order explicitly supports industry lobbying against state-level constraints but provides no alternative federal regulatory framework, creating a deliberate void rather than harmonisation. Several states, including California and Colorado, have existing AI laws that could be affected, though no lawsuits have yet been publicly filed.

The constitutional question centres on whether AI regulation falls under traditional state police powers or whether federal commerce clause authority allows preemption absent specific federal standards. The order's funding threats echo previous failed attempts to coerce states over immigration enforcement, which courts rejected. What makes this particularly notable is the absence of strong legislative champions on either side ahead of the midterms: Republicans are divided between libertarian and national security factions, while Democrats lack a coherent position beyond vague calls for 'guardrails'.

Why it matters

This creates legal uncertainty for any state attempting to regulate AI and may accelerate demand for federal legislation simply to establish what rules actually apply.

What to watch

Whether any state challenges the order in court before federal agencies initiate litigation, and whether Republican governors join Democratic states in defending state authority.

UK government AI partnerships deliver political announcements but not operational deployments

Freedom of Information requests reveal the UK government has not conducted any trials involving OpenAI technology eight months after signing a memorandum of understanding promoted as enabling 'AI-led public service reform', reports The Guardian. This follows a pattern where ministerial announcements generate headlines but implementation stalls due to procurement rules, data protection requirements, and departmental capacity constraints. The OpenAI agreement contained no binding commitments or timelines, functioning primarily as a statement of intent.

Meanwhile, Palantir has secured an FCA contract to analyse financial regulation data for fraud detection, according to The Guardian, demonstrating that established government contractors with existing security clearances can rapidly expand access while high-profile partnerships languish. Palantir now holds contracts spanning NHS, police, military, and financial regulatory data, creating a single-vendor concentration risk that privacy groups have consistently opposed but ministers continue to ignore. The FCA contract's terms, including data retention and access controls, remain undisclosed.

Why it matters

The implementation gap between announced partnerships and operational deployments undermines the government's credibility on AI readiness and suggests procurement theatre rather than transformation.

What to watch

Whether the OpenAI partnership produces any concrete pilots before the next election cycle, and whether Parliament scrutinises Palantir's expanding data access.

Pentagon weaponises procurement to force AI company compliance with autonomous weapons development

Anthropic's preliminary injunction hearing on 24 March challenges the Department of Defense's supply chain designation that prohibits federal agencies and contractors from using its AI models, as covered by The Guardian and Lawfare. The designation followed Anthropic's refusal to allow its technology to support autonomous weapons systems, representing the administration's use of procurement restrictions as coercive tools rather than security-based exclusions. The legal question is whether the executive branch can impose broad bans on commercial AI services based on a company's refusal to support specific military applications rather than on security vulnerabilities or foreign ownership.

This differs fundamentally from cases like the TikTok ban, which rested on foreign adversary concerns, as noted in Lawfare's archive discussion. The Pentagon's action suggests a new doctrine where AI companies must either accept military end-uses or face exclusion from all federal business, including civilian research and administrative functions. If upheld, this establishes precedent allowing the executive to punish private sector AI governance decisions through procurement leverage without congressional authorisation.

Why it matters

The case determines whether AI companies can maintain use-case restrictions without facing retaliatory federal procurement bans, directly affecting the viability of principled AI development policies.

What to watch

The court's analysis of whether supply chain designation authority extends to punishing commercial decisions unrelated to security risks, and whether other AI companies adjust their military collaboration policies in response.

Export control enforcement gaps undermine stated policy of slowing Chinese AI development

Analysis from the Institute for AI Policy and Strategy concludes that H200 exports to China would 'substantially boost Chinese frontier AI development and deployment capabilities' for military applications, with verification requirements being 'unenforceable' once chips enter China, according to IAPS research. This assessment directly contradicts claims by the Bureau of Industry and Security (BIS) that end-user controls can prevent diversion to military uses. The research argues that semiconductor manufacturing equipment controls have proven more effective at maintaining U.S. advantages, as detailed in separate IAPS analysis, because equipment is harder to conceal and repurpose than individual chips.

The gap between stated export control policy — preventing Chinese military AI advancement — and enforcement reality suggests either wilful blindness or capacity limitations at BIS. Industry has consistently lobbied for relaxed chip export rules citing competitiveness concerns, and the H200 case indicates those arguments may be prevailing over security assessments. The Commerce Department has not publicly responded to the IAPS analysis or adjusted export license policies.

Why it matters

Export controls represent the primary U.S. tool for maintaining AI capability advantages over China, and enforcement failures undermine the entire strategic rationale while creating false confidence in policymakers.

What to watch

Whether BIS tightens H200 export conditions in response to these findings, or whether industry access concerns continue to dominate over national security assessments.

Signals & Trends

AI procurement is becoming a political weapon rather than a technical process

The Anthropic-Pentagon case and UK Palantir expansions demonstrate procurement decisions driven by political compliance requirements rather than technical merit evaluations. When choosing not to support autonomous weapons results in categorical federal bans, and when a single vendor achieves cross-agency data access without competitive processes, procurement becomes patronage. This undermines the stated goals of both national security (by excluding capable providers over policy disputes) and value for money (by entrenching incumbents). The trend suggests governments are using AI contracts to enforce political alignment rather than optimise capabilities.

Municipal and state governments are filling federal AI governance vacuum through unconventional legal theories

Baltimore's consumer protection lawsuit and state resistance to federal preemption indicate sub-national governments are finding creative paths around congressional inaction. Consumer protection, insurance regulation, and employment law all provide existing statutory frameworks that can be interpreted to cover AI harms without waiting for AI-specific legislation. This creates regulatory fragmentation that industry opposes but also generates case law and practical experience that may inform eventual federal action. The pattern resembles privacy law development, where state action forced GDPR-like compliance burdens despite federal gridlock.

Healthcare AI deployment is prioritising cost reduction over clinical safety validation

Kaiser Permanente's use of AI screening systems that allegedly delay patient access to therapists, as reported by The Guardian, exemplifies healthcare AI deployment optimising for administrative efficiency rather than patient outcomes. Therapists claim patients experiencing severe mental health crises are being routed through AI triage instead of emergency services, with Kaiser defending the system as delivering 'timely, high-quality care' without providing outcome data. This mirrors broader healthcare AI patterns where cost savings are measurable and valued while clinical safety impacts remain opaque until adverse events force scrutiny.
