Public Policy & Governance
Top Line
Civil society groups are pressing Rep. Ted Lieu to firewall his bipartisan American Leadership in AI Act from broader state preemption efforts, revealing that federal AI legislation is becoming a vehicle for a politically charged jurisdictional fight that will determine whether states retain their ability to regulate AI independently.
The European Commission opened a formal public consultation on draft transparency obligation guidelines under the AI Act on May 8, closing June 3 — the first concrete implementation guidance to face external scrutiny, marking a shift from framework to enforceable compliance.
CDT and EPIC filed formal objections to HUD's expanded AI use in its customer relationship management system, flagging that a federal agency is deploying AI at scale with inadequate privacy safeguards and without meaningful public accountability.
A parliamentary warning that NHS England granted Palantir 'unlimited access' to identifiable patient data before adequate governance was in place signals a growing transatlantic pattern of procurement outpacing oversight in public sector AI adoption.
The Pentagon-Anthropic procurement dispute is now assessed to carry cascading effects on civilian federal agencies and sub-federal governments, exposing how a single high-profile contract dispute can fracture the entire government AI supply chain.
Key Developments
Federal Preemption Battle Fractures the US AI Legislative Landscape
A coalition including the Center for Democracy and Technology wrote to Rep. Ted Lieu (D-CA) urging him to ensure his bipartisan American Leadership in AI Act is not bundled with, or used as political cover for, broader congressional efforts to preempt state AI laws. The letter is a concrete intervention in an accelerating legislative fight: proponents of federal preemption argue that a patchwork of state laws creates compliance burdens for industry, while civil society groups and many state officials contend that state-level regulation — including laws covering algorithmic discrimination, automated decision-making in employment, and consumer protection — represents the only active layer of enforceable protection given federal inaction (CDT).
This is a legislative process concern, not a substantive objection to Lieu's bill itself — the coalition expressed support for its underlying goals. The risk they are identifying is procedural: attaching a bipartisan, relatively uncontroversial measure to a sweeping preemption rider in order to force through the latter. This tactic has precedent in tech policy. The outcome of this fight will determine whether states such as California, Colorado, and Texas retain authority to enforce their own AI rules, or whether federal legislation creates a ceiling rather than a floor.
EU AI Act Enters Implementation Phase with First Formal Transparency Guidelines Consultation
The European Commission opened a consultation on draft guidelines covering transparency obligations under the AI Act on May 8, with a closing date of June 3. This is a narrow but significant procedural milestone: it represents the first set of implementation guidelines under the Act to be opened for structured external feedback, moving the regulation from political text to operational compliance standard. The transparency obligations in question cover requirements for providers to disclose when users are interacting with AI systems, and to label AI-generated content — areas where the draft guidelines will set the interpretive baseline that national market surveillance authorities are expected to apply (European Commission).
A separate EU call for proposals under the Digital Europe Programme — DIGITAL-2025-AI-DATA-10-COMPLIANCE — is seeking projects developing digital tools for regulatory compliance through data, with an information session scheduled for June 8. Taken together, these two actions signal that Brussels is moving to build both the interpretive framework and the technical infrastructure for AI Act enforcement simultaneously, which is ambitious given that member state authorities are still building capacity.
NHS-Palantir Controversy Exposes Governance Failures in Public Sector AI Procurement
UK MPs issued a formal warning that NHS England has granted Palantir and other contractors access to identifiable patient data before adequate data governance frameworks were in place, in the course of building an AI-integrated NHS platform. The parliamentary intervention — characterising the arrangement as 'dangerous' — is a political escalation that carries real procedural weight: it creates a record for the Information Commissioner's Office and could trigger a formal parliamentary inquiry. The core governance failure identified is sequencing: procurement and data access preceded, rather than followed, the establishment of privacy safeguards and accountability mechanisms (The Guardian).
This pattern — where urgency around AI deployment leads public sector bodies to grant commercial access before governance is complete — is not unique to the NHS. CDT and EPIC's objection to HUD's expanded AI use in its CRM system reflects an identical dynamic in the US federal context: a System of Records Notice was filed to expand AI capabilities and data collection categories, but civil society groups argue the privacy framework underpinning it is inadequate (CDT). The convergence of UK and US examples in the same news cycle suggests a systemic procurement governance failure, not isolated incidents.
Pentagon-Anthropic Dispute Sends Shockwaves Through Government AI Procurement
CDT's analysis of the Pentagon-Anthropic procurement dispute broadens the frame beyond defence to examine its effects on civilian federal agencies and state, local, and tribal governments. The core concern is supply chain fragility: if a high-profile contract dispute between DoD and a leading frontier AI provider disrupts or conditions access to that provider's models, civilian agencies that have built workflows or procurement plans around those capabilities face cascading disruption. CDT argues the dispute also raises questions about whether AI providers can assert conditions on government use that effectively constrain lawful agency operations (CDT).
This analysis intersects with the broader White House coordination problem identified by Politico, which reports that AI lobbyists are frustrated by what they describe as a lack of organisational coherence in Trump administration AI policy — specifically mixed signals on how rigorously new AI models will be vetted for government use (Politico). The combination of a fragmented executive posture and a high-profile procurement dispute creates significant uncertainty for agencies trying to build durable AI procurement strategies.
AI Model Security: Executive and Legislative Actions on Distillation Attacks Fall Short
The Institute for AI Policy and Strategy published a policy memo assessing two recent US government actions on AI distillation attacks — the OSTP National Security Technology Memorandum and the proposed Deterring American AI Model Theft Act of 2026 — and concludes that both leave significant gaps. Distillation attacks allow adversaries to extract the capabilities of a powerful AI model by querying it and training a derivative model on the outputs, potentially circumventing export controls and IP protections. The memo argues that the NSTM lacks enforcement mechanisms and that the proposed legislation does not adequately address the technical vectors through which distillation occurs (IAPS).
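For readers less familiar with the mechanism, the attack pattern described above can be illustrated with a deliberately toy sketch: a 'teacher' model exposed only as a black-box query interface, and an attacker who fits a 'student' purely to harvested input/output pairs. The linear teacher, the query budget, and the least-squares fit are all illustrative assumptions for clarity — real distillation targets far more complex models and uses gradient-based training — but the structure of the attack is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "teacher": stands in for a proprietary model whose weights
# the attacker never sees. Here it is just a fixed linear map.
W_teacher = rng.normal(size=(4, 3))

def query_teacher(x):
    """Black-box API access: inputs in, outputs out, no internals exposed."""
    return x @ W_teacher

# Step 1: the attacker generates queries and harvests the teacher's outputs.
queries = rng.normal(size=(1000, 4))
outputs = query_teacher(queries)

# Step 2: a derivative "student" is fit purely to the harvested
# input/output pairs -- no access to the teacher's weights required.
W_student, *_ = np.linalg.lstsq(queries, outputs, rcond=None)

# The student now replicates the teacher's behaviour on unseen inputs,
# which is why query access alone can leak model capability.
test_inputs = rng.normal(size=(10, 4))
gap = np.max(np.abs(query_teacher(test_inputs) - test_inputs @ W_student))
print(f"max deviation on held-out inputs: {gap:.2e}")
```

The policy-relevant point the sketch makes concrete is that nothing in the attack requires exfiltrating weights: ordinary, individually lawful API queries suffice, which is why export controls framed around model files struggle to cover this vector.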
This is a concrete example of the implementation gap problem: both the executive and legislative actions exist and are directed at a real threat, but the technical specificity required to make them enforceable has not been achieved. The Foreign Policy analysis of definitional incoherence across jurisdictions — governments cannot agree on what AI is — compounds this problem: without agreed definitions of what constitutes a 'model,' 'distillation,' or 'theft' in a technical sense, legislation and executive orders struggle to produce actionable compliance obligations (Foreign Policy).
Signals & Trends
Definitional Incoherence Is Becoming the Primary Structural Barrier to AI Governance
The Foreign Policy analysis that governments cannot agree on what AI 'is' is not a philosophical observation — it is a practical enforcement problem. Export control regimes require precise object definitions. Transparency obligations require agreed system classifications. Liability frameworks require clear demarcation of what constitutes an AI decision. The EU AI Act attempted to resolve this with a tiered risk-based taxonomy, but its definitions are already being contested in the transparency guidelines consultation. The US legislative landscape is producing multiple bills using incompatible definitional frameworks. The GUARD Act's shift from broad AI systems to 'AI companions' is a microcosm of a wider pattern: definitions are being negotiated in real time through the political process, creating windows where enforcement is technically impossible. Policy professionals should treat definitional clarity — not just political will — as a prerequisite for any enforceable AI governance regime.
Procurement Is Outpacing Governance Across Jurisdictions, Creating Retrospective Accountability Crises
The NHS-Palantir case and the HUD SORN objection are not coincidental. They reflect a structural pattern in which public sector institutions, facing pressure to demonstrate AI adoption, are granting data access and deploying AI systems before internal governance frameworks — privacy impact assessments, algorithmic accountability mechanisms, procurement oversight — are in place. Once commercial relationships and operational dependencies are established, remediation is costly and politically difficult. This pattern is likely to generate a wave of retrospective parliamentary, congressional, and inspector-general scrutiny over the next 12 to 18 months, as early-adopter agencies face accountability for decisions made in 2024 and 2025. The governance lesson is that sequencing matters: accountability frameworks must precede, not follow, data access grants.
The US State-Federal Preemption Fight Will Define the Effective Scope of AI Accountability for Years
The CDT coalition letter to Rep. Lieu is a visible manifestation of what is now the defining structural tension in US AI policy. With federal legislation stalled or limited in scope, states have become the primary site of enforceable AI accountability rules. Industry's preference for federal preemption — which would effectively cap state authority — is being pursued through procedural vehicles including bipartisan bills, reconciliation riders, and appropriations attachments. The outcome will determine whether the US AI governance landscape resembles the GDPR model (strong federal floor, state variation above it) or the early internet model (federal preemption that neutralised state consumer protection rules). Given the Trump administration's deregulatory posture, a federal preemption outcome that sets a weak ceiling is a plausible near-term scenario that state-focused advocates are actively working to prevent.