Public Policy & Governance

10 sources analyzed to give you today's brief

Top Line

The EU AI Act's implementation is under active renegotiation via the Digital Omnibus trilogue, with CDT Europe warning that both Parliament and Council amendments risk weakening fundamental rights protections — making the current negotiating text a critical inflection point for the Act's real-world enforceability.

Anthropic's Claude Mythos model has prompted a White House rapprochement with the company: the Trump administration is now in direct talks with CEO Dario Amodei after previously denouncing the firm, a reversal driven by both national security anxiety over Mythos's cyber capabilities and interest in government deployment.

UK banks will receive access to Mythos within days despite it being withheld from the general public, prompting warnings from senior finance figures and highlighting the absence of a formal regulatory framework governing tiered access to high-risk AI models.

The Australian Federal Court has issued binding new rules on generative AI use in legal proceedings, with explicit financial and professional penalties for lawyers whose AI-generated errors mislead the court — one of the first common-law jurisdictions to move from guidance to enforceable court rules.

The UK government has committed its first £500m sovereign AI fund investment, with Technology Secretary Liz Kendall publicly downplaying cybersecurity and jobs risks at precisely the moment Anthropic's Mythos is generating cross-government anxiety about AI-enabled threats.

Key Developments

Anthropic-White House Détente: National Security Anxiety Overrides Political Feuding

Two months after President Trump publicly labelled Anthropic a 'woke' company run by 'leftwing nut jobs,' the White House hosted an introductory meeting with CEO Dario Amodei, with Politico reporting the Trump administration is actively considering how to deploy Mythos even as its public feud with the company persists. The trigger is Mythos itself: the model's demonstrated cyber capabilities have spread anxiety across government agencies, creating both a threat calculus and a procurement interest that overrides prior political positioning.

The Institute for AI Policy and Strategy has formally characterised Mythos as a national security risk requiring urgent policymaker action, citing its status as Anthropic's most cyber-capable model to date. IAPS frames the broader trajectory of AI-enabled cyber capabilities as a structural policy problem, not merely a product-specific concern. The administration's dilemma is acute: Mythos represents both the threat vector and the potential defensive asset, and the White House lacks a coherent framework for managing that duality. The tiered access model Anthropic has adopted — currently limited to Amazon, Apple, Microsoft, and a small number of US firms — gives the company substantial leverage in these negotiations, since it controls who gets access and on what terms.

Why it matters

A formal government-Anthropic partnership on Mythos deployment would set a precedent for how the US government acquires and governs access to frontier AI models with dual-use risk profiles, bypassing the slower-moving legislative process entirely.

What to watch

Whether the White House engagement produces a formal procurement or access agreement with Anthropic, and whether Congress or any oversight body demands input into the terms under which a cyber-capable model is deployed across federal agencies.

Mythos Access Expands to UK Financial Sector Without Regulatory Framework in Place

Anthropic is extending Mythos access to UK banks within days, despite the model being withheld from public release due to assessed risk levels. The Guardian reports that senior finance figures have raised warnings about its impact, yet no UK authority (neither the FCA, nor the PRA, nor the AI Safety Institute) has publicly articulated a compliance framework governing institutional access to models in this risk category. The Bank of England's existing AI guidance for financial institutions was not designed for models assessed as too dangerous for general release.

This creates a significant implementation gap. The UK government's position, articulated by Technology Secretary Liz Kendall in the same week, is to 'seize the opportunity' of AI, with the first £500m sovereign fund investment announced. The Guardian reports Kendall explicitly downplayed cybersecurity risks in public statements, a posture that sits in direct tension with both the IAPS national security assessment of Mythos and the finance sector's own expressed concerns. The regulatory architecture has not caught up with the access decisions being made at the commercial level.

Why it matters

The UK's willingness to allow tiered institutional access to a high-risk model ahead of any formal regulatory framework sets a permissive precedent that will be difficult to walk back and may undermine the credibility of the AI Safety Institute's risk assessment function.

What to watch

Whether the FCA or PRA issues any guidance specific to Mythos deployment in financial services, and whether the AI Safety Institute's assessment of Mythos is made public or informs access conditions.

EU AI Act Omnibus Negotiations: CDT Europe Flags Fundamental Rights Rollback Risk

The Centre for Democracy and Technology Europe has published formal feedback on both the European Parliament's and the Council's negotiating positions in the ongoing trilogue on the Digital Omnibus proposal, which is amending the AI Act's implementation architecture. CDT Europe argues that proposed amendments from both co-legislators risk weakening fundamental rights protections relative to the original Act, a significant finding given that the Parliament was widely seen as the Act's stronger rights advocate during initial passage.

The Omnibus process is being used to adjust compliance timelines, conformity assessment requirements, and the scope of high-risk system classifications. CDT's intervention signals that civil society organisations now regard the trilogue as a live threat to the Act's substantive content, not merely a technical implementation exercise. This is the governance reality of the AI Act in 2026: the headline legislation is passed, but its actual regulatory bite is being determined in granular trilogue negotiations that receive far less public scrutiny than the original drafting process.

Why it matters

If the Omnibus trilogue produces a weakened text — even incrementally — it will define the Act's enforcement ceiling for years, since reopening agreed provisions requires a new legislative cycle.

What to watch

The finalised trilogue text and whether CDT's specific objections regarding fundamental rights safeguards are addressed; also whether the European Data Protection Supervisor or other independent bodies issue formal opinions on the negotiating positions.

Australian Federal Court Issues Binding AI Rules for Legal Proceedings

The Federal Court of Australia has moved from guidance to enforceable rules governing generative AI use in legal proceedings, with explicit penalties — financial and professional — for lawyers whose AI-generated errors mislead the court. The Guardian reports the new rules 'embrace' AI use while creating a clear liability framework: the lawyer, not the tool, bears professional responsibility for accuracy. This is a materially different regulatory posture from the UK's current approach, where bar associations have issued guidance but no binding court rules exist.

The Australian move is notable for its institutional source: courts, not legislatures or technology regulators, are establishing the enforceable norms. This is consistent with a broader common-law pattern where judicial administration rules are proving faster to operationalise than AI-specific legislation. The US federal court system has seen individual district judges issue local AI rules, but no system-wide binding standard has emerged. Australia's Federal Court, as a single national institution, can set uniform rules more efficiently.

Why it matters

Court-issued AI rules establish enforceable compliance obligations in a high-stakes professional context faster than most legislative processes, and the Australian model may influence other common-law jurisdictions considering similar frameworks.

What to watch

Whether Australia's state supreme courts and the High Court adopt equivalent rules, and whether UK or Canadian courts follow with binding rather than advisory standards.

Signals & Trends

Tiered Access to High-Risk Models Is Becoming a De Facto Governance Mechanism — Without a Governance Framework

Anthropic's decision to release Mythos first to a curated list of large institutional clients (Amazon, Apple, Microsoft, then UK banks) is functioning as a private risk-management mechanism in the absence of any public regulatory framework for tiered access to frontier models. No jurisdiction has yet established legally binding criteria for who may access a model assessed as too dangerous for general release, what due diligence is required, or what liability attaches to institutional deployers. The UK is prepared to allow bank access while the AI Safety Institute remains publicly silent on its Mythos risk assessment, and the US administration is engaging with Anthropic ad hoc rather than through any formal procurement or oversight channel. Together, these choices suggest that tiered access is hardening into a governance norm by default. Policymakers who want to influence this architecture need to act before commercial practice is fully entrenched.

National Security Framing Is Reshaping AI Governance Dynamics Faster Than Legislative Processes

The Mythos episode demonstrates that national security anxiety — not legislative mandates or regulatory timelines — is the fastest-moving driver of government AI policy in 2026. The White House's reversal on Anthropic, the IAPS national security framing of cyber-capable AI, and the finance sector's simultaneous appetite for and concern about Mythos access all reflect a governance environment where threat perception is outpacing institutional design. This is creating a pattern where executive branch agencies and intelligence community assessments are driving AI governance decisions that would ordinarily fall within legislative or regulatory competence. The risk is not that governments are ignoring AI risk, but that the security framing crowds out rights-based and competition-based governance considerations that require slower, more deliberative processes.

AI-Enabled Fraud in Regulatory and Legal Processes Is Emerging as an Enforcement Priority

Two separate developments this week point to AI being weaponised within formal regulatory and legal systems. In the London nightclub case, a businessman pleaded guilty to using AI-generated fictitious complainants to manipulate a Licensing Act process, which the Metropolitan Police characterised as a 'growing issue'. Separately, the Australian Federal Court's new AI rules were driven partly by concern about AI-generated errors in legal submissions. These are not isolated incidents but early indicators of a systemic vulnerability: the integrity of administrative and judicial processes that depend on authentic submissions and accurate representations is being eroded by generative AI at scale. Regulators running public consultation processes, licensing authorities, and courts are the institutions most exposed, and few have detection mechanisms in place.
