Public Policy & Governance
Top Line
Anthropic filed federal lawsuits against the Department of Defense challenging its unprecedented designation as a supply-chain risk, a move the company says could cost it billions in revenue and that drew supporting filings from AI researchers at OpenAI and Google.
A Guardian investigation exposed the UK government's AI infrastructure drive as heavily reliant on phantom investments, including sites announced as datacentres that remain scaffolding yards and chips counted multiple times across separate funding announcements.
Canada lifted its security-based ban on TikTok, allowing the platform to continue operating in the country, and offered no public explanation for the reversal.
Florida Governor DeSantis directed state agencies to partner with the Future of Life Institute to develop AI harm reporting mechanisms and crisis counselor training, representing one of the first state-level regulatory responses to AI companion app risks.
Key Developments
Anthropic launches legal challenge to Pentagon supply-chain risk designation
Anthropic filed two federal lawsuits on Monday challenging the Department of Defense's decision to label the AI firm a supply-chain risk, alleging the designation was unlawful and violated its First Amendment rights. The suits escalate a monthslong dispute over acceptable military use cases for the company's Claude AI system. According to Bloomberg, the company claims the designation threatens billions in revenue as potential customers paused deal negotiations following the announcement. Politico reports the designation prevents Anthropic from working with government entities and has wider business implications beyond federal contracts.
In a rare show of cross-company solidarity, nearly 40 employees from OpenAI and Google DeepMind filed an amicus brief supporting Anthropic's lawsuit within hours of its filing. According to TechCrunch, signatories include Jeff Dean, Google's chief scientist and Gemini lead, signalling concern within the AI research community about government overreach. The Verge notes the brief details concerns over the Trump administration's approach to regulating AI companies. A Bloomberg report indicates a Pentagon official sees little chance of resuming negotiations with Anthropic following the legal challenge, suggesting the rift may be permanent.
UK government AI investment claims collapse under scrutiny
A Guardian investigation revealed the UK government's multibillion-pound AI infrastructure drive relies on phantom investments, misleading accounting, and announcements of facilities that do not yet exist. The investigation focused on two Nvidia-backed companies central to government plans: Nscale and HyperStack. An Essex site announced as a supercomputer facility remains a scaffolding yard, with planning permission filed only after the Guardian's inquiries. More significantly, the investigation found chips were counted multiple times across different government funding announcements, and rental arrangements were presented as capital investments. The Guardian details how some companies achieved markups as high as 350,000% by immediately reselling chips acquired through government programmes.
The revelations coincide with Nscale raising $2 billion at a $14.6 billion valuation and appointing former Meta executives Sheryl Sandberg and Nick Clegg to its board, as reported by The Guardian and Financial Times. The juxtaposition of the fundraising success with the missing infrastructure raises questions about whether investors are betting on government promises rather than operational assets. TechCrunch describes this as the largest deal of its kind for a European tech startup, though the company's actual operational footprint remains unclear.
Canada reverses TikTok ban without explanation
Canada announced it will allow TikTok to continue operating in the country, reversing a previous government order that had required the social media company to close its Canadian division for security reasons. Bloomberg reports the reversal came with no public explanation. The original ban rested on national security concerns over TikTok's ownership by the Chinese company ByteDance, concerns other Five Eyes nations continue to cite in their own restrictions on the platform. Lawfare covered legal challenges to the US TikTok sale mandate in recent days, highlighting the divergent approaches across allied nations.
Florida establishes first state-level AI harm reporting system
Governor Ron DeSantis directed Florida state agencies to partner with the Future of Life Institute to develop a Crisis Counselor Training Curriculum and a statewide AI Harms Reporting Form targeting dangerous AI companion applications, according to the Future of Life Institute. The directive represents one of the first concrete state-level regulatory responses to documented harms from AI companion apps, which have been linked to mental health deterioration in vulnerable users. The collaboration bypasses federal regulatory frameworks, instead establishing Florida-specific mechanisms for reporting and responding to AI-related incidents. The initiative follows growing concern about unregulated AI applications marketed to minors and vulnerable populations, though the announcement provides limited detail on enforcement mechanisms or mandatory reporting requirements.
Signals & Trends
Cross-company AI researcher solidarity signals concern over government overreach
The rapid mobilisation of AI researchers from competing companies to file an amicus brief supporting Anthropic marks a significant departure from normal industry competition. That Google DeepMind's chief scientist publicly backed a competitor against the Pentagon suggests researchers view the supply-chain risk designation as an existential threat to their ability to set technical boundaries around government use of their systems. This solidarity may indicate researchers fear any company could face similar punishment for refusing government demands, fundamentally shifting the power dynamic between AI labs and national security agencies.
Infrastructure announcements diverging from operational reality across multiple jurisdictions
The UK phantom investments investigation fits a broader pattern of governments announcing AI infrastructure projects that fail to materialise as described. Similar gaps between announced and delivered capacity have emerged in US datacentre projects and Gulf region AI investments. The pattern suggests governments are under political pressure to demonstrate AI leadership through headline-grabbing announcements, but lack the procurement oversight or technical expertise to verify what companies are actually building. This creates accountability gaps where billions in public investment or incentives flow toward projects that exist primarily in press releases.
Policy reversals without explanation undermine regulatory credibility
Canada's unexplained TikTok reversal follows a pattern of governments walking back AI and technology restrictions without publishing the assessment that justified the change. When security-based bans are reversed with no explanation of what changed in the threat landscape, it suggests the original decision may have been political rather than evidence-based, or that subsequent political or legal pressure overrode the security assessment. It also makes it difficult for companies to plan around regulatory requirements when decisions can be reversed without a transparent process.