The Gist: Public Policy & Governance Brief
Thursday, March 05, 2026
Top Line
Pentagon AI governance vacuum exposed: OpenAI CEO Sam Altman admitted the company has no control over military use of its models, while Anthropic's Dario Amodei has resumed talks with the Pentagon after publicly breaking with it — revealing that no clear framework exists for AI deployment in active combat operations. The Guardian, Financial Times
Trump's data center "pledge" is political theatre, not policy: Seven tech giants signed a non-binding commitment to self-supply electricity for AI infrastructure, but energy market experts say it lacks enforcement mechanisms and won't shield consumers from rate increases — the bipartisan concern it was designed to address. Politico, Wired
Industry mobilises against Anthropic national security designation: Trade groups representing Google, Apple and other majors are urging Trump to reverse his administration's move to label Anthropic a supply-chain risk, arguing the precedent threatens the entire sector — a rare unified front against executive action. Bloomberg
Google forced into structural changes by antitrust enforcement: The company unveiled a new Android app distribution system with rival access and lower developer fees to resolve US litigation and comply with EU requirements — a concrete example of regulatory pressure producing operational change, not just fines. Bloomberg
Senate probes Intel over sanctioned supplier ties: A bipartisan group of six senators demanded information on Intel's relationship with ACM Research, whose subsidiaries remain blacklisted by the Commerce Department for national security reasons — showing Congress is actively enforcing existing export controls while chip subsidies flow. Bloomberg
Key Developments
Military AI Governance Crisis: No One Controls the Tech in Combat
Sam Altman told OpenAI employees Tuesday that the company "does not control how the Pentagon uses their artificial intelligence products in military operations," according to The Guardian. The admission comes as Bloomberg reports US Central Command is "turning to a range of artificial intelligence tools to quickly manage enormous amounts of data for operations against Iran," including targeting decisions. Meanwhile, Anthropic's Dario Amodei has resumed discussions with the Pentagon after the two publicly split over AI safety concerns, according to the Financial Times, though TechCrunch notes the US military continues using Claude models.
The sequence reveals a fundamental gap: commercial AI companies can decide whether to sell to DoD, but once they do, no regulatory framework governs deployment in active operations. OpenAI is scrambling to add "surveillance safeguards" post-hoc, the FT reports, but this is reactive rather than systematic. Amodei previously called OpenAI's messaging around the military deal "straight up lies," per TechCrunch, highlighting competing commercial interests masquerading as safety debates.
Why it matters: The US is deploying AI for lethal targeting decisions in operations against Iran with no public accountability framework — Congress hasn't legislated rules of engagement, DoD hasn't published operational doctrine, and companies admit they have no visibility once models deploy.
What to watch: Whether Senate Armed Services or Intelligence committees demand classified briefings on AI targeting criteria, and if DoD publishes any policy on autonomous weapons thresholds.
Trump's Data Center "Pledge" Tests Limits of Voluntary Agreements
President Trump hosted Google, Meta, Microsoft, Oracle, OpenAI, Amazon and xAI executives Wednesday to sign a "rate payer protection pledge" requiring new data centers to self-supply electricity, according to Politico. The move addresses bipartisan concern about AI infrastructure driving up consumer electricity costs — a rare issue uniting rural Republicans and progressive Democrats.
But as Wired details, energy market experts say the pledge is structurally unenforceable: it's voluntary, contains no compliance mechanisms, and the Federal Energy Regulatory Commission — which actually governs interstate electricity markets — wasn't involved. The Financial Times quotes Trump admitting data centers need "PR help," framing this as reputation management rather than policy.
State-level action is where real regulatory pressure exists: utilities commissions in Virginia, Georgia and Texas are already reviewing data center interconnection rules, and those decisions carry enforceable consequences. This White House event creates cover for continued expansion while states do the actual governing.
Why it matters: It establishes a precedent that symbolic voluntary commitments can substitute for actual regulation — a model the administration may repeat across AI governance domains.
What to watch: Whether state Public Utility Commissions reference this pledge in their proceedings, or whether they proceed with enforceable requirements regardless of federal theatre.
Tech Industry Unites Against Anthropic National Security Label
Trade groups including those representing Google and Apple are urging Trump to reverse the administration's designation of Anthropic as a national security supply-chain risk, Bloomberg reports. The designation appears related to Anthropic's refusal to continue its Pentagon contract over AI safety disagreements — a stance that led OpenAI to swoop in for the deal.
This is the first time the administration has applied a national security designation to an AI company based on refusal to cooperate with military use cases rather than on foreign ownership or technical vulnerabilities. The unified industry response — rather than competitors staying silent while a rival is penalised — suggests fear of precedent: if refusing specific government contracts can trigger supply-chain restrictions, companies lose the ability to set their own acceptable-use policies.
The timing is notable given Trump's Wednesday photo-op with tech CEOs. Industry is clearly willing to sign aspirational pledges but drawing a line when executive power threatens operational autonomy. The specific trade groups involved and their membership rosters matter for understanding which companies are most exposed to similar treatment.
Why it matters: It tests whether the executive branch can use national security authorities to compel private sector participation in military AI programs — a much harder line than export controls or procurement requirements.
What to watch: Whether Commerce Department formally responds or the designation quietly disappears; either outcome sets precedent for how this administration uses national security labels as leverage.
Google's Antitrust Settlement Produces Structural Changes
Google unveiled a new Android app distribution system Wednesday offering easier third-party access and reduced developer fees, Bloomberg reports, to resolve US antitrust litigation and comply with EU Digital Markets Act requirements. Unlike previous agreements that produced only monetary penalties, this represents operational restructuring: competitors can now access Play Store's app catalog, and developers get pricing flexibility.
The dual pressure of US courts and EU regulation produced concrete change where either alone might not have — Google is effectively implementing DMA-style requirements globally rather than maintaining separate systems. This matters as AI governance precedent: it shows that structural remedies, not just fines, are achievable when regulators coordinate pressure points.
The settlement still requires judicial approval, and the devil will be in implementation details: what "access" means technically, what review processes apply, and whether Google can impose terms that recreate control through contracts rather than code. Past consent decrees have foundered on vague language that companies interpret favorably.
Why it matters: First major tech platform forced into structural operational changes by coordinated transatlantic enforcement — a potential model for AI model access and interoperability requirements.
What to watch: Court approval terms and whether EU assesses this meets DMA obligations, or demands additional changes — the latter would show regulatory divergence even when companies try to converge.
Congress Enforces Existing Export Controls While Subsidies Flow
A bipartisan group of six senators — names unreported — pressed Intel for information about its relationship with ACM Research, whose subsidiaries remain on the Commerce Department's Entity List for national security reasons, Bloomberg reports. ACM was blacklisted in 2024 for activities deemed to threaten US security interests.
This represents enforcement of existing authorities rather than new legislation, showing Congress actively monitoring implementation while it debates broader chip policy. Intel is receiving billions in CHIPS Act subsidies, and senators are asserting oversight: federal funding creates leverage for enforcing compliance with export controls and entity list restrictions.
The timing suggests coordination with Commerce Department enforcement staff who likely flagged the relationship. It's also notable that Intel, not a smaller player, is the target — sending a message that subsidy recipients face heightened scrutiny. The bipartisan composition indicates this isn't partisan positioning but genuine security concern.
Why it matters: Demonstrates Congress can use appropriations oversight to enforce AI supply chain security requirements without passing new laws — existing authorities have teeth if legislators actively monitor.
What to watch: Intel's response and whether Commerce formally investigates or issues additional guidance on what supplier relationships are permissible for CHIPS Act recipients.
Signals & Trends
Regulatory pressure converging on platform accountability for AI outputs: The first wrongful death lawsuit against Google over Gemini-induced harm, BBC reports, arrives as regulators in multiple jurisdictions investigate platform AI. Ireland's Data Protection Commissioner contacted Meta over workers viewing intimate videos from AI-enabled glasses, per BBC. These aren't isolated incidents but early tests of whether existing liability frameworks (product liability, data protection) apply to AI harms, or whether new statutory authority is needed. Watch for whether courts require companies to prove safety testing before deployment — shifting from harm response to prevention mandates.
AI transparency requirements emerging piecemeal by sector: Apple Music will add labels distinguishing AI-generated music, TechCrunch reports, though labeling is opt-in for record labels. This follows EU AI Act transparency requirements and California's AB 3211 mandating synthetic media labeling. The pattern is sector-specific transparency rules proliferating faster than comprehensive frameworks — creating compliance complexity but also precedent for what "meaningful transparency" looks like. The opt-in design shows the industry preferring self-regulation to forestall mandates, but inadequate voluntary approaches tend to create pressure for statutory requirements.
US losing leverage over AI governance as Europe leads on enforcement: Google's Android restructuring responds primarily to EU DMA compliance, with US antitrust resolution following. The European Parliament is advancing concrete AI Act implementation guidance while the US debates voluntary commitments, and Chinese companies face supply-chain scrutiny from the EU as well as the US. The governance center of gravity is shifting away from Washington — the US retains military procurement leverage and export control authority, but on commercial AI deployment rules, Brussels is setting global standards. This matters for US companies' strategic planning: designing for EU compliance may be more important than tracking Congressional proposals that rarely become law.