Governments Are Buying AI Before They Can Govern It
Three distinct governance failures surfaced in the same news cycle across two continents. UK MPs warned that Palantir received identifiable NHS patient data before adequate data governance was in place. CDT and EPIC filed formal objections to HUD's AI expansion on identical grounds — capabilities deployed ahead of privacy safeguards. And CDT's analysis of the Pentagon-Anthropic procurement dispute warns of cascading effects on civilian agencies that built workflows around a supplier now embroiled in a high-profile contract controversy. The convergence is not coincidental: all three cases reflect the same sequencing error, where institutional urgency to demonstrate AI adoption overrides the foundational governance work that should precede data access grants.
The structural consequence is a coming wave of retrospective parliamentary, congressional, and inspector-general scrutiny of decisions made in 2024 and 2025. That the EU AI Act's transparency guidelines are only now entering their first public consultation signals that even the world's most advanced regulatory framework is still translating political text into enforceable compliance standards, and that member state enforcement capacity is still being built. The lesson for any public sector institution still planning AI deployments is unambiguous: accountability frameworks must precede, not follow, commercial data access, because remediation after operational dependencies are established is both costly and politically treacherous.