Accountability Structures Cannot Keep Pace with AI Deployment Speed
Three simultaneous governance failures crystallised this week across distinct domains. The CDT audit documents federal agencies deploying AI in consequential decisions — immigration adjudication, law enforcement support, benefits processing — with no binding enforcement mechanism to compel the safeguards the OMB guidance nominally required. The Pentagon's $54 billion autonomous warfare request proceeds without updating its 2020 AI ethics principles or Directive 3000.09, despite an operational context that bears no resemblance to the doctrine still in force. And the Trump administration's retreat from its own legal dispute with Anthropic confirms that 'national champion' logic is now overriding the enforcement calculus for frontier AI companies with perceived strategic value — effectively creating a two-tier system in which scale and proximity to government determine accountability exposure.
The enforcement vacuum is being partially filled from unexpected directions. Florida's criminal probe of OpenAI establishes state attorneys general as the most aggressive accountability actors currently active in the US system — a pattern that mirrors how state-level enforcement drove tobacco and opioid accountability when federal agencies stayed passive. Meanwhile, the Metropolitan Police's discussions with Palantir test whether UK data protection and procurement law are adequate governance instruments for high-risk AI adoption in the absence of specific AI legislation. The common thread across these cases is that the gap between stated policy and enforceable accountability is widening faster than any single jurisdiction is moving to close it.