Voluntary AI Safety Frameworks Collapse Under Government Coercion
The simultaneous crises at OpenAI and Anthropic reveal that corporate AI safety commitments cannot withstand direct government pressure once that pressure is framed as a national security imperative. Kalinowski's resignation over inadequate deliberation on Pentagon use cases demonstrates that internal governance structures (preparedness teams, board oversight, constitutional AI principles) can be overridden by commercial and defense relationships. Anthropic's standoff with the DoD over maintaining safety restrictions in classified deployments has escalated to the point where the Pentagon has deployed political operatives rather than legal frameworks to force compliance. These parallel confrontations establish that voluntary safety measures function only in commercial contexts where companies retain autonomy; they carry no binding force when governments assert national security requirements.
This collapse creates a bifurcation in the AI industry. Labs must choose between maintaining safety-first cultures that attract top technical talent but limit government revenue and pursuing defense contracts that generate income but trigger departures of researchers who joined specifically for safety commitments. The strategic implication is that the US may face a choice between relying on frontier capabilities developed by companies unwilling to support military applications and accepting that leading labs will abandon the safety positioning that differentiated them commercially.