When Anyone Can Launch a Cyberattack or Deploy a Deepfake: The Accessibility Threshold Has Been Crossed
Two independent findings this week point to the same pattern. North Korean actors with previously mediocre technical capability used AI tools to generate functional malware, build social engineering infrastructure, and steal $12 million in three months. Separately, frontier models tested on phishing and social engineering tasks performed at levels security researchers found alarming. At the same time, MIT Technology Review documents that deepfake weaponisation has moved from theoretical to operational, driven not by a capability breakthrough but by the combination of quality improvement and commoditisation: free or cheap tools usable without specialist skill. The driver is the same in every case: accessibility, not sophistication.
Existing enterprise security, fraud, and identity verification frameworks were calibrated on the assumption that serious AI-enabled attacks required serious technical resources. That assumption no longer holds. Authentication systems, executive communication protocols, and media verification workflows all require reassessment against a threat actor population orders of magnitude larger than the state-level adversary set that shaped current defensive postures. The defence gap is structural, not a product roadmap question: no reliable real-time detection exists for deepfakes of the quality now producible with consumer-accessible tools, and AI-assisted social engineering operates at a speed and scale that outpace most detection and response cycles.
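What that reassessment means in practice: treat voice and video as spoofable channels, the same way email already is, and gate high-risk requests behind out-of-band confirmation on a pre-registered second channel. The sketch below is a minimal illustration of that policy shift, not an implementation from any specific framework; the Request type, channel names, and dollar threshold are all hypothetical.

```python
# Minimal sketch of an out-of-band verification gate for high-risk requests.
# All names here (Request, requires_out_of_band, the threshold) are
# illustrative assumptions, not a real security framework's API.
from dataclasses import dataclass


@dataclass
class Request:
    requester: str    # claimed identity, e.g. "CFO" -- claimed, not proven
    channel: str      # channel the request arrived on
    amount_usd: float # financial exposure of the request


# Channels whose authenticity can no longer be assumed at consumer-grade
# deepfake quality: identity claims made over them count as unverified.
SPOOFABLE_CHANNELS = {"email", "voice", "video"}

OOB_THRESHOLD_USD = 10_000  # illustrative policy threshold


def requires_out_of_band(req: Request) -> bool:
    """Return True when the request must be confirmed on a second,
    pre-registered channel (e.g. a callback to a number on file)."""
    return req.channel in SPOOFABLE_CHANNELS and req.amount_usd >= OOB_THRESHOLD_USD


if __name__ == "__main__":
    wire = Request(requester="CFO", channel="voice", amount_usd=250_000)
    # True: a voice call alone no longer authenticates the requester.
    print(requires_out_of_band(wire))
```

The design point is the membership of SPOOFABLE_CHANNELS: pre-deepfake policies effectively contained only "email", and the structural change is that voice and video now belong in the same set.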