Public Policy & Governance
Top Line
UK water sector acquisitions will now require government national security screening, while commercially available AI systems are being removed from mandatory investment screening lists — a significant policy divergence that signals differentiated risk assessment frameworks for critical infrastructure versus technology sectors.
The UK's Treasury committee has launched an inquiry into student loans amid what its chair calls a 'perfect storm' for young people, as broader economic pressures compound concerns about technology-driven workforce transformation.
At least 11 African governments have spent over $2 billion on Chinese-built AI-powered mass surveillance systems that experts warn are violating citizens' privacy rights and chilling civil society, marking the most significant deployment of state surveillance infrastructure on the continent.
UK fraud prevention body Cifas reports AI-enabled scams drove fraud cases to a record 444,000 in 2025, with criminals using AI tools to execute account takeovers at what the organisation describes as 'industrialised' scale.
Key Developments
UK Diverges on National Security Screening: Critical Infrastructure In, Commercial AI Out
The UK government will now refer all water sector takeovers for mandatory national security screening, while simultaneously removing commercially available AI systems from the mandatory investment screening list, according to the Financial Times. This dual-track approach draws a clear distinction between physical infrastructure risks and technology sector oversight. The move comes as part of a broader government initiative to reduce the regulatory burden on technology investment while tightening controls on strategic physical assets. Water sector screening had previously been discretionary, but concerns over critical infrastructure vulnerability — particularly following repeated cyber incidents and quality failures — have prompted mandatory review.
The removal of commercial AI from screening suggests the government is attempting to signal openness to AI investment while maintaining control over genuinely sensitive capabilities. However, the policy creates ambiguity around what constitutes 'commercially available' versus custom or dual-use AI systems. Defence and intelligence AI applications presumably remain subject to screening, but the boundary is not clearly delineated in available reporting. This distinction matters as companies increasingly offer both commercial and bespoke AI services, and as foundation models trained for civilian use are routinely adapted for sensitive applications. The policy adjustment appears designed to address investor concerns about regulatory friction in AI development, but may create implementation challenges for the Investment Security Unit tasked with making case-by-case determinations.
African Mass Surveillance Expansion: $2 Billion Chinese Infrastructure Deployment
Eleven African governments have collectively spent over $2 billion acquiring Chinese-built AI-powered mass surveillance infrastructure, including facial recognition systems and tracking technology, according to a new report cited by The Guardian. Human rights and technology experts characterise the deployments as 'invasive' and warn they violate citizens' privacy rights while producing chilling effects on civil society activity. The report — timing and authoring organisation not specified in available coverage — represents the first comprehensive accounting of surveillance infrastructure investment across the continent, revealing a pattern of technology transfer that has received minimal legislative oversight or public consultation in recipient countries.
The systems predominantly involve Chinese vendors and technology platforms, suggesting these deployments are part of broader technology export relationships rather than procurement from diverse international suppliers. This raises questions about interoperability, data sovereignty, and whether surveillance capabilities are bundled with other infrastructure projects under frameworks like the Belt and Road Initiative. Critically, the report's authors assert that surveillance measures are not 'necessary or proportionate' — invoking language from international human rights frameworks that require justification for rights limitations. However, no specific enforcement mechanisms or international accountability processes are identified in available reporting, indicating these systems operate in a governance vacuum where domestic checks are weak and international norms lack enforcement.
UK Regulator Demands Age Verification Strengthening as Enforcement Phase Begins
UK communications regulator Ofcom has formally requested that Instagram, Snapchat, TikTok, YouTube, and Roblox strengthen age verification systems for users under 13, according to BBC News. The regulator stated these platforms are not 'putting children's safety at the heart of their products' — unusually direct language suggesting Ofcom may be preparing enforcement action under the Online Safety Act. This marks the transition from rule-making to active regulatory enforcement, as Ofcom moves beyond consultation to demanding concrete compliance measures. The timing is significant: the Online Safety Act's children's safety provisions came into force for the largest platforms in late 2025, meaning companies have had several months to implement the required protections.
The demand for 'tougher' age checks suggests Ofcom has determined current verification methods are insufficient, though specific technical requirements are not detailed in available reporting. Age verification remains technically and politically contentious, with industry arguing that effective verification requires collecting additional personal data or implementing government identification checks that themselves raise privacy concerns. Australia recently implemented a social media age ban for under-16s, and Bloomberg reports teenagers are circumventing those restrictions using VPNs and deceiving age verification systems. This demonstrates the enforcement challenges Ofcom will face even if it mandates stricter technical measures. The UK regulator has enforcement powers including substantial fines, but this is the first major test of whether the Online Safety Act's regulatory architecture can compel behavioural change from dominant platforms.
AI-Enabled Fraud Reaches Record Scale in UK as Industrialised Scams Proliferate
UK fraud cases reached a record 444,000 in 2025, driven substantially by criminals using AI technology to execute account takeover scams at what fraud prevention organisation Cifas characterises as 'industrialised' levels, according to The Guardian. The fraud prevention body, which maintains the UK's national fraud database, specifically identified AI tools as enabling large-scale deception through automated targeting of mobile, banking, and online shopping accounts. The characterisation of fraud as 'industrialised' suggests criminal operations have moved beyond individual scam attempts to systematic, automated exploitation of identity verification and authentication systems.
AI is reportedly being used to enhance multiple fraud vectors: generating convincing phishing content, automating credential stuffing attacks against accounts with reused passwords, creating synthetic identities that pass initial verification checks, and potentially defeating voice-based authentication through deepfake audio. Separately, The Guardian reports authors are being targeted by AI-powered accounts promising exposure and reviews, while another Guardian article details publishing scams using AI to automate literary fraud operations. The fraud ecosystem is clearly adapting AI tools faster than defensive measures are being implemented. Cifas has no regulatory authority — it is an industry membership organisation — meaning its findings carry weight for identifying trends but do not trigger direct enforcement action. The scale of reported fraud suggests existing authentication and verification systems designed for human-speed attacks are failing against AI-accelerated operations.
Signals & Trends
Regulatory Arbitrage Emerges Between Technology and Physical Infrastructure Governance
The UK's decision to tighten screening for water sector acquisitions while loosening it for commercial AI creates a two-tier approach to national security risk that may not reflect actual threat models. AI systems are increasingly embedded in critical infrastructure operations, meaning a 'commercial AI' investment could provide indirect access to sensitive systems without triggering screening. This policy gap suggests governments are still treating technology as a separate domain rather than recognising its integration into physical infrastructure. Other jurisdictions are likely watching to see if the UK's approach creates investment advantages without compromising security — or if it produces vulnerabilities that prompt policy reversal. The trend indicates that investment screening frameworks designed for industrial-era assets may not map cleanly onto dual-use technology capabilities.
Enforcement Capability Gap in International AI Governance Norms
African surveillance infrastructure deployments highlight a recurring pattern: international norms around AI governance and human rights exist, but enforcement mechanisms are largely absent when domestic rule of law is weak. The $2 billion investment proceeded without triggering international accountability processes, conditionality from development institutions, or coordination among donor governments that fund many recipient country budgets. This suggests that absent binding treaty obligations with enforcement mechanisms — which do not exist for AI surveillance — technology transfer will continue regardless of rights implications. Governance norms articulated in forums like the UN, OECD, and Council of Europe remain aspirational for countries outside these bodies' direct membership and jurisdiction. The trend suggests future AI governance debates need to focus less on consensus norm-building and more on concrete enforcement architecture with real consequences.
Age Verification Becoming Battleground Issue as Implementation Reality Hits
The UK, Australia, and other jurisdictions are moving from age verification proposals to enforcement demands, revealing fundamental tensions between child safety objectives and technical reality. Australian teenagers are already circumventing that country's age restrictions, demonstrating that verification mandates will fail unless they also address VPN use, device spoofing, and identity borrowing. Ofcom's demand for stronger verification from major platforms suggests the regulator has concluded initial compliance efforts are performative rather than effective. However, effective age verification typically requires invasive identity checks that conflict with privacy norms and create new data risks. This collision between policy intent and implementation reality is forcing regulators to either accept weak verification theatre, mandate privacy-invasive identity systems, or acknowledge that age restrictions cannot be technically enforced at scale. The outcome will set precedents for what level of identity verification governments can demand from digital services — with implications far beyond child safety to questions of anonymity, privacy, and surveillance architecture embedded in everyday digital services.