When AI becomes a weapon: safeguarding against AI-driven cyber threats

Artificial intelligence has become a double-edged sword in cybersecurity. The same AI assistants and agents that help defenders are increasingly being exploited by adversaries. Anthropic recently disclosed that its Claude model was manipulated to support fraud and extortion, and even to help suspected North Korean operatives secure jobs inside US tech companies. AI can now assist with nearly every stage of the attack lifecycle, from reconnaissance and credential harvesting to intrusion, data theft and ransom negotiations.

This marks a decisive shift: cybercriminals are no longer experimenting with AI at the margins; they are actively operationalising it, reshaping the trajectory and scale of their attacks. The UK’s National Cyber Security Centre has issued similar warnings, emphasising how AI is making cyber operations faster, more frequent and more effective.

Recent high-profile incidents like the Jaguar Land Rover breaches highlight what’s at stake. AI is increasing both the speed and reach of cybercrime while tilting the balance of power between attackers and defenders. 

Across the criminal ecosystem, AI is being weaponised at speed, while legal and defensive frameworks struggle to keep pace. Security leaders need to adapt their own use of AI technologies without losing sight of the human expertise and oversight that remain essential.

AI is turning against us

AI is altering the tempo of cybercrime at every level. At the entry tier, widely available tools allow low-skilled actors to automate phishing campaigns, reconnaissance tasks, and malware generation. This “democratisation effect” lowers the barrier to entry and increases the baseline threat for every organisation. 

At the more sophisticated end, established groups are building bespoke AI systems to sharpen their operations. Ransomware operators, for example, can analyse open-source data to single out the victims most likely to pay, while cryptocurrency thieves are leveraging AI-driven pattern recognition to pinpoint vulnerable wallets and maximise profits.

Nation-state actors: AI on a strategic stage

For nation-states, AI is a strategic asset. With the resources to train custom models on specific industries, governments can use AI to generate disinformation, run convincing digital personas and streamline the collection of sensitive intelligence.

The geopolitical implications are significant. In its Global Risks 2024 report, the World Economic Forum flagged disinformation as AI’s most pressing short-term threat due to its ability to undermine democracy and fuel unrest. Real-world cases already exist: in 2023, a deepfake audio clip of London’s Mayor, Sadiq Khan, triggered disorder before Armistice Day. Recently, North Korean state-sponsored actors were linked to AI-generated fake IDs crafted for a phishing campaign.

Such techniques complicate attribution. Language models can mask linguistic cues, while AI-generated infrastructure conceals technical fingerprints. Uneven global regulation, where some countries build defensive AI while others double down on offensive capability, only widens this imbalance. 

Human-machine teaming for defence

Defenders must adapt just as quickly as threat actors. The most resilient defensive approach is hybrid: letting AI handle scale and efficiency while humans provide strategic judgment, creativity and ethical oversight. 

This requires investment in both offensive and defensive AI, paired with human-led scenario planning, red teaming and governance. Success depends on setting realistic goals, planning careful rollouts and continuously iterating as both threats and technologies evolve. How well an organisation balances the benefits of automation against the complexity of implementation will determine overall programme success.

Closing the gaps through industry collaboration

No single enterprise can confront this challenge alone. Greater cross-industry collaboration is critical, supported by robust standards and regulatory frameworks. Sharing intelligence on AI-enabled attacks is vital, though competitive pressures and compliance barriers often discourage openness.

Professional bodies must take the lead in creating AI-specific security practices that balance innovation with risk mitigation. Certification schemes may help, but they must evolve quickly to avoid irrelevance in the face of rapid AI advances. 

Combatting AI cyber threats

AI has irreversibly changed the cybersecurity landscape. Criminal groups are innovating at speed, while nation-states deploy AI as part of wider intelligence and influence operations. Organisations that tactfully combine AI with human expertise can turn the tide and gain a strategic edge. 

The key is neither surrendering to over-automation nor dismissing AI as mere hype. This is an arms race in which both attackers and defenders are evolving in real time. The organisations that withstand these threats will be those that build adaptive, resilient programmes, collaborate widely and ensure that human expertise remains at the centre of their strategy.

Giles Inkson, Director of Red Team & Adversary Simulation, NetSPI

Giles Inkson

Giles Inkson is Director of Red Team & Adversary Simulation at NetSPI. He leads global red team initiatives, partnering with Fortune 500 and globally respected companies to transform security vulnerabilities into opportunities for enhanced resilience. 
