Microsoft Warns of AI vs. AI Cyber Warfare as Threats Escalate

THURSDAY, OCTOBER 23, 2025

New 'Digital Defense Report 2025' reveals AI is driving 'autonomous malware' and sophisticated human manipulation; Thailand a top target

  • Microsoft's latest report warns that cybersecurity has become an "AI vs. AI" battleground, where AI is a "dual-use" tool accelerating both attack complexity and defense capabilities.
  • Cybercriminals are using generative AI to create "autonomous malware" that can self-modify its code to evade detection and automatically optimize its attacks on a victim's network.
  • AI is also being weaponized for sophisticated social engineering, including hyper-realistic deepfakes for impersonation scams and large-scale bot networks to manipulate human perception.
  • The defensive response involves using AI-driven security platforms to automatically detect, analyze, and respond to these automated threats, effectively fighting AI with AI.

Generative AI technology is rapidly redefining the cybersecurity battleground, transforming it into an arena where defensive and offensive AI systems clash, according to Microsoft's latest Digital Defense Report 2025.

 

The technology has become a dangerous 'dual-use' tool, accelerating both attack complexity and defence capability.

 

Microsoft's report indicates that cyber threats remain overwhelmingly focused on "financial gain", making every organisation and individual a potential target.

 

Thailand now ranks 11th in the Asia-Pacific region and 29th globally among countries affected by these attacks.

 

The arrival of sophisticated AI tools is not only amplifying existing threats but also creating complex new vulnerabilities.

The Dual Threat

1. Automated System Attacks

While basic defences such as Multi-Factor Authentication (MFA) remain crucial, given that more than 7,000 account attacks occur every second, a far greater concern is the emergence of "autonomous malware."

 

Cybercriminals are integrating Generative AI into their malicious tools, allowing malware to:

 

  • Self-Modify Code: The AI enables malware to write new attack code on the fly, effectively evading traditional signature-based detection without waiting for human input.

  • Analyse and Optimise: AI allows the malware to analyse the environment it infiltrates and automatically select the most effective method to exploit discovered weaknesses.

 

Furthermore, AI is enhancing Advanced Persistent Threats (APTs), helping attackers scan for weaknesses and move more stealthily within a victim’s network, drastically reducing the chances of human analysts detecting the infiltration.



2. Manipulation and Social Engineering

AI is also being weaponised to attack human perception and trust:

  • AI Impersonation: Criminals use AI to create hyper-realistic deepfakes of voices or images, impersonating colleagues or family members to execute highly credible scams aimed at stealing data or assets.

  • Disinformation and Hallucinations: Even benign AI tools can produce information that is "distorted from facts" (hallucinations). Users who fail to verify this output can inadvertently cause significant organisational damage.

  • AI-driven Manipulation: Attackers use large networks of AI bots to create artificial consensus or the appearance of unified public opinion, designed to psychologically manipulate and persuade victims. Cross-referencing information is now a critical human skill.


The Response: AI-Driven Security

To counter AI-driven threats, the defensive strategy must also rely on advanced AI. Microsoft is championing "AI-driven Security Platforms" to automatically detect, analyse, and respond to these automated attacks.

 

Tools such as Microsoft Security Copilot exemplify this shift, applying Generative AI directly to security operations. This support is already proving effective, reducing the workload for Security Operations Center (SOC) analysts and cutting response times by up to 30%.

Microsoft argues that organisations must shift away from reacting to attacks and instead adopt a "Security-by-Design" mindset. This philosophy, central to Microsoft’s Secure Future Initiative (SFI), focuses on embedding security into products from the earliest design stages and sharing vulnerability insights to strengthen global defences.