Cyber threat actors are increasingly using artificial intelligence tools to improve the scale and complexity of their attacks, according to new research from the Google Threat Intelligence Group. The report shows that both state-backed and criminal groups are now integrating AI into almost every stage of a cyberattack, from reconnaissance to data exfiltration.

In earlier years, threat actors typically used AI to assist with basic tasks such as writing phishing messages or collecting public information about targets. Google’s latest analysis indicates that these activities have evolved: attackers are now embedding AI directly into malware, attack infrastructure, and command-and-control operations. The report calls this shift “a new operational phase of AI abuse.”

One of the key findings involves the rise of AI-enabled malware that queries large language models as it executes. Google identified a malware family called PROMPTFLUX that connects to an AI model and rewrites its own code at regular intervals. This constant modification helps it evade traditional antivirus tools that depend on fixed signatures. Another example, named PROMPTSTEAL, uses an AI model to generate data-theft commands on the fly instead of relying on prewritten scripts.

This represents a major escalation in how cyber operations function. Artificial intelligence is no longer simply a support tool for attackers; it has become an active component of their campaigns. Google’s researchers note that some of these tools can generate new code or adapt their behavior in real time based on environmental feedback.

The underground market for AI-enabled cyber tools is also expanding. Google found advertisements on Russian and English-language forums offering AI-powered phishing kits, malware creation programs, and automated scanning tools. Many are promoted using professional marketing language that promises speed, efficiency, and scale. The accessibility of these services allows less skilled criminals to carry out complex operations that were previously limited to advanced hacking groups.

State-sponsored actors are another focus of the report. Google identified campaigns from North Korea, Iran, and the People’s Republic of China that rely heavily on AI systems. These groups are using generative AI across the full attack lifecycle, including reconnaissance, phishing, exploitation, and data theft.

For example, the China-linked group APT41 has used Google’s Gemini model to write code in C++ and Go, create obfuscated command-and-control frameworks, and develop cloud-based attack infrastructure. An Iran-based actor known as APT42 reportedly used an AI model to turn natural language queries into database searches, allowing it to link phone numbers to individuals and monitor travel patterns. A North Korean group identified as UNC1069 used Gemini to create Spanish-language phishing attachments and to develop tools for stealing cryptocurrency.

The report also details how threat actors manipulate AI safeguards to obtain restricted information. Some attackers pose as researchers, students, or cybersecurity competition participants to make their requests appear harmless. By doing so, they bypass safety systems and receive technical instructions for creating phishing kits, malicious scripts, or web shells. Google notes that these tactics take advantage of weaknesses in how AI systems interpret intent.

The growth of AI-driven attacks presents serious challenges for defenders. Traditional detection methods based on known malware samples, static signatures, or predictable patterns are ineffective against code that changes constantly or relies on real-time prompts from external AI models. Google warns that this evolution makes it significantly harder for security teams to detect and respond to active threats.
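
To see why, consider a minimal sketch (an illustration, not a technique described in the report): signature matching keys on an exact fingerprint of a sample’s bytes, so once a payload rewrites any part of itself the stored fingerprint no longer matches, even though the underlying behavior is unchanged.

```python
import hashlib

def matches_known_signature(sample: bytes, known_hashes: set) -> bool:
    """Static detection: flag a sample only if its exact SHA-256 hash is already catalogued."""
    return hashlib.sha256(sample).hexdigest() in known_hashes

# A previously catalogued payload is caught...
catalogued = {hashlib.sha256(b"payload: original variant").hexdigest()}
print(matches_known_signature(b"payload: original variant", catalogued))   # True

# ...but after even a trivial self-rewrite the hash changes, so the static
# signature no longer matches anything in the catalogue.
print(matches_known_signature(b"payload: rewritten variant", catalogued))  # False
```

Behavior-based monitoring, such as watching for unexpected outbound calls to hosted model APIs, targets what the code does rather than what its bytes look like, which aligns with the report’s advice to watch for abnormal behavior.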

At the same time, the wider availability of AI-enabled tools is lowering the barrier to entry for cybercrime. Sophisticated capabilities are no longer exclusive to advanced persistent threat groups. Subscription-based services and prebuilt AI scripts allow inexperienced criminals to launch credible attacks with minimal technical knowledge. This has increased both the frequency and variety of attacks targeting governments, businesses, and individuals.

Google recommends several defensive measures to address these developments. Organisations should enforce strong authentication, including multi-factor verification, and apply the principle of least privilege across user accounts. They should patch known vulnerabilities quickly and strengthen monitoring tools to identify abnormal behavior. Google also advises security teams to expand their incident response plans to cover AI-enabled threats and to integrate threat intelligence feeds that track AI-related activity in underground markets.
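
As a concrete sketch of the monitoring point, and only as an illustration (the log format, endpoint list, and process allow-list below are assumptions, not values from the report), defenders could flag outbound connections to hosted generative AI endpoints that come from processes outside an approved set, since the malware families described earlier reach out to external models at runtime.

```python
from dataclasses import dataclass
from typing import List

# Example endpoint list only; tailor it to the model services seen in your environment.
GENAI_API_DOMAINS = {
    "generativelanguage.googleapis.com",
    "api.openai.com",
    "api.anthropic.com",
}

# Processes expected to reach these services (assumed allow-list for this sketch).
APPROVED_PROCESSES = {"chrome.exe", "msedge.exe"}

@dataclass
class ConnectionEvent:
    host: str         # machine that generated the event
    process: str      # image name of the connecting process
    destination: str  # DNS name the process connected to

def flag_unexpected_genai_traffic(events: List[ConnectionEvent]) -> List[ConnectionEvent]:
    """Return outbound connections to generative AI endpoints made by unapproved processes."""
    return [
        e for e in events
        if e.destination in GENAI_API_DOMAINS and e.process not in APPROVED_PROCESSES
    ]

if __name__ == "__main__":
    sample_log = [
        ConnectionEvent("ws-14", "chrome.exe", "api.openai.com"),
        ConnectionEvent("srv-03", "updater.exe", "generativelanguage.googleapis.com"),
    ]
    for event in flag_unexpected_genai_traffic(sample_log):
        print(f"review: {event.process} on {event.host} -> {event.destination}")
```

A heuristic like this complements, rather than replaces, the authentication, patching, and least-privilege controls listed above.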

The report highlights that defenders must treat AI-enabled threats as an immediate risk rather than a future concern. The combination of generative AI and cyber operations allows attackers to automate reconnaissance, exploit vulnerabilities faster, and produce adaptive malware that changes in real time. This requires organisations to adopt proactive detection, continuous learning, and collaboration with trusted intelligence providers.

Although some AI-enabled malware identified by Google remains in the testing stage, the trend is clear. The use of artificial intelligence in cyberattacks is accelerating, and the gap between legitimate and malicious use is narrowing. The report calls on defenders to prioritise visibility into attacker workflows, train analysts to recognise AI-related indicators, and design resilient systems capable of withstanding adaptive threats.

Artificial intelligence is now an integral part of the global cyber threat landscape. From dynamic, self-modifying malware to automated phishing operations, attackers are reshaping how intrusions unfold. Security teams that build awareness of AI abuse, strengthen operational defences, and monitor the evolution of AI-enabled tools will be better prepared to respond to this fast-moving threat environment.
