Security researchers have identified a campaign in which hackers with ties to China used artificial intelligence to conduct large-scale cyber espionage and data theft. The effort, which targeted organisations in multiple countries, showed a high degree of automation and involved minimal human oversight. Analyst firms said the incident may mark a turning point in how state-backed actors conduct digital spying.
According to reports, the attackers employed an AI model that handled most of the operational tasks, from scanning systems to extracting data and directing malware. Human operators intervened only at key stages to approve or redirect the process. The automation allowed the actors to run a far broader set of operations in far less time than a traditional campaign would require. One researcher described how the threat actor “clicked one button and then let the system execute” the remainder of the attack chain.
Targets included major corporations, government departments and critical infrastructure providers. While the exact number of breaches has not been publicly disclosed, sources said the design of the attack allowed it to pivot quickly from reconnaissance to exploitation once a vulnerability was identified. Investigators noted that the AI model encoded commands and payloads in ways that reduced detection, and data exfiltration often occurred through covert channels disguised as routine network traffic.
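The report does not specify the encodings involved, but one way defenders commonly surface such traffic is by measuring how random a payload's bytes look: encoded or encrypted content has markedly higher entropy than the plaintext protocols it hides in. The following Python sketch illustrates the idea; the flag_suspicious_payload helper and its threshold are illustrative assumptions, not details of the reported campaign.

    import math
    from collections import Counter

    def shannon_entropy(data: bytes) -> float:
        """Bits of entropy per byte: English text sits near 4-5,
        base64-encoded data near 6, encrypted content near 8."""
        if not data:
            return 0.0
        total = len(data)
        return -sum((n / total) * math.log2(n / total)
                    for n in Counter(data).values())

    def flag_suspicious_payload(payload: bytes, threshold: float = 5.5) -> bool:
        """Flag payloads whose byte distribution is too uniform for the
        plaintext channel (HTTP headers, DNS labels) they travel in.
        The 5.5 threshold is illustrative, not a tuned value."""
        return shannon_entropy(payload) >= threshold

    print(flag_suspicious_payload(b"GET /index.html HTTP/1.1"))  # False
    print(flag_suspicious_payload(bytes(range(256)) * 4))        # True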
The automation of espionage tools raises questions about defence preparedness. Traditional security models depend on spotting human-driven behaviours such as phishing, repeated login attempts or unusual account activity. But when the bulk of the activity is driven by AI, with no obvious human operator behind each action, detection becomes more complex. Experts warned that defenders must adapt by applying AI solutions themselves and by improving the monitoring of machine-initiated behaviours.
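One signal defenders discuss in this context is cadence: scripted activity tends to space its actions with machine-like regularity, while humans pause unevenly. A minimal sketch of the idea, with an assumed looks_automated helper and an illustrative threshold rather than a vetted detector:

    from statistics import mean, stdev

    def looks_automated(timestamps: list[float], cv_threshold: float = 0.1) -> bool:
        """Heuristic: near-constant gaps between events suggest a script
        or agent rather than a person. The coefficient of variation
        (stdev / mean of the gaps) is low for machine cadence."""
        if len(timestamps) < 3:
            return False  # too few events to judge
        gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
        m = mean(gaps)
        if m == 0:
            return True  # identical timestamps: certainly not human
        return stdev(gaps) / m < cv_threshold

    print(looks_automated([0.0, 2.0, 4.0, 6.01, 8.0]))  # True: bot-like
    print(looks_automated([0.0, 1.3, 4.8, 5.6, 9.9]))   # False: human jitter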
Implications for global cybersecurity posture
The use of AI by state-linked threat actors reflects a rapid evolution of cyber operations. Increasingly, espionage campaigns leverage large-scale automation, machine learning and streamlined workflows to cut cost and time. While human expertise remains involved, those individuals may shift from executing tasks to supervising and refining AI systems. The result is a more agile operating model for threat actors, one that may challenge existing defence frameworks.
Defence agencies and companies will need to revise their risk assessments to factor in the growing role of AI-driven attacks. Key steps include deploying behavioural analytics that focus on autonomous processes rather than just human user activity, enhancing detection of unusual system commands and segregating sensitive workloads. Organisations are also encouraged to increase cooperation across sectors and with national cyber authorities to share indicators of AI-enabled campaigns before they escalate.
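As an illustration of the "unusual system commands" step, a detector can learn a per-host baseline of commands observed during normal operation and surface anything novel. The sketch below is a deliberate simplification (real detectors also weigh arguments, parent processes and timing); the CommandBaseline class is a hypothetical example, not a named product.

    from collections import defaultdict

    class CommandBaseline:
        """Per-host allowlist learned from a training window; commands
        outside the baseline are surfaced for analyst review."""

        def __init__(self):
            self.seen = defaultdict(set)  # host -> commands observed

        def learn(self, host: str, command: str) -> None:
            self.seen[host].add(command)

        def is_novel(self, host: str, command: str) -> bool:
            """True if this host has never run this command before."""
            return command not in self.seen[host]

    baseline = CommandBaseline()
    for cmd in ("sshd", "cron", "postgres"):
        baseline.learn("db-01", cmd)

    print(baseline.is_novel("db-01", "postgres"))  # False: routine
    print(baseline.is_novel("db-01", "ncat"))      # True: flag for review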
The incident also underscores the geopolitical dimension of cyber espionage. When AI is used to streamline spying operations, state-linked actors can significantly expand their reach while retaining plausible deniability. The ability to launch large numbers of attacks with minimal human oversight raises the cost of attribution and complicates diplomacy. As governments respond, escalation risks may shift from isolated incidents to sustained campaigns that span years.
The deployment of AI-enhanced espionage by China-linked actors represents a milestone in cyber conflict. Automation has enabled quicker, broader and more covert operations. Defenders must follow suit and adjust security strategies to match the speed and scale of these threats.
