Autonomous malware
A class of malware, described by George Kurtz, that uses prompts to interact with an LLM to decide its next actions and can operate autonomously on a compromised system without needing to 'phone home' to a command-and-control (C2) server, making it harder to detect.
First Mentioned
1/26/2026, 2:55:17 AM
Last Updated
1/26/2026, 2:56:40 AM
Research Retrieved
1/26/2026, 2:56:40 AM
Summary
Autonomous malware represents a significant challenge in cybersecurity, particularly in the era of artificial intelligence. AI acts as a double-edged sword, capable of creating sophisticated threats like autonomous malware while also serving as a crucial defense mechanism. This type of malware can exploit vulnerabilities, including zero-day vulnerabilities: security flaws unknown to software developers and not yet addressed by patches. The increasing sophistication of these threats is highlighted by cybersecurity experts, who identify major state-sponsored hacking groups from nations like Russia, China, and North Korea as key players in the escalating cyber conflict. Trends such as the rise of remote work have also introduced new vulnerabilities for corporations, further complicating the cybersecurity landscape.
Referenced in 1 Document
Research Data
Extracted Attributes
Wikipedia
Zero-day vulnerability
A zero-day (also known as a 0-day) is a vulnerability or security hole in a computer system unknown to its developers or anyone capable of mitigating it. Until the vulnerability is remedied, threat actors can exploit it in a zero-day exploit, or zero-day attack. The term "zero-day" originally referred to the number of days since a new piece of software was released to the public, so "zero-day software" was obtained by hacking into a developer's computer before release. Eventually the term was applied to the vulnerabilities that allowed this hacking, and to the number of days that the vendor has had to fix them. Vendors who discover the vulnerability may create patches or advise workarounds to mitigate it, though users need to deploy that mitigation to eliminate the vulnerability in their systems. Zero-day attacks are severe threats.
Web Search Results
- What Is Autonomous Malware? - Lazarus Alliance, Inc.
We're reaching the end of 2025, and looking ahead to 2026, most experts are discussing the latest threats that will shape the year ahead. This year, we're seeing a new, but not unexpected, shift to autonomous threats driven by state-sponsored actors and AI. With that in mind, a new generation of threats, broadly known as autonomous malware, is beginning to reshape how organizations think about cyber risk, detection, and response. These threats don't behave like the malware that defenders have spent decades learning to identify, and that has experts preparing for the new threat landscape. [...] While there have been several examples of malware pushing the line of what is "autonomous," true autonomous malware will share some primary characteristics. In practical terms, autonomous malware is defined by three key capabilities.
### Adaptive Decision-Making
Autonomous malware chooses its next action based on context rather than a preset algorithm. If it detects a strong EDR presence, for example, it may select a stealthier persistence mechanism. If it senses a sandbox, it may delay execution or adopt a different activity until it sees an opening into production systems. This adaptiveness mirrors how AI agents operate: the best action depends on the environment, and the malware continuously evaluates it.
### C2-Independent Operation
[...] Instead of treating the infected environment as a fixed script to step through, autonomous malware treats it as a dynamic environment. It can test multiple escalation paths and choose the one with the lowest likelihood of detection. Some strains leverage reinforcement-learning models for decision-making; others use AI-generated code mutators to evade detection in real time. Attackers can deploy these tools in a decentralized manner, allowing malware agents to operate semi-independently with high-level goals rather than step-by-step instructions.
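The adaptive decision-making described above amounts to branching on observed context instead of following a preset script. A minimal, purely illustrative sketch of that observe-decide pattern (all signal and action names are hypothetical labels invented here; nothing interacts with a real system):

```python
# Purely illustrative: an abstract "decide from context" step, restating the
# article's description as code. Signal names ("sandbox_suspected",
# "edr_strength") and action labels are made up for this sketch.
def next_action(env: dict) -> str:
    """Pick an abstract action label based on observed environment signals."""
    if env.get("sandbox_suspected"):
        return "delay_execution"        # wait out analysis environments
    if env.get("edr_strength", 0) > 7:
        return "low_noise_persistence"  # prefer stealth under strong EDR
    return "default_path"
```

The point for defenders is that the observable action sequence varies per environment, which is why signatures keyed to a fixed sequence of behaviors fail against this pattern and behavior-based detection is emphasized instead.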
- Scalable architecture for autonomous malware detection and ...
This paper describes the design and implementation of an autonomous system for malware detection and defence in software-defined networks (SDNs), scalable on demand. The proposed solution tackles the emerging challenge of identifying and defending against evolving cyber attacks within complex networking architectures by leveraging the programmability of SDNs and the collaborative, privacy-preserving characteristics of federated learning (FL). The design is centred on detecting attacks and preventing them from occurring in the first place, all in real time, built around a strong system architecture and a strict algorithmic process. [...] The architecture combines SDN's centralized management of potentially significant data streams with FL's decentralized, privacy-preserving learning capabilities in a distributed manner adaptable to varying time and space constraints. This enables a flexible, adaptive detection and prevention approach in large-scale, heterogeneous networks. Using balanced datasets, the authors observed detection rates of up to 96% for controlled DDoS and Botnet attacks. However, in more realistic simulations that utilized diverse, imbalanced real-world datasets (such as CICIDS 2017 and UNSW-NB15) and complex scenarios like data exfiltration, the performance [...]
Citation: Ranpara, R., Patel, S.K., Kumar, O.P. et al. Scalable architecture for autonomous malware detection and defense in software-defined networks using federated learning approaches. Sci Rep 15, 30190 (2025).
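The key privacy property of the federated-learning design above is that each network node trains a detection model locally and shares only model weights, never raw traffic. The core aggregation step, federated averaging, is simple: average the clients' weight vectors, weighted by how much local data each trained on. A minimal sketch under that assumption (function and variable names are ours, not the paper's):

```python
def fed_avg(client_weights, client_sizes):
    """One federated-averaging step over per-client model weight vectors.

    client_weights: list of equal-length weight vectors, one per client.
    client_sizes:   number of local training samples each client used.
    Only these weights are shared; raw traffic never leaves a client.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]
```

In an SDN deployment, the controller would broadcast the averaged model back to the nodes for the next local training round; real systems aggregate full neural-network parameter tensors the same way.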
- Autonomous Agentic AI-Enabled Deepfake Social Engineering ...
There are over 1 billion individual malware programs in existence, and over 560,000 brand-new malware programs are detected every day. It's a rarity that any anti-malware scan detects a new malware program. How do I know? Because ransomware is everywhere, and we all have anti-malware scanners running on our devices. From numbers and pure persistence alone, the malware problem is pretty bad. It didn't need to change; what we have is working quite well. But as technology changes significantly, so too does malware.
It's Going to Get Worse
The game changer is self-driven, cooperating, AI-enabled autonomous agentic malware. That's a mouthful. Let me explain more. [...] If it needs to go from device A to device Z to obtain its objective, it will figure out the most efficient way to do this. Don't laugh: this capability has long existed and was automated long ago. There are even free open-source versions, like BloodHound, if you want to experiment. Basically, they survey and inventory all the devices between points A and Z, looking at found vulnerabilities, group memberships, permission structures, etc., and determine the quickest way to move from A to Z. The best ones automate this process. Certainly, malicious agentic AI will do this. This all leads to autonomous agentic AI malware programs that do the work of breaking in online and stealing things (e.g., money, value, information) better and faster than in the days of old. [...] Note: if interested, read Mustafa Suleyman's book, "The Coming Wave." A who's who of intellectual geniuses recommends this book. It certainly updated my perception of what is coming. And for sure, bad people are going to use autonomous agentic AI malware to rob and injure us.
What Does Malicious Agentic AI Look Like?
Almost every one of today's malware programs is a single program or executable; you don't have multiple scripts or executables working in concert to accomplish a common goal. That's going to change. You are going to see malware that is composed of several different cooperating agentic AI components:
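The BloodHound-style path-finding described above is, at its core, ordinary shortest-path search over a graph whose nodes are machines or accounts and whose edges are usable permissions or sessions; defenders run the same computation to find and cut the cheapest paths to critical assets. A minimal sketch using breadth-first search (the graph structure and node names here are invented for illustration):

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search for the fewest-hop path from start to goal.

    graph: adjacency dict mapping a node to the nodes reachable from it
           (e.g. machines connected by an exploitable permission edge).
    Returns the path as a list of nodes, or None if goal is unreachable.
    """
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None
```

Real tools weight the edges (BFS treats every hop as equal cost), but the defensive takeaway is the same: removing one edge on every short path, such as a stale admin session, breaks the route for attacker and algorithm alike.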
- How to Fight AI Malware | IBM
Forget the fearmongering. To fight AI-generated malware, focus on cybersecurity fundamentals. By Matthew Kosinski, Staff Editor, IBM Think.
Last summer, cybersecurity researchers at HYAS released the proof-of-concept for EyeSpy, a fully autonomous, AI-powered strain of malware that, they said, can reason, strategize and execute cyberattacks all on its own.1 This experiment, they warned, was a glimpse of the new era of devastating, undetectable cyberthreats that artificial intelligence would soon unleash. Or maybe not. [...]
## Detecting and preventing AI-powered attacks
AI has not fundamentally changed the cybersecurity battleground. Instead, it has helped attackers streamline things they were already doing. That means the best line of defense against AI-powered attacks is for organizations to stick with the fundamentals. "If we are talking about AI being used to conduct attacks, the risk and response does not change for defenders," says Ben Shipley, Strategic Threat Analyst with IBM X-Force Threat Intelligence. "Malware written by AI or by a human is still going to behave like malware. Ransomware written by AI does not have any more significant of an impact on a victim than ransomware written by a human." [...] Some worry that AI might lower the barrier to entry in the malware market, enabling more cybercriminals to write malicious programs regardless of skill level. Or, worse, AI technologies might help threat actors develop brand-new malware that can bypass common defenses and wreak untold havoc. Some researchers have tried to illustrate the dangers that AI-generated cyberthreats might pose by experimenting with different ways to incorporate AI into malware:
- AI Malware: Hype vs. Reality - Recorded Future
That said, the direction AI adoption is taking is clear. The progression from straightforward AI-generated content to AI-invoking malware and red-team orchestration frameworks is a sign that more capable and autonomous operations are on the horizon. The contested Anthropic disruption is one of the first examples of this and serves as an early warning sign for defenders.
### All roads lead to AI orchestration
[...]
## Hype vs. reality - what's true, what's exaggerated
With a few years of "AI malware" headlines behind us, some patterns are clear. Most public activity sits well below the fully autonomous, Hollywood-style threats often implied by marketing.
### Where the activity really is (AIM3 Levels 1–3)
Mapped to AIM3, the picture is clear: the vast majority of examples sit at Levels 1–3 (Experimenting through Optimizing), with a single contested Level 4 case and no verified Level 5 activity. Families like PromptLock, PROMPTFLUX, and MalTerminal look more like PoC exercises than in-the-wild malware. The Anthropic case is the single, highly contested example of Level 4 (Transforming) maturity, but even this initial example was not fully autonomous. [...]
## Introduction
Generative AI (GenAI) and large language models (LLMs) are being rapidly integrated into all aspects of our society, from communication to cybersecurity. Enterprises and vendors are already using GenAI and LLMs to augment their defenses. Attackers are also adopting LLMs, primarily as a force multiplier rather than the one-click super malware often implied in article headlines. From phishing lures to code generation and basic orchestration, GenAI is lowering the skill barrier and speeding up familiar workflows, not unleashing a brand-new class of unstoppable, fully autonomous malware.