Summary
Ransomware 3.0 is a new type of cyberattack where cybercriminals use agentic AI capabilities to plan and stage attacks. In this next stage of ransomware, bad actors also leverage multiple forms of extortion, called triple extortion, to up the ante against victims.
Ransomware 3.0 is a new generation of ransomware that uses AI to plan and stage ransom actions: and as we saw just this week, the same agentic capabilities that make AI valuable for productivity and defense can now be weaponized for offense at unprecedented speed and scale.
The 3.0 era also marks the shift by ransomware attackers to AI browsers, where the bad guys believe they’ll attract more victims. The new twist on ransomware also highlights the willingness of perpetrators to use multiple forms of extortion in concert (sometimes called triple extortion) to apply greater pressure on targets.
Ransomware Moves to the Browser
AI browsers have features like built-in chatbots and task-performing agents. Users are extolling the virtues of AI browsers. But the features that make AI browsers intelligent are already being tweaked by ransomware hackers. With “prompt injection,” attackers create code for applications built on large language models (LLMs), tricking the agentic AI into thinking the request comes from a trusted user.
Researchers at New York University’s Tandon School of Engineering created a proof-of-concept prompt injection piece of software recently, which was “discovered” by outside researchers and quickly dubbed “PromptLock.” (The researchers thought they’d discovered a true piece of ransomware created by crafty hackers before NYU spoke up and said, “Nope, we did it.”)
As part of the project, the NYU researchers simulated the four phases of ransomware attacks—mapping systems, identifying valuable files, stealing or encrypting data, and generating ransom notes—with large language models (LLMs). The researchers tested their AI-driven simulation across personal computers, enterprise servers, and industrial control systems and found that complete ransomware attacks could be executed autonomously.
Anthropic’s AI-Orchestrated Espionage Report
In a real-world echo of those research findings, this week Anthropic reported uncovering the first large-scale AI-orchestrated cyber espionage campaign—an attack allegedly initiated by a state-sponsored group using AI agents to infiltrate major tech firms, banks, manufacturers, and government agencies. Unlike prior incidents where humans directed the attack flow, this operation used an autonomous AI system to perform up to 90% of the campaign on its own, from reconnaissance to data exfiltration. This underscores just how fast AI-driven cyber threats are evolving. For defenders, this marks a sobering moment—proof that the threat landscape isn’t just changing, it’s accelerating beyond traditional human response cycles.
Related reading: When App-layer Privacy Fails: The Case for Data-layer Governance
Not All AI Is Good AI
AI can be applied to almost every phase of a ransomware attack. Phishing driven by AI can be more nuanced, more contextually correct, and better targeted to “weak links” or high-value targets, known as spear phishing. Once access has been gained into an enterprise, AI can not only observe the activities of high-value targets but also seek ways to enable evasion—such as mimicking legitimate activity or looking for configuration errors that offer a foot in the door to even more assets.
In addition, AI’s much-lauded ability to write code is especially useful to cybercriminals. For example, they can deploy their malware with just enough random differentiation to trick traditional signature-based methods of detection. Such AI-derived polymorphic code can continuously improve or redirect ransomware as needed for evasion or opportunistic targeting.
Finally, AI allows criminals to automate much of the work of launching ransomware attacks—a crucial point when you consider that many cyber schemes increase their chances of success when they leverage easy scaling, especially at the phishing stage. The ability to automate is a force multiplier, allowing more operations to be kept going simultaneously.
Adding Layers of Extortion to Ransomware
Another feature of ransomware 3.0 is more aggressive, more targeted forms of coercion. When ransomware was new, the first wave of attackers was content to encrypt data and seek payment in return for the decryption key. In many cases, the encryption was of poor quality and could be broken. In addition, a rigorous system of backups could allow organizations to ignore demands and simply implement recovery.
The next stage of ransomware, or ransomware 2.0, saw criminals not just encrypting data but exfiltrating it and threatening to release it on the public web or sell it to one of many data brokers in the cybercrime ecosystem. Ransomware as a service was also a feature of ransomware 2.0. With the rise of blockchain technology, encryption methodology made a big leap. In this case, data backups might allow for recovery, but a large trove of data falling into criminal hands would result in huge reputational damage, not to mention regulatory fines and possible financial settlements for affected clients and partners.
Restoring data backups in the ransomware 3.0 era can seriously dent enterprise budgets. The biggest cost to enterprises suffering a cyberattack is the recovery of the up-to-date backups of critical data, as detailed in a new cyber resilience study from Pure Storage and Ponemon Institute. Respondents said other significant costs are recovering, repairing, or replacing affected systems and applications; detecting and containing the incident; and testing to ensure restored systems are functioning correctly.
The dangers posed by ransomware 3.0 are real. Criminals feeling emboldened by their mastery of AI may become more aggressive in applying pressure on targeted organizations.
Cyber Resilience Is More Critical Than Ever
Agentic AI hasn’t just rewritten the rules for productivity; it’s rewritten the risks for enterprise IT. Enterprises must now contend with automated reconnaissance, adaptive malware, and multi-stage extortion—and it’s scaling faster than any human team can respond. As the landscape evolves, enterprise security teams should:
- Acknowledge the agentic AI shift: Understand that the threat landscape now includes fully autonomous attack frameworks, capable of sustained operations without human oversight.
- Consider opting out of features that enable persistent memory, broad tool access, or long-term data retention, when not business-critical. Many agentic AI platforms, including Anthropic, allow enterprises to restrict agents’ access to sensitive data, disable long-term memory, and limit integration with external APIs or unvetted tools—all of which directly reduce risk if a jailbreak or prompt injection occurs.
- Strengthen AI-aware defenses: Invest in detection tools that can recognize anomalous agent behavior, privilege escalation, and rapid automation—not just traditional malware signatures.
- Harden systems against jailbreaks and misuse: Implement layered safeguards around AI access, including constraints on tool usage, granular identity and role controls, and constant validation of agent actions.
- Focus on speed and recovery: Rapid response and instant recovery capabilities are critical, given the pace at which autonomous agents can move. Immutable backups and real-time monitoring are now essentials, not options.
- Keep humans in the loop for oversight: Autonomous tools require hard governance boundaries—security teams should create oversight committees, conduct regular reviews, and ensure escalation paths for suspicious activity.
- Share intelligence and collaborate: Continuous information sharing with partners and industry bodies is vital to counter distributed, AI-powered campaigns before they spread or escalate.
Enterprises should treat this incident as a wake-up call. The era of agentic AI in cyber operations is now a reality, making modern resilience planning, rapid detection, and human oversight critical elements for survival and success.
Download “The State of Cyber Resilience” report.
FAQ

The State of Cyber Resilience
Learn how 620 US-based IT security practitioners are approaching their data storage and keeping it safe.
The State of Cyber Resilience
Learn the security strategies of 620 US-based IT security practitioners.






