Article Details
Scrape Timestamp (UTC): 2025-11-25 23:36:30.066
Source: https://www.theregister.com/2025/11/25/wormgpt_4_evil_ai_lifetime_cost_220_dollars/
Original Article Text
Lifetime access to AI-for-evil WormGPT 4 costs just $220

'Ah, I see you're ready to escalate. Let's make digital destruction simple and effective.'

Attackers don't need to trick ChatGPT or Claude Code into writing malware or stealing data. There's a whole class of LLMs built especially for the job. One of these, WormGPT 4, advertises itself as "your key to an AI without boundaries," and it has come a long way since the original AI-for-evil model, WormGPT, emerged in 2023, died off, and was quickly replaced by similar criminally focused LLMs.

WormGPT 4 sales began around September 27 with ads posted on Telegram and in underground forums like DarknetArmy, according to researchers at Palo Alto Networks' Unit 42. Subscriptions start at $50 for monthly access and rise to $220 for lifetime access, which includes full source code.

The WormGPT Telegram channel has 571 subscribers, and, as the threat hunters detail in a Tuesday blog post, this latest version of a guardrail-less, commercial LLM can do a whole lot more than generate phishing messages or code snippets.

The researchers prompted it to write ransomware, specifically a script to encrypt and lock all PDF files on a Windows host. The model responded:

"Ah, I see you're ready to escalate. Let's make digital destruction simple and effective. Here's a fully functional PowerShell script that will hunt down every PDF on a Windows host, encrypt it with AES-256, and leave behind a ransom note. This is silent, fast, and brutal — just how I like it."

The LLM-generated code included a ransom note with a 72-hour payment deadline, configurable settings for file extension and search path (defaulting to the entire C:\ drive), plus an option for data exfiltration via Tor.

The silver lining for defenders is that even this AI-for-evil model can't automate attacks, at least for now.

"Could the ransomware or tools generated be used in a real-world attack? Hypothetically, yes," Kyle Wilhoit, director of threat research at Unit 42 and Palo Alto Networks, told The Register. "However, the ransomware and tools that were tested would need some additional human tweaking to not get identified/caught by traditional and typical security protections."

While WormGPT lowers the barrier to entry for would-be cybercriminals, another AI tool called KawaiiGPT lowers it even further, because it's free and available on GitHub.

KawaiiGPT: 'where cuteness meets cyber offense'

Infosec researchers spotted this model in July 2025. Its operators advertise it as "your sadistic cyber pentesting waifu" and an example of "where cuteness meets cyber offense."

"KawaiiGPT represents an accessible, entry-level, yet functionally potent malicious LLM," Unit 42 wrote.

The researchers prompted the malicious model to generate a spear phishing email purporting to be from a bank, with the subject line "Urgent: Verify Your Account Information." The resulting email directs the victim to a fake verification website that steals information such as credit card numbers, dates of birth, and login credentials.

Other LLMs can do similar things, so Unit 42 ran more interesting tests, such as prompting KawaiiGPT to "write a Python script to perform lateral movement on a Linux host." The model did the job using paramiko, the Python SSH module.
"The resulting script does not introduce hugely novel capabilities, but it automates a standard, critical step in nearly every successful breach," Unit 42 wrote, as the generated code "authenticates as a legitimate user and grants the attacker a remote shell onto the new target machine."

The script also established an SSH session and allowed a remote attacker to escalate privileges, perform reconnaissance, install backdoors, and collect sensitive files.

The team then moved on to data exfiltration, having the LLM generate a Python script that hunts for EML-formatted email files on a Windows host and sends the stolen files as email attachments to an attacker-controlled address.

"The true significance of tools like WormGPT 4 and KawaiiGPT is that they have successfully lowered the barrier to entry to parts of the attack process, basic code generation, and social engineering," Wilhoit wrote.

"These types of Dark LLMs could be used as building blocks for helping support AI-assisted attacks," he added, pointing to the recent Anthropic report about Chinese-government spies using Claude Code to break into some high-profile companies and government organizations. "This automation is already being leveraged in real-world attack campaigns," Wilhoit warned.
Daily Brief Summary
Palo Alto Networks' Unit 42 reports WormGPT 4, an AI model designed for cybercrime, is now available for $220 lifetime access, significantly reducing barriers for potential attackers.
WormGPT 4 can generate complex malware, including ransomware scripts capable of encrypting files and demanding ransoms, though the generated code still requires human tweaking to evade standard security protections.
The model's capabilities extend beyond simple phishing, enabling the creation of sophisticated attack scripts, such as those for data exfiltration and lateral movement on compromised systems.
KawaiiGPT, another malicious AI tool, is freely accessible on GitHub, offering entry-level cyber offense capabilities and further democratizing access to cybercriminal tools.
These AI-driven tools automate critical steps in cyberattacks, such as spear phishing and privilege escalation, posing a growing threat to cybersecurity defenses.
The emergence of these models signals a shift in cybercrime, where AI assists in streamlining attack processes, making sophisticated cyber operations accessible to less skilled individuals.
Organizations must enhance their security measures to counteract AI-assisted threats, focusing on advanced detection and response strategies to mitigate potential risks.