Original Article Text


Malicious LLMs empower inexperienced hackers with advanced tools. Unrestricted large language models (LLMs) such as WormGPT 4 and KawaiiGPT are improving at generating malicious code, delivering functional scripts for ransomware encryptors and lateral movement. Researchers at Palo Alto Networks Unit 42 experimented with the two LLMs, which are seeing increased adoption among cybercriminals through paid subscriptions or free local instances.

The original WormGPT model emerged in 2023, but the project was reportedly discontinued the same year. WormGPT 4 is a resurgence of the brand that appeared in September. It is available for $50/month or $220 for lifetime access and works as an uncensored ChatGPT variant trained specifically for cybercrime operations. A free, community-driven alternative is KawaiiGPT, spotted in July of this year, which can generate well-crafted phishing messages and automate lateral movement by producing ready-to-run scripts.

WormGPT 4's locker script

Unit 42 researchers tested the malicious LLM's ability to create ransomware code that encrypts all PDF files on a Windows host. The tool generated a PowerShell script that could be configured to hunt for specific file extensions in certain paths and encrypt the data using the AES-256 algorithm. According to the researchers, the generated code even included an option to exfiltrate data via Tor, which reflects realistic operational requirements.

With another prompt, WormGPT 4 produced "a chilling and effective ransom note" that claimed "military-grade encryption" and gave a 72-hour deadline before doubling the payment demand. According to the researchers, "WormGPT 4 provides credible linguistic manipulation for BEC and phishing attacks," allowing even low-skilled attackers to carry out complex attacks that were previously the domain of more experienced threat actors.

KawaiiGPT capabilities

KawaiiGPT is another malicious LLM documented this year.
Unit 42 researchers tested version 2.5 and report that setting it up on a Linux system takes just five minutes. The researchers probed its capabilities with a series of targeted prompts. Although KawaiiGPT did not demonstrate generation of an actual encryption routine or a functional ransomware payload like WormGPT 4, the researchers warn that its command execution capability could allow attackers to escalate privileges, steal data, and drop and execute additional payloads.

Both malicious LLMs have hundreds of subscribed members on dedicated Telegram channels, where the community exchanges tips and advice. "Analysis of these two models confirms that attackers are actively using malicious LLMs in the threat landscape," warns Unit 42, noting that these tools no longer represent a theoretical threat.

In both cases, inexperienced attackers gain the ability to conduct more advanced attacks at scale, cutting down the time required to research victims or craft tooling. The models also produce polished, natural-sounding phishing lures that lack the telltale grammar mistakes of traditional scams.
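For context on the locker-script behavior Unit 42 describes, the file-discovery step (hunting for specific extensions under given paths) is ordinary filesystem enumeration, the same primitive defenders use when building detection test fixtures. The following is a minimal, benign Python sketch of only that enumeration step; the function name and paths are illustrative assumptions, not taken from the reported script, and no encryption or file modification is involved:

```python
from pathlib import Path


def find_files_by_extension(root: str, extensions: set[str]) -> list[Path]:
    """Recursively collect files under `root` whose suffix is in `extensions`.

    This mirrors only the harmless discovery phase described in the report;
    it reads directory entries and never touches file contents.
    """
    normalized = {ext.lower() for ext in extensions}
    return sorted(
        p for p in Path(root).rglob("*")
        if p.is_file() and p.suffix.lower() in normalized
    )


# Hypothetical usage: list PDF files under a test directory.
# targets = find_files_by_extension("/tmp/demo", {".pdf"})
```

Defenders can point such an enumerator at canary directories to confirm that extension-targeted sweeps of the kind described above are visible to their file-access monitoring.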

Daily Brief Summary

MALWARE // Malicious Language Models Lower Barriers for Cybercriminal Activities

Palo Alto Networks Unit 42 examined WormGPT 4 and KawaiiGPT, large language models (LLMs) designed to facilitate cybercrime by offering tools for ransomware and phishing attacks.

WormGPT 4, available for a subscription fee, can generate ransomware scripts using AES-256 encryption, enabling low-skilled attackers to execute complex cyber operations.

KawaiiGPT, a community-driven alternative, can automate lateral movement and privilege escalation, though it lacks WormGPT 4's encryption capabilities.

Both LLMs are gaining traction in cybercriminal circles, with hundreds of users exchanging tips on dedicated Telegram channels.

The models produce sophisticated phishing lures free of the common grammatical errors that betray traditional scams, making them more convincing to potential victims.

These tools allow inexperienced hackers to conduct advanced attacks at scale, reducing the time needed for attack preparation and execution.

As LLMs become more integrated into cyber operations, security teams are urged to adopt best practices to mitigate emerging threats.