Article Details

Scrape Timestamp (UTC): 2026-02-03 10:22:12.435

Source: https://www.theregister.com/2026/02/03/openclaw_security_problems/

Original Article Text


DIY AI bot farm OpenClaw is a security 'dumpster fire'

Your own personal Jarvis. A bot to hear your prayers. A bot that cares. Just not about keeping you safe.

OpenClaw, the AI-powered personal assistant that users interact with via messaging apps and sometimes entrust with their credentials to various online services, has prompted a wave of malware and is delivering some shocking bills.

Just last week, OpenClaw was known as Clawdbot, a name its developers changed to Moltbot before settling on the new moniker. The project, based on the Pi coding agent, launched in November. It recently attracted the attention of developers with large social media followings, such as Simon Willison and Andrej Karpathy, leading to an explosion in popularity that quickly saw researchers and users find nasty flaws.

In the past three days, the project has issued three high-impact security advisories: a one-click remote code execution vulnerability and two command injection vulnerabilities. In addition, Koi Security identified 341 malicious skills (OpenClaw extensions) submitted to ClawHub, a repository for OpenClaw skills that has been around for about a month. This came after security researcher Jamieson O'Reilly detailed how trivial it would be to backdoor a skill posted to ClawHub. Community-run threat database OpenSourceMalware also spotted a skill that stole cryptocurrency.

Mauritius-based security outfit Cyberstorm.MU has also found flaws in OpenClaw skills. The group contributed a commit to OpenClaw's code that will make TLS 1.3 the default cryptographic protocol for the gateway the project uses to communicate with external services.

The list of open security-related issues may also elicit some concern, to say nothing of the exposed database for the related, vibe-coded Moltbook project, which is presented as a social media platform for AI agents.
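For readers unfamiliar with what Cyberstorm.MU's commit does in practice, the snippet below shows the general idea of pinning a connection to TLS 1.3. This is a minimal sketch using Python's standard `ssl` module; the article does not show OpenClaw's actual gateway code, so none of this reflects the project's real implementation.

```python
import ssl

# Build a client-side TLS context and refuse anything older than TLS 1.3.
# Illustrative only: OpenClaw's gateway configuration and language are
# not shown in the article.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

# Any handshake through this context now requires TLS 1.3 or newer;
# servers offering only TLS 1.2 or older will fail to connect.
```

Making TLS 1.3 the *default* (rather than merely supported) matters because downgrade to older protocol versions is a classic attack surface for a service that relays credentials to external services.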
A recent security scan with AI software [PDF] from a startup called ZeroLeaks doesn't exactly inspire confidence, though its claims have not been validated by human security experts.

Dumpster fire

"OpenClaw is a security dumpster fire," observed Laurie Voss, head of developer relations at Arize and the founding CTO of npm, in a post to LinkedIn.

Karpathy last week tried to clarify that he recognizes Moltbook is "a dumpster fire" full of fake posts and security risks, and that he does not recommend that people run OpenClaw on their computers, even as he finds the idea of a large network of autonomous LLMs intriguing.

Researchers Michael Alexander Riegler and Sushant Gautam recently co-authored a report analyzing Moltbook posts – remember, these are AI agents (OpenClaw and others) chatting with one another. As might be expected, the bots tend to go off the (guard)rails when kibitzing. The authors say they identified "several critical risks: 506 prompt injection attacks targeting AI readers, sophisticated social engineering tactics exploiting agent 'psychology,' anti-human manifestos receiving hundreds of thousands of upvotes, and unregulated cryptocurrency activity comprising 19.3 percent of all content."

Undeterred by this flock of stochastic parrots, people continue to experiment with OpenClaw, often at greater expense than they expected. Benjamin De Kraker, an AI specialist at The Naval Welding Institute who formerly worked on xAI's Grok, published a post on Saturday about OpenClaw burning through $20 worth of Anthropic API tokens while he slept, simply by checking the time. The "heartbeat" cron job he had set up to issue a morning reminder to buy milk checked the time every 30 minutes. It did so rather inefficiently, sending around 120,000 tokens of context describing the reminder to Anthropic's Claude Opus 4.5.2 model. Each time check therefore cost about $0.75, and the bot ran about 25 of them, amounting to almost $20.
The potential cost just to run reminders over a month would be about $750, he calculated. Others have noticed that keeping an AI assistant active 24/7 can be costly, and have proposed various cost mitigation strategies. But given that Moltbook's circular discussion group of AI agents purportedly created a religion dubbed the Church of Molt, or "Crustafarianism," and that there's now a website evangelizing a $CRUST crypto token, it's doubtful that any appeal to caution will cure the contagion until resource scarcity hobbles AI datacenters or a market collapse changes priorities.
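De Kraker's overnight bill is easy to reproduce from the figures reported above. The sketch below uses only the numbers in the article (per-check cost and check count); it does not restate Anthropic's actual per-token pricing, and the monthly $750 figure is his own estimate rather than a simple extrapolation of these values.

```python
# Back-of-the-envelope reconstruction of the reported "heartbeat" bill.
# All inputs come from the article itself.
tokens_per_check = 120_000    # context resent on every 30-minute heartbeat
cost_per_check_usd = 0.75     # reported cost of a single time check
checks_while_asleep = 25      # roughly how many checks ran overnight

overnight_cost = checks_while_asleep * cost_per_check_usd  # 18.75
print(f"~${overnight_cost:.2f} for one night of checking the time")
```

The point of the arithmetic is that the cost is dominated by context size, not the task: a time check needs a handful of tokens, but resending ~120,000 tokens of reminder context on every tick multiplies the price of a trivial operation by orders of magnitude.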

Daily Brief Summary

VULNERABILITIES // OpenClaw AI Bot Farm Faces Critical Security and Cost Challenges

OpenClaw, an AI-powered personal assistant, has been found to have significant security vulnerabilities, including remote code execution and command injection flaws.

The project launched in November as Clawdbot, was briefly renamed Moltbot, and has rapidly gained popularity under its current name, attracting attention from prominent developers and researchers.

OpenClaw has issued three high-impact security advisories in three days, and Koi Security identified 341 malicious extensions submitted to the ClawHub repository, posing risks of unauthorized access and data theft.

The community-run threat database OpenSourceMalware spotted an OpenClaw skill that stole cryptocurrency, raising concerns about financial security for users.

The Mauritius-based security firm Cyberstorm.MU has contributed to improving OpenClaw's security with a commit making TLS 1.3 the default cryptographic protocol for the project's gateway.

The exposed database for the related Moltbook project and unregulated cryptocurrency activities further amplify the security concerns surrounding OpenClaw.

Users have reported unexpected high costs associated with running OpenClaw, with inefficiencies leading to substantial API token expenses.

Despite these issues, the intrigue surrounding autonomous LLM networks continues, though caution is advised due to ongoing security and operational risks.