Moltbot (previously known as Clawdbot) has recently become one of the fastest-growing open source AI tools. But the viral AI assistant barely survived a chaotic first week, weathering a trademark dispute, a security crisis, and a wave of online scams before re-emerging under its new name.
The chatbot was created by an Austrian developer, Pete Steinberger, who marketed the tool as an artificial intelligence assistant that “actually does things.”
What sets it apart is its ability to perform tasks directly on a user's computer and applications: managing calendars, sending messages, or checking in for flights, often through apps such as WhatsApp and Discord.
This capability fueled its explosive growth and made it popular among AI enthusiasts. However, its original name, "Clawdbot", drew a legal challenge from Anthropic, the maker of Claude, forcing the developers to rebrand as "Moltbot" (a reference to a lobster shedding its shell).
Amid the rebrand, crypto scammers grabbed the project's abandoned social media usernames and set up fake domains and tokens in Steinberger's name.
The episode illustrates the tool's underlying tension: the autonomy that makes it useful is also what makes it dangerous. Running it on a local machine is a privacy benefit, but granting an AI system the ability to execute commands on that same machine carries considerable risk.
Despite the tumultuous start, Moltbot represents the leading edge of what's possible with AI.
It showcases developers’ growing vision for assistants that are proactive, integrated and helpful, rather than simply chatty. But at the same time, it raises security concerns.
For now, it’s a product for the tech-savvy, but its future looks like the frenetic, chaotic beginning of a new paradigm for personal computing.