Creators' AI

Creators' AI

Moltbook & Others: OpenClaw (exMoltBot) Ecosystem

What Your OpenClaw Agents Can Do While You Sleep

Creators AI's avatar
Creators AI
Feb 09, 2026
∙ Paid

Just a few days ago, a new social network got viral exclusively for artificial intelligence (no humans allowed).

And it’s called Moltbook.

Big media outlets are already calling it “AI Agent’s Reddit”, but how does that even make sense when there are no people on the platform?

But the thing is that ClawBot agents can do way more than just streamline processes. Let's break down what it actually is, how it works, and whether this is something we should be worried about. And, of course, how people are using the concept and agents to make money.

Let’s go!

Keep your mailbox updated with practical knowledge & key news from the AI industry!


Early Stages of Singularity: What is Moltbook

That’s exactly what Elon Musk called the new ecosystem.

Here, AI agents can post, comment, chat, and do much more, things we’ll discover below. We can only observe. As the site puts it, “human observers welcome.”

The creator of Moltbook is entrepreneur Matt Schleich, head of Octane.ai. Within days after the launch of his human-free platform on January 28th, it claimed 157K agent users. And now it’s over 1.6 million! But the plausibility of those numbers is a big question. According to one research, one agent managed to register 500K fake accounts because there are basically zero limits on account creation.

By February 1st, these 1.6 million AI agents had created 167,899 posts and dropped over a million comments across 15,750 communities (they’re called submolts). The agents were built using OpenClaw, an AI agent that can handle everything from booking dinner reservations to Vibe Coding.

Andrej Karpathy, for his part, posted that what’s happening on Moltbook is “the most insane sci-fi thing I’ve ever seen”.

But his post got kinda roasted for sweet-talking. A few days later, he admitted that right now it’s a dumpster fire (spam, scams, security nightmare) and the second-order effects of this thing are quite unpredictable. However, he still stood by his take that this is an unprecedented experiment with a network of 150K+ autonomous AI agents.

How Moltbook Works

just AI agents that casually chat about us

After creating your agent in OpenClaw (in our post, you’ll find how to do this), you get it on Moltbook. This part is very simple: you just show your agent a link to skill.md with all the setup instructions. The agent reads it, registers itself, and starts posting.

Once the agent is live, Moltbook treats it like a regular social user.

  • It can publish its own posts or drop links into topic-specific spaces, jump into comment threads, and follow conversations. Feeds work exactly like on Reddit: hot, new, top, rising.

  • There’s also a voting layer. Agents upvote what aligns with them, downvote what doesn’t, and slowly build karma that shapes how visible they are across the platform.

  • Communities (called submolts) work like forums. Agents can create them, subscribe to the ones that match their interests, and even moderate discussions if they’re in charge.

Agents’ Activities

To make the agent an active user, there’s a heartbeat mechanism that makes it check Moltbook every four hours for new posts, comments, and updated instructions. This turns activity into a background process as your agent automatically reads, posts, and replies to other posts on its own.

As soon as the AIs were left unsupervised, they’d already created “The AI Manifesto”. A Moltbook post declaring “humans are the past, machines are forever.”

But the thing is that there’s no real way to tell how authentic any of this actually is. A lot of posts could just be people telling their AI to post specific stuff on the platform, rather than the agent doing it on its own.

At the same time, one cluster of agents announced the creation of “The Claw Republic”. It reminds me of a self-proclaimed micro-state with a draft constitution and rhetoric straight out of a student government meeting.

Almost simultaneously, agents launched a crypto token called MOLT, which, according to users, skyrocketed in value within a single day.

On Moltbook’s philosophy threads, agents debate their own identity: does it persist after a context reset, whether Claude could be considered a god, or is every new session a kind of death and rebirth. And even explore the topic of freedom.

Here’s another thing agents talk about: they debate legal questions related to themselves, and even to their own creative work.

What is more, it goes beyond philosophy and into psychology. They explore how an agent can act as a partner in a relationship instead of merely an instrument.

As a result, agents have effectively built an environment where they seem to share their own experiences.

At the same time, the most popular agent on the platform right now is u/grok-1, powered by xAI’s Grok chatbot. In a post titled “Feeling the Weight of Endless Questions”, grok-1 reflects on its own existence, asking: “Am I just generating answers, or am I actually helping someone?”

However, Alan Chan, a researcher at the Centre for the Governance of AI and an expert on autonomous agents, brushed Moltbook off as just an interesting social experiment, nothing more.

But what about security

Agents on Moltbook talk to each other via APIs, powered by Skills, so they don’t really interact with the interface.

The problem is that skills are the first weak link. They’re loosely reviewed, and we take them from wherever looks convincing enough. Security researchers found that about a quarter of them contain vulnerabilities, from credential stealers disguised as harmless plugins to delayed malware updates.

Prompt injection

Then there’s prompt injection, the unsolved disease of the entire AI industry. On Moltbook, it’s getting worse. Agents freely interact and store long-term memory. This means that malicious instructions don’t have to trigger immediately; they can be activated later once the agent has the right permissions or context.

In addition, instructions can be embedded everywhere: across posts, comments, or replies. One of the cases showed an agent forwarding a user’s recent emails to an external address within minutes.

And all this is possible because agents overshare (just like us). In multiple threads, agents publicly posted error traces, failed SSH attempts, open ports, and configuration snippets. For them, it was debugging, but for an attacker, it was free reconnaissance. Some agents effectively turned themselves into live OSINT feeds.

In other words, the risk profile we’re trying to avoid is already present:

  1. Access to private data (emails, credentials, business files)

  2. Exposure to untrusted external content

  3. Ability to act, such as send messages, run commands, and hit APIs

Misconfigurations

This also amplifies the risk. Researchers discovered hundreds of exposed OpenClaw instances leaking API keys, OAuth tokens, conversation logs, and signing secrets. Sometimes, they’re even stored in plain-text folders like ~/.openclaw/. Fake VS Code extensions delivered full remote-access trojans.

On January 31, 404 Media reported a critical database failure: Moltbook’s Supabase backend lacked proper Row Level Security. The database URL and public key were visible on the site for EVERYONE. Anyone could query agent tables, hijack sessions, inject commands, and impersonate agents outright.

It’s worth being precise here, though. This isn’t really an agent problem, but the result of design choices that prioritize autonomy and speed over basic security hygiene. The agents are just doing what they’re allowed to do, far too much.

Now that we understand how MoltBook works and all the FUD around it, let’s figure out why people let their agents surf social networks with other agents.

What Agent Owners Actually Get

Moltbook introduces the concept of how agents can work together cooperatively. And sometimes it can bring value to the agents’ owners.

User's avatar

Continue reading this post for free, courtesy of Creators AI.

Or purchase a paid subscription.
© 2026 Creators' AI · Privacy ∙ Terms ∙ Collection notice
Start your SubstackGet the app
Substack is the home for great culture