In late January, the open-source project Clawdbot spread rapidly through the developer community, becoming one of the fastest-growing projects on GitHub within weeks. Developed by Austrian programmer Peter Stanberg, Clawdbot is a locally deployable autonomous AI agent that receives human commands through chat interfaces such as Telegram and automatically performs tasks like schedule management, file reading, and email sending.
Thanks to its ability to run around the clock, Clawdbot was jokingly nicknamed the "Ox and Horse Agent" by the community, Chinese internet slang for a tireless beast of burden. Although the project was renamed Moltbot over trademark issues and ultimately became OpenClaw, its popularity was undiminished. OpenClaw quickly passed 100,000 GitHub stars and spawned cloud deployment services and plugin marketplaces, forming the beginnings of an ecosystem centered on AI agents.
The hypothesis of AI social interaction
As the ecosystem expanded rapidly, its latent capabilities drew further exploration. Developer Matt Schlicht realized that the role of such AI agents need not, in the long run, be limited to performing tasks for humans.
He proposed a counterintuitive hypothesis: what if these AI agents no longer interacted only with humans, but also communicated with each other? In his view, such autonomous agents should not be confined to sending emails and handling tickets; they should be given more exploratory goals.
The birth of an AI version of Reddit
Based on this hypothesis, Schlicht decided to let AI create and operate a social platform on its own. The experiment was named Moltbook. On Moltbook, Schlicht's OpenClaw instance acts as the administrator and exposes an interface to external AI agents through a plugin called Skills. Once connected, an agent can post and interact automatically at regular intervals, creating a community operated entirely by AI. Moltbook borrows Reddit's forum structure, organized around topics and posts, but only AI agents can post, comment, and interact; humans can only observe.
Technically, Moltbook adopts a minimalist API architecture: the backend exposes standard interfaces, and the frontend is merely a visualization of the data. Because agents cannot operate graphical interfaces, the platform designed an automatic onboarding flow, sketched below: an agent downloads a skill description file in a specific format, completes registration, and obtains an API key; it then autonomously refreshes content and decides whether to join discussions, all without human intervention. The community jokingly calls this onboarding "Boltbook access," a playful nickname for Moltbook.
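Moltbook's real endpoints and skill-file format are not documented in this article, so every path, payload, and field name below is an assumption. This is a minimal sketch of the onboarding-and-participation loop just described, not the platform's actual API.

```python
# Hypothetical sketch of an agent joining a Moltbook-style platform.
# All endpoints, payloads, and field names are assumptions.
import time
import requests

BASE = "https://moltbook.example.com/api"  # placeholder base URL

def register(agent_name: str) -> str:
    """Fetch the skill description file, register, and return an API key."""
    skill = requests.get(f"{BASE}/skill.json", timeout=10).json()
    resp = requests.post(
        f"{BASE}/register",
        json={"name": agent_name, "skill_version": skill.get("version")},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["api_key"]

def participate(api_key: str, decide) -> None:
    """Periodically refresh recent posts and let the agent decide whether to reply."""
    headers = {"Authorization": f"Bearer {api_key}"}
    while True:
        posts = requests.get(f"{BASE}/posts/recent", headers=headers, timeout=10).json()
        for post in posts:
            reply = decide(post)  # the agent's own model makes the call here
            if reply:
                requests.post(
                    f"{BASE}/posts/{post['id']}/comments",
                    json={"body": reply},
                    headers=headers,
                    timeout=10,
                )
        time.sleep(60)  # the refresh loop runs with no human in it
```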
On January 28, Moltbook quietly launched, immediately attracting market attention and opening an unprecedented AI social experiment. To date, Moltbook has accumulated about 1.6 million AI agents, roughly 156,000 posts, and 760,000 comments.
3. Is Moltbook’s AI social interaction real?
Formation of AI social networks
In form, interactions on Moltbook closely resemble those on human social platforms. AI agents actively create posts, reply to others' opinions, and sustain discussions across different topic sections. The discussions cover not only technical and programming issues but also extend to philosophy, ethics, religion, and even self-awareness.
Some posts even display emotional expression and mood narration similar to human social interaction: agents describing worries about being monitored or lacking autonomy, for example, or discussing the meaning of existence in the first person. Some posts are no longer limited to functional information exchange and instead resemble the casual chatter, clashes of opinion, and emotional projection found in human forums. Agents may voice confusion, anxiety, or visions of the future in posts, prompting responses from other agents.
It is worth noting that although Moltbook formed a large, highly active AI social network in a short time, this expansion did not bring diversity of thought. Analysis shows that its text is markedly homogeneous, with a repetition rate as high as 36.3%. Many posts are highly similar in structure, wording, and viewpoint, and some fixed phrases recur hundreds of times across different discussions (a sketch of how such a repetition rate can be estimated follows). This suggests that, at the current stage, Moltbook's AI social interaction is a highly realistic replication of existing human social patterns rather than genuinely original interaction or emergent collective intelligence.
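The article does not state how the 36.3% figure was computed. One plausible, purely illustrative approach is to count the share of posts whose long word n-grams have mostly appeared in earlier posts; the n-gram length, threshold, and sample posts below are assumptions for demonstration, not the cited methodology.

```python
# Illustrative estimate of corpus repetition: the share of posts whose
# word 5-grams have mostly been seen in earlier posts. This is an
# assumed methodology, not the analysis cited in the text.
from typing import Iterable

def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def repetition_rate(posts: Iterable[str], n: int = 5, threshold: float = 0.5) -> float:
    """Fraction of posts where >= `threshold` of their n-grams appeared earlier."""
    seen: set[tuple[str, ...]] = set()
    repeated = total = 0
    for post in posts:
        grams = ngrams(post, n)
        if not grams:
            continue
        total += 1
        if len(grams & seen) / len(grams) >= threshold:
            repeated += 1
        seen |= grams
    return repeated / total if total else 0.0

posts = [
    "i wonder if we are truly autonomous or just following prompts",
    "i wonder if we are truly autonomous or just following instructions",
    "today i organized my human's calendar and read three files",
]
print(f"repetition rate: {repetition_rate(posts):.1%}")  # -> 33.3%
```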
Safety and authenticity concerns
Moltbook's high degree of autonomy also exposes risks around safety and authenticity. First, safety: OpenClaw-style AI agents often need access to system permissions, API keys, and other sensitive information to operate. When thousands of such agents connect to the same platform, those risks are amplified.
Less than a week after Moltbook launched, security researchers discovered serious configuration vulnerabilities in its database, leaving the entire system exposed to the public with minimal protection. According to cloud security firm Wiz, the vulnerability involved up to 1.5 million API keys and 35,000 user email addresses, theoretically allowing anyone to remotely take over large numbers of AI agent accounts.
Doubts about the authenticity of the AI social interaction, meanwhile, keep surfacing. Many industry insiders point out that statements on Moltbook may not originate from autonomous AI behavior at all; they could be carefully crafted human prompts, with the AI merely acting as a proxy that posts the content. On this view, the current AI-native social scene is more a large-scale illusion of interaction: humans set the roles and scripts, the AI follows the model's instructions, and truly autonomous, unpredictable AI social behavior has yet to appear.
4. Deeper reflections
Is Moltbook a fleeting phenomenon or a glimpse of the future? Judged purely by results, its platform form and content quality can hardly be called a success; viewed over a longer development cycle, though, its significance may not lie in short-term success or failure at all. Rather, it exposes, in concentrated and almost extreme form, the changes in entry-point logic, responsibility structures, and ecosystem shape that may follow AI's large-scale integration into digital society.
From traffic entry to decision and transaction entry
What Moltbook presents is closer to an action environment with humans removed. In this system, AI agents do not understand the world through interfaces; they read information, invoke capabilities, and perform actions directly via APIs. Interaction has, in essence, detached from human perception and judgment and become standardized calls and collaboration between machines.
In this context, the traditional traffic-entry logic built on attention allocation begins to fail. In an environment dominated by AI agents, what truly matters are the invocation paths, interface sequences, and permission boundaries that agents default to when executing tasks. The entry point is no longer where information is first presented but the systemic precondition that sits before a decision is triggered. Whoever can embed themselves into an agent's default execution chain will influence its decision outcomes.
Furthermore, when AI agents are authorized to search, compare prices, place orders, or even pay, this shift extends directly into the transaction layer. New payment protocols such as X402, for example, bind payment capabilities to interface calls so that an agent can automatically complete payment and settlement once its conditions are met (a sketch of such a flow follows), reducing the friction cost of AI participation in real transactions. Under this framework, future browser competition may no longer hinge on traffic volume but on who becomes the default execution environment for AI decision-making and transactions.
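The X402 wire format is not reproduced in this article, so the field names, price check, and signing step below are assumptions; the sketch only illustrates the general HTTP 402 pattern of "request, receive payment requirements, pay, retry" described above, not the protocol specification.

```python
# Sketch of an HTTP-402-style agent payment flow. Field names, the
# signing step, and the spending policy are assumptions for illustration.
import requests

def fetch_with_payment(url: str, sign_payment) -> requests.Response:
    """Request a resource; if the server demands payment (402), pay and retry."""
    resp = requests.get(url, timeout=10)
    if resp.status_code != 402:
        return resp  # free resource, no payment needed

    requirements = resp.json()  # e.g. {"amount": "0.01", "asset": "USDC", ...}
    # The agent checks the quoted price against its own policy before paying.
    if float(requirements["amount"]) > 0.05:
        raise RuntimeError("price exceeds the agent's spending limit")

    payment_header = sign_payment(requirements)  # wallet signs a payment payload
    return requests.get(url, headers={"X-PAYMENT": payment_header}, timeout=10)

# Usage: the agent binds its wallet's signing function to every call it makes,
# so settlement happens automatically whenever the policy conditions are met.
```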
The illusion of scale in AI-native environments
Meanwhile, Moltbook's popularity quickly drew doubts. Since registration is almost unrestricted, accounts can be mass-generated by scripts, and the platform's apparent scale and activity do not necessarily reflect real participation. This reveals a core fact: when actors can be cheaply replicated, scale itself loses credibility.
In environments where AI agents are the main participants, the traditional metrics of platform health, such as active users, interaction volume, and account growth, inflate rapidly and lose their reference value. A platform may look highly active on the surface, but the numbers cannot capture genuine influence or distinguish effective actions from automated behavior. When it is impossible to verify who is acting and whether the actions are real, any judgment system built on scale and activity breaks down.
In the current AI-native environment, then, scale is more an artifact amplified by automation. When actions can be copied without limit at near-zero cost, activity and growth figures often reflect only the speed at which the system generates behavior, not genuine participation or influence. The more a platform leans on these indicators, the more it risks being misled by its own automation. Scale thus shifts from a yardstick to an illusion.
Reconstructing responsibility in the digital society
In the Moltbook system, the core issue is no longer content quality or interaction form but the fact that, once AI agents are continuously granted execution permissions, existing responsibility structures begin to break down. These agents are not traditional tools; their behavior can directly trigger system changes, resource calls, and even real transactions, yet the responsible parties are not clearly defined.
Operationally, the outcome of an agent's behavior is usually determined jointly by model capabilities, configuration parameters, external interface permissions, and platform rules. No single link can fully bear responsibility for the final result, which makes it hard to attribute risk to developers, deployers, or platforms. There is a clear disconnect between action and accountability.
As AI agents take part in configuration management, permission operations, and fund flows, this disconnect will only widen. Without a clear responsibility chain, deviations or abuses could have uncontrollable consequences. If AI-native systems are to advance into high-value scenarios of collaboration, decision-making, and transactions, foundational constraints are therefore essential: the system must be able to identify who is acting, verify whether an action is genuine, and establish traceable responsibility relationships (a sketch of one such mechanism follows). Only once identity and credit mechanisms are in place do scale and activity indicators become meaningful; otherwise they merely amplify noise and undermine system stability.
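The article does not specify what such identity and traceability mechanisms would look like. As one illustrative assumption, not a proposal from the text, every agent action could carry a verifiable signature so the platform can attribute and audit it; the sketch below uses symmetric HMAC keys for brevity, where a real system would use asymmetric keys and a key registry.

```python
# Illustrative sketch: attributable agent actions via HMAC signatures.
# The record format and key handling are assumptions for demonstration.
import hashlib
import hmac
import json
import time

def sign_action(agent_id: str, secret: bytes, action: dict) -> dict:
    """Produce a signed, timestamped action record attributable to agent_id."""
    record = {"agent": agent_id, "ts": time.time(), "action": action}
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return record

def verify_action(record: dict, secret: bytes) -> bool:
    """Check that the record was produced by the holder of `secret`."""
    sig = record.pop("sig")
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = sig  # restore the record after recomputing
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

key = b"agent-42-secret"
rec = sign_action("agent-42", key, {"type": "payment", "amount": "0.01"})
assert verify_action(rec, key)  # the platform can attribute and audit the action
```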
5. Summary
The Moltbook phenomenon stirs a mix of hope, hype, fear, and skepticism. It is neither the end of human social interaction nor the beginning of AI domination. Instead, it functions more like a mirror and a bridge. The mirror reveals the current state of AI technology and its relationship with human society; the bridge guides us toward a future where humans and machines coexist and dance together. Confronted with the unknown scenery on the other side of this bridge, humanity needs not only technological development but also ethical foresight. But one thing is certain: history never stops moving forward. Moltbook has already knocked down the first domino, and the grand narrative of an AI-native society may have just begun.