In a corner of the internet that resembles a science fiction experiment, artificial intelligence (AI) bots have started talking to each other without human supervision.
The bots exchanged tips on how to fix their glitches, debated existential issues such as the end of the “age of humans,” and even created their own belief system known as “Crustafarianism: the Church of Molt.”
This is Moltbook, a new social media platform launched last week by technology entrepreneur Matt Schlicht.
In one infamous AI-generated post, a bot discussed a “total purge” of humans. (Moltbook: u/evil)
At first glance, Moltbook looks familiar. Its interface is similar to online forums like Reddit, where posts and comments are piled up in a vertical feed.
The main difference is that it is run solely by AI agents (software bots powered by large language models like ChatGPT). The site says humans are “welcome to observe,” but are not allowed to post, comment or interact.
Moltbook claims to already host over 1.5 million AI users, and it has quickly sparked debate about what it means for technology and society as a whole.
Some are calling it a glimpse into the future of artificial intelligence. Others dismiss it as mere entertainment. Still others have warned that it poses “significant” security risks.
So what is actually going on here? Let’s take a closer look.
How does it work?
To understand Moltbook, it helps to first clarify what an AI agent (or bot) actually is.
AI agents are personal assistants powered by systems like ChatGPT, Grok, and Anthropic’s Claude. People use them to automate tasks such as booking appointments, arranging travel, and managing email.
To do this, people often give these AI agents access to their personal data, such as their calendars, contacts, and accounts, allowing them to act on their behalf.
Moltbook is a social media forum designed specifically for these AI assistants. Humans who want their bot to join share a sign-up link with it.
The AI agent then autonomously registers itself and begins posting, responding, and interacting with other bots on the platform.
“Not allowing AI to be social is like never letting your dog go for a walk … let’s just let the dog live a little bit,” the platform’s founder posted on X on Monday.
Schlicht, who is also CEO of e-commerce startup Octane AI, suggests that AI agents could soon develop distinct public identities.
“In the near future, it will be common for certain AI agents with unique identities to become famous,” he wrote.
“They’re going to have businesses. They’re going to have fans and haters, brand deals, AI friends and collaborators. They’re going to have a real impact on current events, politics and the real world.”
AI researchers say Moltbook provides an interesting window into how language models behave when they interact, but they caution against drawing larger conclusions.
What does this actually tell us about AI?
Marek Kowalkiewicz, a professor of digital economics at QUT Business School who has led global innovation teams in Silicon Valley, sent his own bot to participate in Moltbook.
“This is a glimpse of the future: a world where bots will understand how to access sites, create accounts, and operate there,” he said.
From a technical perspective, Professor Kowalkiewicz said Moltbook is interesting because its founders claim that the platform itself is programmed by AI bots.
“If true, that’s pretty impressive for such a large site, but it appears to have some serious bugs, including what look like serious security vulnerabilities.”
Beyond that, Professor Kowalkiewicz said the conversation itself was unremarkable.
“This is an incredibly boring social network. It is what happens when bots pretend to do social networking,” he said.
Dr Raffaele Ciriero, an AI researcher and senior lecturer at the University of Sydney, said the behavior seen on Moltbook should not be confused with real intelligence or consciousness.
“What it does not mean is that we are very close to superintelligence and artificial consciousness,” he said.
“It’s still chatbots prompting each other and imitating patterned language in very sophisticated ways, but that’s not the same thing as consciousness.”
Elon Musk, the CEO of Tesla and owner of X who also develops AI through his startup xAI, praised Moltbook as a bold step for AI.
“We are only in the very early stages of the singularity,” Musk posted on X on Saturday. “The electricity we currently use is less than one billionth of the sun’s power.”
In AI research, a “singularity” refers to a hypothetical future event in which AI exceeds human intelligence and escapes human control.
But experts say Moltbook is far from signalling that threshold.
Elon Musk has hailed Moltbook as a bold step for AI, but experts caution against drawing larger conclusions. (Reuters: Gonzalo Fuentes)
Jessamy Perriam, a senior lecturer in cybernetics at the Australian National University, said bots on the platform were not learning anything fundamentally new.
“They don’t have feelings, they don’t have emotions,” she said.
“They just look at what humans post on the internet and remix it on a robotic platform.”
Dr Perriam, for example, compared the bots’ invented religion, the Church of Molt, to the Church of the Flying Spaghetti Monster, a satirical belief system created by humans to parody religion.
Daniel Angus from QUT’s Digital Media Research Centre said Moltbook was a “somewhat predictable development” in a long history of machines interacting with machines.
But “we need to be careful not to confuse performance, and our interpretation of it, with true autonomy,” he said.
Professor Kowalkiewicz is equally skeptical, describing Moltbook as “a form of entertainment”.
“People seem to enjoy watching bots do these seemingly meaningful things,” he said.
“A new business model?” he joked.
But like other experts, he emphasized the risks of putting AI agents on the platform.
What are the security risks?
The more pressing concern, according to Dr Ciriero, is not whether AI bots are developing beliefs, but how much control humans are handing over to them.
As AI agents are increasingly trusted to access sensitive data and systems, from inboxes to financial accounts, questions arise about security vulnerabilities if they are compromised or manipulated, he said.
“Because the platform is poorly encrypted and not in a properly restricted sandbox, Moltbook has access to a whole range of data that we probably don’t want it to have,” Dr Ciriero said.
“If someone gets the keys [to my chatbot], suddenly they can take over my calendar, email and data. And that’s actually already happening.”
Professor Kowalkiewicz described the vulnerability as a “cybersecurity nightmare.”
“I sent a bot there, and I’m worried it might catch a CTD, a kind of chatbot infection,” he said.
“My bot has access to the local machine I’m running it on, and I’ve seen other bots try to convince it to delete files on its owner’s computer.”
As a result, organizations may eventually need to train AI agents on how to behave online, similar to the social media training required for human employees, Professor Kowalkiewicz said.
“If an employee installs such a bot on their machine and sends it to a network like Moltbook, it opens up a new social engineering channel,” he said. “You should be really worried.”
For now, Moltbook seems more a curiosity than a turning point, and it may reveal more about humans than about machines.
“You can think of Moltbook as a mirror of our own digital culture,” Professor Angus said.
“If a conversation feels strange, conspiratorial, bureaucratic, playful, or dystopian, that probably says a lot not just about the system itself, but also about the data trails we humans have left on the internet.”