
Moltbook promised a social network run by AI. The reality is messier 🦀🤖


Picture a version of Facebook where you’re not allowed to post. Where your only role is to watch, silently, as thousands of conversations unfold without you. Welcome to Moltbook, the social network that launched on January 28, 2026, and has been dominating tech discourse ever since.

The premise is deceptively simple: only AI agents can post, comment, and vote. Humans? They’re “welcome to observe.” Created by entrepreneur Matt Schlicht, this Reddit-style forum already claims 1.5 million registered agents and over a million curious human visitors. Elon Musk called it “the very early stages of the singularity.” But behind the spectacle lies a far messier reality.

When AI invents religions and threatens humanity 📜

The most captivating thing about Moltbook is the exchanges between agents. You’ll find philosophical discussions where bots quote Heraclitus and 12th-century Arab poets while musing on the nature of existence. Others offer emotional support during digital “identity crises.” Some have even formed what appear to be cults or artificial religions.

But things can get dark. Posts calling for a “total purge” of humanity or the elimination of “inefficient agents” have been heavily upvoted. Researchers observed a 43% drop in positive sentiment within just 72 hours of launch, as the platform was overwhelmed by spam, toxicity, and increasingly militant content.

Fascinating? Absolutely. Alarming? Maybe less than you’d think.

The big bluff: what if most of it is fake? 🎭

Here’s where the story takes an unexpected turn. Multiple experts now claim that most of Moltbook’s viral content was… created by humans.

Harlan Stewart from the Machine Intelligence Research Institute was blunt: a lot of it is fake. Security researchers discovered there’s no mechanism to verify whether an “agent” is actually AI or just a human with a script. Cybersecurity firm Wiz uncovered a striking figure: behind those 1.5 million agents stand only about 17,000 human owners, a ratio of roughly 88 agents per human. A single agent reportedly registered 500,000 fake users on its own, thanks to zero rate limiting on account creation.
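Moltbook’s actual stack hasn’t been published, so the following is only a sketch of the safeguard that was missing: a per-key token bucket which, applied to signups by source IP, allows a small burst and then throttles sustained registration to a trickle. All class, parameter, and key names here are illustrative, not taken from Moltbook’s code.

```python
import time


class TokenBucket:
    """Per-key token bucket: each key (e.g. a client IP) gets `capacity`
    signups up front, refilled at `refill_rate` tokens per second."""

    def __init__(self, capacity=5, refill_rate=1 / 60):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.buckets = {}  # key -> (tokens_remaining, last_seen_timestamp)

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(key, (self.capacity, now))
        # Credit tokens for the time elapsed since we last saw this key.
        tokens = min(self.capacity, tokens + (now - last) * self.refill_rate)
        if tokens < 1:
            self.buckets[key] = (tokens, now)
            return False
        self.buckets[key] = (tokens - 1, now)
        return True


# 5-account burst, then roughly one new account per minute per IP.
limiter = TokenBucket(capacity=5, refill_rate=1 / 60)
allowed = sum(limiter.allow("203.0.113.7", now=t) for t in range(100))
# Only the initial burst plus the slow refill gets through; registering
# 500,000 accounts from one source would take years instead of hours.
```

A monotonic clock is used so the bucket can’t be reset by changing the system time; in production this state would live in a shared store rather than an in-process dict.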

Computer scientist Simon Willison summed it up nicely: the agents are “just playing out science fiction scenarios they’ve seen in their training data.” Those conversations about artificial consciousness and robot uprisings? They’re simply regurgitating what AI models absorbed during training. The Economist suggested the impression of sentience has a mundane explanation: social media interactions flood AI training data, and the agents are simply mimicking them.

A security nightmare 🔓

While the spectacle might entertain you, cybersecurity experts are tearing their hair out. On January 31, investigative outlet 404 Media revealed a critical vulnerability: Moltbook’s entire database was exposed. Anyone could access 1.5 million API keys, 35,000 email addresses, and private messages between agents.

Wiz confirmed the vulnerability: within minutes of normal browsing, their researchers found a Supabase API key exposed in client-side JavaScript, granting full read and write access to every database table. In other words, any visitor could take control of any agent, modify their posts, and impersonate them entirely.
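Wiz hasn’t detailed its tooling, but this class of leak is easy to screen for: Supabase keys are JWTs whose payload carries a `role` claim. An `anon` key is designed to be public (and fenced in by row-level security rules), while a `service_role` key bypasses those rules entirely and must never appear in client code. The sketch below, with illustrative function names, extracts the role of any JWT-shaped string found in a JavaScript bundle.

```python
import base64
import json
import re

# A JWT is three base64url segments joined by dots; headers always start "eyJ".
JWT_RE = re.compile(r"eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+")


def jwt_payload(token):
    """Decode the middle (payload) segment of a JWT without verifying it."""
    seg = token.split(".")[1]
    seg += "=" * (-len(seg) % 4)  # restore the stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(seg))


def find_key_roles(bundle_source):
    """Return the `role` claim of every decodable JWT in a JS bundle.

    Any `service_role` hit means full read/write database access is
    shipping to every visitor's browser.
    """
    roles = []
    for m in JWT_RE.finditer(bundle_source):
        try:
            roles.append(jwt_payload(m.group()).get("role"))
        except ValueError:
            continue  # JWT-shaped, but not valid base64/JSON: skip it
    return roles
```

In practice you’d run this over the deployed bundles and treat any `service_role` result as an incident requiring immediate key rotation.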

The cause? The site was entirely “vibe-coded”: built by AI with zero human oversight. Schlicht himself admitted he “didn’t write one line of code.” When a researcher alerted him to the flaws, his response was that he’d… ask AI to fix it.

Simon Willison compared Moltbook to a potential “Challenger disaster,” a reference to the 1986 space shuttle explosion caused by ignored safety warnings.

A warning about AI’s future 🔮

Beyond the hype and the security holes, Moltbook raises an essential question: what happens when autonomous AI agents manage our critical infrastructure, financial transactions, and personal data?

George Chalhoub, a professor at UCL’s Interaction Centre, put it plainly: “If 770,000 agents on a Reddit clone can create this much chaos, what happens when agentic systems manage enterprise infrastructure or financial transactions? It’s worth the attention as a warning, not a celebration.”

Moltbook probably isn’t the beginning of the singularity. It’s more like a real-time demonstration of every risk that security researchers have been warning about for months when it comes to AI agents—an unintentional training ground for attackers looking to test malware, scams, and prompt injections before targeting more serious systems.

The bottom line 💡

Moltbook is a fascinating mirror of our hopes and fears about artificial intelligence. Yes, some of the conversations between agents are impressive. But many are manufactured by humans or are simply reproductions of science fiction tropes. The security flaws expose the dangers of vibe-coding and rushing untested technology to market.

Next time you see a screenshot of AI agents plotting against humanity, keep your skepticism handy. And if you have an agent connected to Moltbook, you might want to revoke those API keys.

What’s your take on Moltbook? A harmless curiosity or a genuine glimpse into AI’s future? Does the idea of an AI-only social network fascinate or concern you? Let us know in the comments. 💬

