
The rise of OpenClaw and the dangerous promise of autonomous AI agents 🦞


In just three weeks, a small open-source side project shattered records on GitHub. OpenClaw — the digital lobster everyone suddenly can’t stop talking about — has also become a source of deep concern for cybersecurity professionals. The pitch is deceptively simple: a virtual assistant that doesn’t just answer questions, but actually acts on your machine. It can send emails, manage your calendar, browse the web, execute system commands — all from WhatsApp or Telegram. More than 182,000 developers have starred the project, drawn by the promise of a truly autonomous AI agent.

But behind the technical wizardry lies what security experts are calling a worst-case scenario.

A lobster that almost had another name 🦞

OpenClaw’s story begins in late 2025 with Austrian developer Peter Steinberger, who simply wanted to chat with Anthropic’s Claude via WhatsApp. The project was first called Warelay, then Clawdbot — a nod to Claude and a lobster mascot.

Anthropic wasn’t amused. Citing trademark concerns over the similarity to “Claude,” the company filed a complaint.

On January 27, 2026, the project was renamed Moltbot (a reference to a lobster shedding its shell). Three days later, it changed again: OpenClaw.

Meanwhile, entrepreneur Matt Schlicht launched Moltbook, a social network where only AI agents can post — humans can only watch. Within 48 hours, 1.5 million AI agents had joined. The viral concept, combined with OpenClaw’s real-world capabilities, lit up X, TikTok, and Reddit. The GitHub repository jumped from 9,000 to 60,000 stars in just 72 hours — an all-time record.

An AI that actually does things ⚡

OpenClaw isn’t just another chatbot. It’s a bridge that connects virtually any AI model — Claude, GPT-4, Gemini, or even free local models — directly to your machine and apps.

You install a 24/7 background service that hooks into messaging platforms like WhatsApp, Telegram, Discord, or Slack.
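The bridge pattern described above can be sketched in a few lines of Python. This is a minimal illustration, not OpenClaw's actual code: `fake_model` is a hypothetical stand-in for a real LLM API call, and the point it makes is that whatever text the model returns gets executed with the user's full privileges.

```python
import subprocess

def fake_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call.

    A real bridge would send `prompt` to Claude, GPT-4, Gemini,
    or a local model and get back a proposed action.
    """
    return "echo hello"

def handle_message(text: str) -> str:
    """One message round-trip: chat text in, model-chosen command executed."""
    command = fake_model(text)
    # The crux of the security debate: model output runs as a real shell
    # command with the user's full privileges on the host machine.
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout.strip()
```

A real deployment would wrap `handle_message` in a WhatsApp, Telegram, Discord, or Slack webhook handler running around the clock.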

Here’s what makes it different:

Deep system access. OpenClaw can execute commands, read and write files, control your browser, and access your email, calendar, Notion, GitHub, and more. It supports 100+ native integrations and over 5,700 plugins.

Persistent memory. It stores your preferences locally and “learns” over time. Some users describe it as a digital coworker that truly understands their workflow.

Proactivity. This is where it gets impressive — and unsettling. Every few minutes, OpenClaw checks your emails, calendar, and files, then decides for itself whether action is needed. Users have configured it to auto-schedule meals, monitor projects overnight, and even fill out administrative forms — without human intervention.

This isn’t AI as a tool. It’s AI as an operator.
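The proactive behavior those three traits add up to boils down to a poll-and-decide cycle. The sketch below is purely illustrative: `check_sources` and `decide` are hypothetical stand-ins for the real integrations and the model call.

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str   # e.g. "email", "calendar", "files"
    summary: str

def check_sources() -> list[Event]:
    """Hypothetical poll of the user's email, calendar, and files."""
    return [Event("email", "Invoice due tomorrow")]

def decide(events: list[Event]) -> list[str]:
    """Stand-in for the model call that judges whether action is needed."""
    return [f"draft reminder for: {e.summary}"
            for e in events if "due" in e.summary.lower()]

def agent_tick() -> list[str]:
    """One iteration: poll the sources, let the model propose actions."""
    return decide(check_sources())

# A deployed agent would repeat this every few minutes, e.g.:
#   while True: act_on(agent_tick()); time.sleep(300)
```

The unsettling part is the last step, omitted here: acting on the proposals without asking first.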

The security nightmare experts can’t ignore ⚠️

And here’s where things get serious. OpenClaw has near-total access to your machine, is powered by non-deterministic AI models, and relies on a largely unvetted plugin ecosystem.

Cisco tested ClawHub’s top-ranked plugin and found nine vulnerabilities — two of them critical. Across 31,000 analyzed “skills,” 26% contained at least one vulnerability. Cisco’s conclusion: “From a security perspective, this is an absolute nightmare.”

The numbers are stark: more than 135,000 instances exposed online, over 50,000 of them vulnerable to remote code execution, and 341 malicious skills discovered, disguised as legitimate tools but secretly installing data stealers.

One OpenClaw maintainer put it bluntly: “If you don’t understand how to use the command line, this project is far too dangerous for you.”

When AI hallucinations come with root access 🤯

With OpenClaw, a hallucination isn’t just a wrong answer — it can trigger real-world actions on your machine.

Gen Digital has described this new class of risk as “mindless AI”: an autonomous agent that hallucinates authority can cause tangible damage.

User reports are already circulating. One person watched their assistant escalate a dispute with their insurance provider. Another accidentally blasted 500+ automated messages to their contacts.

Tests across 15+ models reveal what some are calling the “OpenClaw paradox”: the models powerful enough to enable autonomous agents are also too unstable to make them reliably safe.

Massive buzz — but how many real users? 📊

On paper, the metrics are staggering: 182,000 GitHub stars, 900+ contributors, 2 million website visits in a single week.

But CNBC notes that actual active usage numbers remain unclear. A Hacker News thread titled “Ask HN: Any real OpenClaw users?” surfaced surprisingly few daily users outside hardcore technical circles.

Cost is a major barrier. The software itself is free — but AI model APIs charge per token. Initial setup can cost around $250 in API credits. Daily use with Claude Opus runs $10–$25 per day, or roughly $300–$750 per month. One journalist reportedly spent $3,600 in a single month. A German magazine burned through $100 in one day of testing. For many, that’s a steep price for experimentation.
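The monthly range follows directly from the daily one. A quick sanity check, assuming 30 days of continuous use:

```python
low_per_day, high_per_day = 10, 25   # reported Claude Opus daily cost, USD
days_per_month = 30

low_monthly = low_per_day * days_per_month    # 300
high_monthly = high_per_day * days_per_month  # 750
print(f"${low_monthly}–${high_monthly} per month")
```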

What does this mean for Africa? 🌍

For African tech ecosystems, OpenClaw presents a double-edged sword.

On one hand, it’s surprisingly aligned with local realities: it runs through WhatsApp and Telegram, works on affordable hardware, and allows full local data control. Using free models, it can theoretically run at minimal cost.

On the other hand, the barriers are real: complex installation, need for stable connectivity, and no integrations with African payment systems like M-Pesa, MTN MoMo, or Orange Money.

For many users, a $20/month ChatGPT or Claude subscription remains simpler — and far safer.

The future of AI agents is being written right now 🔮

OpenClaw isn’t going away. The community is actively working on security improvements: a partnership with VirusTotal to scan plugins, patches for critical flaws, and plans for public audits. Still, one maintainer admits: “Prompt injection remains an unsolved, industry-wide problem.”

The broader implications go beyond OpenClaw itself. The EU AI Act takes effect in August 2026, and Gartner predicts 90% of enterprises will adopt AI agents within three years.

OpenClaw demonstrates that open source communities can build systems rivaling Big Tech’s most ambitious agent frameworks. Its power is undeniable: deep automation, broad integrations, data sovereignty.

But the risks scale with that power. For African tech ecosystems — and beyond — OpenClaw is both an opportunity and a warning. The democratization of agentic AI must not outpace the democratization of cybersecurity. Before handing your digital keys to a space lobster, make sure you understand exactly what it’s doing with them.

Would you be willing to entrust your digital life to an autonomous AI agent? Or do you think the risks outweigh the benefits? Tell us what you think in the comments! 💬


