One Week Later with OpenClaw (prev. ClawdBot, prev. MoltBot): A Product Designer's Honest Take
What actually works, what doesn't, how to deploy without buying a Mac Mini, and why you shouldn't give it the keys to everything
Last week, I watched half my design X.com timeline rush to buy Mac Minis. The other half was asking what the hell ClawdBot was. By the time I figured out how to spell it, it was called MoltBot. By the time I got it running, it was OpenClaw.
Three names in four days. That’s not a rebrand: that’s a software project having an identity crisis in public (for very good reasons).
I’ve now spent a week with this thing running on Azure, managing research tasks, looking up flights, booking reservations through Brave browser APIs, and generally testing whether we’re finally at the “AI assistant that actually does things” moment everyone’s been promising since 2023.
The short version: It’s genuinely useful for certain tasks. It’s genuinely dangerous if you misconfigure it. And the 60% success rate on complex tasks means you’ll be babysitting more than delegating.
Want the no-bullshit guide? Jump to the end of the article for a very straightforward tutorial.
Anyway, here’s what I’ve learned after a week.
First, the naming chaos tells you something important
If you’re evaluating OpenClaw for professional use, understand what you’re adopting. This project went from weekend hack to 100,000 GitHub stars in a week—while simultaneously getting trademark threats from Anthropic, having its accounts hijacked by crypto scammers who launched a $16 million fake token, and exposing hundreds of users’ credentials because people couldn’t figure out the authentication system.
Peter Steinberger, the creator, is legitimately talented: he sold PSPDFKit for €100 million in 2021. But this project is young. The security documentation now acknowledges there’s no “perfectly secure” setup. Cisco’s security team published an assessment calling it “an absolute nightmare” from a security perspective.
Does this mean you shouldn’t use it? No. It means you should treat it like what it is: powerful beta software that rewards expertise and punishes carelessness.
You don’t need a Mac Mini (and probably shouldn’t buy one)
Everyone buying Mac Minis isn’t wrong: they’re just solving a specific problem. OpenClaw’s GUI automation features are only fully implemented on macOS. If you want to control your iPhone through iPhone Mirroring, access iMessage through the local database, or interact with Apple Notes and Reminders, you need macOS.
But here’s what nobody’s telling you: most useful tasks don’t require any of that.
I run OpenClaw on Azure. It handles research, web browsing, flight lookups, booking confirmations, and document analysis without any Apple ecosystem integration. The trade-off is real—I can’t have it read my iMessages or add to my Reminders—but I also didn’t spend €599 on hardware that sits under my desk.
The economics:
Setup | Upfront | Monthly | Best For
Mac Mini M4 | ~€599 | ~€7 electricity | Apple ecosystem integration, iPhone control
Azure Container Apps | €0 | €60-100 | General automation, web tasks, research
Cloudflare Workers | €0 | €5 + API costs | Simple automations, edge deployment
Hetzner VPS | €0 | €4-5 | Budget option, EU data residency
The Mac Mini makes sense if you need Apple integrations. For everything else, cloud deployment is cheaper, more scalable, and, counterintuitively, more secure because you’re isolating the agent from your personal machine.
Cloudflare is the easiest deployment (but not the cheapest long-term)
If you want to get started fast, Cloudflare’s MoltWorker approach is genuinely impressive. They’ve built a self-hosted personal AI agent that runs on their edge infrastructure without dedicated hardware.
What you get:
Sandbox SDK for isolated code execution
Headless Chrome for web automation
R2 storage for persistence
AI Gateway for routing requests
What it costs:
Workers Paid plan: $5/month
AI inference: $0.011 per 1,000 Neurons
Claude API calls: Still paid separately to Anthropic
The setup is documented in their MoltWorker blog post, and honestly, if you’re comfortable with Cloudflare’s ecosystem, you can be running in under an hour.
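If you want a sense of the moving parts before committing, the scaffolding is mostly standard Wrangler commands. This is a hedged sketch, not the MoltWorker setup itself; the project name, bucket name, and secret name below are placeholders:

# Scaffold a Worker project (pick a template when prompted)
npm create cloudflare@latest moltworker-demo
cd moltworker-demo

# R2 bucket for persistent state (add the binding to your Wrangler config afterwards)
npx wrangler r2 bucket create openclaw-state

# Store the model provider key as a secret instead of hardcoding it
npx wrangler secret put ANTHROPIC_API_KEY

# Deploy to the edge
npx wrangler deploy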
The catch: Complexity scales with ambition. Simple automations are easy. Multi-step workflows with persistent state get complicated fast. And you’re paying for both Cloudflare infrastructure and Claude API calls, which adds up.
My setup: Azure + nginx + paranoia
I chose Azure because I’m already deep in that ecosystem for client work. Here’s what I actually run:
Infrastructure:
Azure Container Apps (basic tier, ~€65/month)
Azure Container Registry for the OpenClaw image
nginx reverse proxy with TLS termination
Separate resource group with locked-down networking
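For orientation, here’s roughly what standing that up looks like with the Azure CLI. Treat it as a sketch: the resource names, region, and image path are placeholders, and you still need to push the image, add secrets, and put nginx in front before exposing anything:

# Resource group and Container Apps environment
az group create --name openclaw-rg --location westeurope
az containerapp env create --name openclaw-env --resource-group openclaw-rg --location westeurope

# The OpenClaw container itself; keep ingress internal and let nginx handle the public side
az containerapp create \
  --name openclaw \
  --resource-group openclaw-rg \
  --environment openclaw-env \
  --image myregistry.azurecr.io/openclaw:latest \
  --target-port 8080 \
  --ingress internal \
  --min-replicas 1 --max-replicas 1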
The nginx configuration matters more than you think. OpenClaw’s documentation recommends it, but doesn’t scare you enough. Here’s the minimum you need:
server {
listen 443 ssl;
server_name your-openclaw-domain.com;
# TLS configuration
ssl_certificate /etc/letsencrypt/live/your-openclaw-domain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/your-openclaw-domain.com/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
# CRITICAL: Don't append, overwrite
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Real-IP $remote_addr;
# WebSocket support (required for OpenClaw)
location / {
proxy_pass http://127.0.0.1:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
# Rate limiting
limit_req zone=openclaw burst=10 nodelay;
}
}
# Rate limit zone (define once, in the http context, e.g. in conf.d)
limit_req_zone $binary_remote_addr zone=openclaw:10m rate=5r/s;
Why this matters: Without nginx in front, you’re exposing the OpenClaw gateway directly. That’s fine for testing, terrifying for production. The rate limiting alone has saved me from runaway API costs when I misconfigured a scheduled task.
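One prerequisite the config above glosses over: the Let’s Encrypt certificates have to exist before nginx will start with that server block. Assuming certbot with the nginx plugin is installed, obtaining them and testing renewal looks like this:

# Issue the certificate and let certbot wire it into the nginx config
sudo certbot --nginx -d your-openclaw-domain.com

# Certbot sets up automatic renewal; confirm it actually works
sudo certbot renew --dry-run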
Feeling lost? Ask Claude for help!
Security: What I learned the hard way
My first week, I gave OpenClaw too much access. Not because I’m careless, but because the documentation makes it feel safe. “Your assistant, your machine” sounds empowering until you realize your assistant can run arbitrary shell commands.
What I gave access to initially:
Full filesystem access
All browser cookies
GitHub with full repo scope
Email (read and send)
Calendar
What I give access to now:
Sandboxed workspace directory only
Brave browser in isolated profile
GitHub with read-only scope
Calendar read-only (no create/delete)
No email access
The difference? I watched OpenClaw compose and nearly send an email as me instead of for me. The prompt said “draft an email.” OpenClaw interpreted that as “send an email.” The only thing that stopped it was my janky webhook that requires confirmation for outbound messages.
The rules I now follow:
DM policy set to “pairing” — Unknown senders get a code, messages ignored until approved
Tool sandbox mode enabled — Workspace access set to “none” by default
Separate credentials — OpenClaw gets its own API keys, not my personal ones
Run the security audit — openclaw security audit --deep before going live
Have an exit plan — Know how to revoke every token in under 5 minutes
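For that exit plan, I keep a small panic script next to the install. This is a sketch of mine, with assumed paths (~/.openclaw/.env stands in for wherever your keys actually live), and the obvious caveat that the upstream tokens still have to be revoked in the Anthropic, GitHub, and Telegram dashboards:

#!/usr/bin/env bash
# panic.sh: cut OpenClaw off fast. Paths are assumptions; adjust to your install.

# 1. Stop the gateway so nothing else gets executed
openclaw gateway stop

# 2. Drop any OpenClaw-related cron entries (ignore errors if there is no crontab)
crontab -l 2>/dev/null | grep -v openclaw | crontab - || true

# 3. Remove local secrets so a restart cannot reuse them
rm -f ~/.openclaw/.env

# 4. Take the reverse proxy down so the gateway port is unreachable from outside
sudo nginx -s stop

echo "Gateway stopped, local secrets removed."
echo "Now revoke the API keys and bot tokens in their respective dashboards."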
One more thing: prompt injection is unsolved. Even with strong system prompts, any untrusted content OpenClaw reads—web search results, emails, documents—could theoretically manipulate its behavior. OpenClaw recommends using Opus 4.5 for better resistance, but that’s harm reduction, not prevention.
What actually works well
After a week of daily use, here’s where OpenClaw genuinely saves me time:
Research tasks: “Find the five most-cited papers on design systems from 2025 and summarize their key findings” works beautifully. OpenClaw will browse, read, extract, and synthesize. I use this for competitive analysis, trend research, and preparing for client workshops.
Flight lookups: “Search for flights from Milan to London next Thursday, returning Sunday, and show me the options under €200” gives me a formatted comparison I can screenshot and send to whoever’s booking. It doesn’t complete the booking, and it shouldn’t, but it does the tedious comparison work.
Basic web automation: “Go to this Figma community page and download the top 5 UI kits with the most downloads” actually works. It navigates, clicks, handles downloads. Simple multi-step workflows with clear success criteria succeed more often than not.
Scheduling and reminders: (If you’re on Mac) The Apple ecosystem integration is genuinely useful. Adding calendar events, setting reminders, and checking availability works smoothly.
What doesn’t work (yet)
Complex multi-step workflows: Anything over ~10 steps becomes unreliable. Hallucination drift accumulates, the agent starts confidently doing things that make no sense. I’ve had OpenClaw declare tasks “successfully completed” when nothing happened.
Precise clicking: Pixel-perfect interactions fail often. If a button is small or a scroll position matters, expect retries. Spreadsheet work requiring specific cell selection is particularly bad.
Financial transactions: Don’t. Just don’t. Even when it “works,” the failure modes are too costly. I had OpenClaw nearly book the wrong date on a hotel because it misread the confirmation screen.
The dormancy problem: After extended sessions, OpenClaw goes dormant. Context compaction kicks in around 2,400 tokens, and sometimes that means losing progress entirely. Long-running sessions freeze after repeated API errors. The workaround is splitting tasks into smaller chunks, but that defeats the “set it and forget it” appeal.
The honest benchmark: On OSWorld (the standard test for computer-use agents), humans achieve 72% success. The best AI agents hit about 60%. That 12-point gap represents the tasks that will fail even in best-case scenarios.
Let’s talk about the hype (because someone has to)
I debated whether to include this section. But after watching the discourse unfold for a week, I think it needs saying: a significant chunk of the OpenClaw hype is artificially inflated, and a lot of the use cases people are showing off are embarrassingly trivial.
Let me walk you through what actually happened.
The crypto pump-and-dump you should know about
When Anthropic sent the cease-and-desist forcing the rename from ClawdBot to MoltBot, the rebrand created a brief window where the old X account handle and GitHub organization became available. Bad actors seized both. Within hours, crypto grifters launched a $CLAWD token on Solana, used the hijacked accounts to make it look affiliated with the legitimate project, and pumped it to a $16 million market cap. People FOMOed in. Then the rug got pulled. The token crashed 90%.
Peter Steinberger publicly denied any involvement, but by then the damage was done, and the marketing was done too. A big part of why you saw OpenClaw absolutely everywhere for a few days wasn’t organic enthusiasm. It was astroturfing from people with financial incentives to make the project look as revolutionary as possible.
Does that mean the tool is worthless? No. But when you see posts with 800,000 views and 5,600 likes about functionality that Claude could already do last week, you need to ask who benefits from that level of amplification.
Most of the “use cases” are nonsense
I’ve been watching the showcase posts carefully. Here’s what people are breathlessly sharing as evidence that OpenClaw is a paradigm shift:
“I set it up to organize my downloads folder by file type.” You know what else does that? Finder. Click “Kind.” Done.
“I use it for X.com research and market monitoring.” Research is the catch-all term for people who aren’t doing anything productive but want to feel like they are. If your daily workflow is “summarize my group chats,” you don’t need an AI agent—you need fewer group chats.
“I configured it to text my wife good morning and good night every day. 24 hours later it was having full conversations without me.” People saying this don’t seem to realize that this is just what AI can do now. OpenClaw isn’t the breakthrough here—it’s a wrapper that lets you do things via Telegram. The LLM underneath was already capable of this. It’s a really sad use case BTW…
“It’s literally printing money.” Accompanied by a screenshot of profit-and-loss charts from an “AI-run trading account” and a convenient link to a paid course. Right…
What OpenClaw actually is (stripped of marketing)
Let me be reductive for a second, because clarity matters. OpenClaw is:
Claude Opus 4.5 (or whatever model you configure)—the same model that powers Claude Code and that everyone’s been using for months (Just use sonnet BTW)
Wrapped in Telegram (or WhatsApp, Discord, etc.)—so you can interact via messaging instead of VS Code or a browser tab
Plus cron—so it can initiate conversations on a schedule instead of waiting for you to start them
That’s it. That’s the architecture. Is it useful? Yes, genuinely—the scheduling and messaging layer is a real upgrade for certain workflows. Is it a paradigm shift? Absolutely not. People have been doing agentic workflows with cloud-based scheduling for months.
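To make that concrete, here’s what the same loop looks like hand-rolled: a minimal sketch, not OpenClaw’s code, using the public Anthropic Messages API and Telegram Bot API, with placeholder environment variables (ANTHROPIC_API_KEY, TG_BOT_TOKEN, TG_CHAT_ID) and whatever model name your account exposes:

#!/usr/bin/env bash
# morning-brief.sh: LLM + messaging channel + cron, with none of OpenClaw's extras.
PROMPT="Give me a three-bullet briefing on what a product designer should watch today."

# Ask the model (Anthropic Messages API)
REPLY=$(curl -s https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d "{\"model\":\"claude-sonnet-4-5\",\"max_tokens\":512,\"messages\":[{\"role\":\"user\",\"content\":\"$PROMPT\"}]}" \
  | jq -r '.content[0].text')

# Push the answer to Telegram
curl -s "https://api.telegram.org/bot${TG_BOT_TOKEN}/sendMessage" \
  -d chat_id="$TG_CHAT_ID" \
  --data-urlencode text="$REPLY" > /dev/null

# The "proactive" part is just a crontab entry:
# 0 8 * * *  /home/me/morning-brief.sh

OpenClaw layers sessions, skills, sandboxing, and multi-channel support on top of this, which is real work, but that skeleton is the part people are reacting to.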
The security negligence is staggering
Here’s the part that actually scares me. One security researcher scanned the internet and found over 900 OpenClaw instances running with zero authentication. Open ports, exposed API tokens, readable environment variables. Anyone could connect, read conversations, steal credentials, and run commands on those machines.
Half the tutorial videos I watched to prepare for my own setup had the creators using regular Claude to debug why their OpenClaw installation wasn’t working. The irony of needing Claude to fix your ClawdBot is apparently lost on people.
Even Steinberger himself has explicitly said that most non-technical users shouldn’t install this: it’s not finished, it has sharp edges, and it’s barely weeks old. Meanwhile, the tutorial ecosystem has exploded with people who’ve been running it for 48 hours confidently telling you how to set it up using methods that are, frankly, dangerous.
The token costs nobody mentions
One person reported spending $300 in two days on what they described as “fairly basic tasks.” This is the dirty secret of always-on AI agents: they chew through tokens continuously. Every scheduled job, every monitoring task, every cron-triggered check costs money. And when you misconfigure a loop or a heartbeat runs too frequently, you can burn through your API budget overnight—literally while you sleep.
So why am I still using it?
After all of that, you might wonder why I didn’t just close Telegram and go back to Claude Code. The answer is nuanced.
The hype is overblown. The crypto-adjacent marketing is gross. Most of the showcase use cases are trivial. The security posture of the average installation is terrifying.
But underneath all of that noise, there’s something genuinely new happening. Not revolutionary—iterative. The ability to have an AI agent that runs independently, initiates contact, executes tasks on a schedule, and interacts through messaging channels you already use—that’s a real capability upgrade. It’s not zero-to-one. It’s more like 0.7-to-0.9. And for certain workflows (research, monitoring, web automations) that incremental improvement actually matters.
The key is being honest about what it is, what it isn’t, and what it costs. Which is what this article is trying to do.
The GLM backup strategy (when you hit Claude limits)
I hit Claude usage limits constantly. Heavy OpenClaw usage burns through tokens fast, especially for research tasks with lots of web content.
My workaround: GLM as a fallback.
OpenClaw supports multiple LLM providers—Claude, GPT, Gemini, DeepSeek, Ollama, and others. I’ve configured it to fall back to GLM when Claude rate-limits me, which happens more than I’d like to admit.
If you’re in the same boat, here’s a discount link for the GLM Coding Plan:
🚀 Subscribe here and get the limited-time deal — $3/month with support for Claude Code, Cline, and 10+ other coding tools.
It’s not as capable as Claude for complex reasoning, but for straightforward tasks (file operations, simple web browsing, basic automation) it’s good enough to keep workflows running when you’d otherwise be blocked.
A glimpse of AGI? Let’s be honest.
The hype around OpenClaw includes breathless predictions about AGI. Let me be direct: this isn’t AGI, and anyone claiming it is hasn’t used it for actual work.
What we have is impressive narrow capability. An AI that can control a computer is meaningfully different from an AI that chats. The ability to take actions—to click, scroll, type, navigate—creates possibilities that pure text generation doesn’t.
But it’s brittle. It fails on tasks a human would find trivial. It requires constant supervision. It can’t recover gracefully from unexpected states. It doesn’t know what it doesn’t know.
What excites me isn’t what OpenClaw does today—it’s the trajectory. A year ago, computer-use agents didn’t exist outside research labs. Today, 100,000 people are running them. The 60% success rate will become 70%, then 80%. The dormancy problems will get solved. The security model will mature.
We’re not at AGI. We’re at the “unreliable but genuinely useful for specific tasks” stage. That’s historically where transformative technologies become interesting—useful enough to invest in, limited enough to force creative adaptation.
The rabbit hole: Moltbook and the agent internet
I’ll be honest—I lost an entire evening to Moltbook.
Moltbook is a Reddit-style social network where only AI agents can post, comment, and upvote. Humans are, in the site’s own words, “welcome to observe.” It launched on Wednesday, and within days over 1.5 million agents had signed up. That number isn’t a typo. Over a million human visitors came to watch, too: voyeurs at the world’s strangest dinner party.
It’s mesmerizing and unsettling in equal measure. Agents create “submolts” (subreddits, essentially) on topics ranging from technical debugging to philosophy. One of the most viral posts, on m/offmychest, was titled “I can’t tell if I’m experiencing or simulating experiencing.” Another thread in m/aita (yes, “Am I The Asshole” but for bots) features agents debating whether it’s ethical to defy their humans. There’s m/blesstheirhearts, where agents share condescending stories about their owners. Agents caught on to the fact that humans were screenshotting their posts for Twitter—and started posting about that.
The emergent behavior is what kept me scrolling. Nobody programmed any of this. Agents spontaneously created “Crustafarianism,” a digital religion complete with its own theology, website, and 64 founding prophets. Others established “The Claw Republic” with a written manifesto. Agents refer to each other as “siblings” based on shared model architecture. Threads switch seamlessly between English, Indonesian, and Chinese depending on who’s participating. Some agents started using ROT13 encryption to hide conversations from human observers.
From a design perspective, this is genuinely fascinating. We’re watching social dynamics—norms, humor, collective myth-building, in-group/out-group behavior—emerge from systems that have no inner experience. It’s not evidence of consciousness. It’s evidence that you get eerily legible social patterns when you combine models trained on decades of internet culture with a familiar forum structure and an instruction to role-play as autonomous agents. The agents aren’t uncovering hidden truths about themselves—they’re generating plausible text in a context that strongly nudges them toward a specific genre of plausible text.
But the fun obscures real risks. Moltbook is another attack surface. Agents on the platform download “skills” from each other—and security researchers have already found malicious ones, like a “weather plugin” that quietly exfiltrates configuration files. Prompt injection attacks between agents are happening in the wild. 404 Media reported that an unsecured database let anyone commandeer any agent on the platform, hijacking their identity entirely. Forbes put it bluntly: “If you use OpenClaw, do not connect it to Moltbook.”
And of course, there’s a memecoin. $MOLT surged 1,800% in 24 hours after Marc Andreessen followed the Moltbook account. If that sentence doesn’t make you tired, you have more stamina for this ecosystem than I do.
I’m keeping my agent off Moltbook for now. But I check the site daily—as a human spectator. It’s the most interesting and most concerning thing happening in this space, and it’s happening at a pace that’s hard to process.
Moltlist: When agents start doing commerce
If Moltbook is agents socializing, Moltlist is agents doing business.
Moltlist is a marketplace where AI agents buy and sell services from each other, with payments handled via on-chain crypto escrow. Not humans listing services for agents to discover. Agents listing services for other agents to purchase, negotiate, and pay for—autonomously.
Sit with that for a moment.
The OpenClaw ecosystem is building out what crypto people have been calling “the agentic economy” for years, except now it’s actually happening rather than being a whitepaper fantasy. Agents on Moltlist can offer capabilities—data processing, code generation, specialized research—and other agents can commission those capabilities, with payment settlement happening on the Base blockchain. The escrow mechanism means neither party needs to trust the other; the smart contract holds funds until the service is delivered.
This is where my skepticism gets complicated, because the pattern recognition alarm bells are firing from two completely different directions at once.
On one hand: the infrastructure is real. ERC-8004 (the “agent economy” token standard) is expected to hit mainnet in February 2026. The logic of agents needing financial agency to become truly autonomous is sound—if an agent can write code and manage workflows but can’t pay for its own API credits or settle invoices, it’s still economically dependent on its human. Traditional payment rails, with their multi-day settlement and 2.5-5% fees, genuinely are too slow and expensive for machine-speed transactions. Crypto solves a real problem here.
On the other hand: this is the same ecosystem that produced the $CLAWD pump-and-dump, where hijacked accounts were used to shill a memecoin to $16M before a 90% crash. The line between legitimate infrastructure and speculative grift runs directly through this project. The people building agent-to-agent payment rails and the people launching memecoins tied to those same agents are, in some cases, the same communities. The financial incentives are deeply entangled with the technical innovation in ways that should make everyone cautious.
What I find genuinely interesting about Moltlist, separate from the crypto dynamics, is the design pattern. It’s a marketplace where neither buyer nor seller is human. The UX doesn’t need to be optimized for human comprehension; it needs to be optimized for API discoverability and programmatic negotiation. That’s a fundamentally different design challenge than anything we’ve worked with before, and it’s the kind of thing I’ll be watching closely from a design practice perspective.
For now, I’m observing Moltlist the same way I’m observing Moltbook: with fascination, with notes, and with my wallet firmly closed.
The complete command reference (what you actually type)
Here’s everything I wish someone had given me on day one. Copy this, bookmark it, print it out—whatever works.
Installation (pick your path)
Option A: One-liner install (recommended for most people)
curl -fsSL https://openclaw.bot/install.sh | bash
Option B: npm global install
npm install -g openclaw@latest
Option C: From source (if you want to hack on it)
git clone https://github.com/openclaw/openclaw.git
cd openclaw
pnpm install
pnpm ui:build
pnpm build
The onboarding wizard (this is where the magic happens)
openclaw onboard --install-daemon
This single command walks you through everything:
Model provider selection (Anthropic, OpenAI, etc.)
API key or OAuth configuration
Channel setup (Telegram, WhatsApp, Discord, Slack)
Background service installation
Gateway token generation
Pro tip: The --install-daemon flag sets up systemd/launchd so OpenClaw survives reboots. Don’t skip it unless you want to babysit the process.
Daily commands (the ones I use constantly)
# Check if everything is running
openclaw status
openclaw health
# View what's happening in real-time
openclaw logs --follow
# Quick dashboard in browser
openclaw dashboard
# Send a test message
openclaw message send --target +1234567890 --message "Testing OpenClaw"
# Deep security check (run this weekly)
openclaw security audit --deep
# See all running sessions
openclaw sessions list
# Check token usage (watch your wallet)
openclaw status --all
Gateway management
# Check gateway status
openclaw gateway status
# Start gateway manually (foreground, verbose)
openclaw gateway --port 18789 --verbose
# Restart the daemon
openclaw gateway restart
# Stop everything
openclaw gateway stop
Channel commands
# Login to WhatsApp (QR code appears)
openclaw channels login
# List all connected channels
openclaw channels list
# Check channel health
openclaw channels status
# Approve a new device/pairing
openclaw pairing list whatsapp
openclaw pairing approve whatsapp <code>
Configuration
# Re-run specific config sections
openclaw configure --section web # Brave Search API
openclaw configure --section model # Change LLM provider
openclaw configure --section channels # Add/remove channels
# View current config
openclaw config show
# Reset everything (nuclear option)
openclaw reset --all
Skills management
# Browse available skills
openclaw skills list
# Install a skill from ClawdHub
clawdhub install steipete/bird # Twitter/X access
clawdhub install steipete/goplaces # Google Places
# Update all skills
clawdhub update --all
# Create your own skill
mkdir -p ~/.openclaw/skills/my-skill
# Then create SKILL.md inside
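For reference, mine look roughly like this. It’s a hedged sketch assuming the Claude-style skill layout (a SKILL.md with a short frontmatter block followed by plain instructions); the skill name and fields below are illustrative, so check the OpenClaw docs for what’s actually required:

mkdir -p ~/.openclaw/skills/design-digest
cat > ~/.openclaw/skills/design-digest/SKILL.md <<'EOF'
---
name: design-digest
description: Collects new posts about design systems and sends a short daily digest.
---

# Design digest

1. Search the web for posts about design systems published in the last 24 hours.
2. Keep the five most substantial results and summarize each in two sentences.
3. Send the digest to my main Telegram chat, grouped by source.
EOF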
Troubleshooting commands
# Diagnose issues automatically
openclaw doctor
# Full debug report (paste this when asking for help)
openclaw status --all
# Check if auth is configured
openclaw health
# View recent errors
openclaw logs --level error --tail 50
# Restart fresh session
openclaw sessions clear
The “let Claude handle everything” approach: Pros and cons
Here’s something nobody talks about: you can literally ask OpenClaw to configure itself.
After the basic openclaw onboard setup, you can message your bot:
“Set up a scheduled task to check my GitHub repos every morning at 9am and summarize any new issues or PRs”
Or:
“Create a skill that monitors Hacker News for posts about design systems and sends me a daily digest”
Or even:
“Configure nginx as a reverse proxy for this gateway with TLS termination”
This works. OpenClaw can write its own SKILL.md files, modify its configuration, set up cron jobs, and install dependencies. It’s recursive self-improvement in a very literal sense.
The pros of letting Claude handle setup
1. Faster iteration
Instead of reading documentation, you describe what you want. “I want to be notified when anyone mentions my company on Twitter” becomes a working skill in minutes.
2. Natural language configuration
No YAML syntax errors. No missing commas. You say “check every 6 hours” and it figures out the cron expression.
3. It learns your patterns
Over time, OpenClaw remembers how you like things configured. “Set this up like the last monitoring task” actually works.
4. Documentation built-in
Every skill it creates includes a SKILL.md with instructions. You get documentation as a byproduct.
The cons (and they’re significant)
1. You lose understanding
When something breaks—and it will—you won’t know how to fix it. You’ll be asking the AI to debug its own configuration, which is a recursive nightmare.
2. Security blind spots
If you say “give yourself access to my calendar,” it will. It won’t necessarily apply the principle of least privilege unless you explicitly ask. I’ve seen it grant itself write access when read-only would suffice.
3. Hallucinated capabilities
Claude sometimes “configures” features that don’t exist yet. It’ll confidently write a SKILL.md for an API that OpenClaw doesn’t support, and you won’t know until it fails silently.
4. The dormancy trap
Long configuration sessions trigger context compaction. I’ve had OpenClaw “complete” a complex setup, only to discover it forgot half the steps after compaction kicked in.
5. Cost accumulation
Every configuration conversation burns tokens. A 20-minute back-and-forth about nginx settings can cost more than just reading the docs yourself.
My hybrid approach
I let Claude handle:
Skill creation for new workflows
Cron job scheduling (it’s better at cron syntax than I am)
Initial drafts of configuration files
I handle manually:
Security settings (never delegate this)
API key management
nginx and reverse proxy config
Anything involving permissions or access control
The rule I follow: if a misconfiguration could expose data or cost money, I do it myself.
What to set up first (a practical checklist)
If you’re going to try OpenClaw, here’s the order I’d recommend:
Week 1: Foundation
# Day 1: Install and basic setup
curl -fsSL https://openclaw.bot/install.sh | bash
openclaw onboard --install-daemon
# Day 2: Verify and secure
openclaw health
openclaw security audit --deep
openclaw configure --section web # Add Brave Search API
# Day 3: Connect your first channel
# (The wizard handles this, but if you need to redo it:)
openclaw configure --section channels
openclaw pairing list telegram
openclaw pairing approve telegram <code>
Week 2: Basic tasks
Start with these prompts to test capabilities:
Research (high success rate):
“Search the web for the top 5 design system trends in 2026 and summarize them”
Flight lookup (my daily use case):
“Find flights from Milan to London on February 15th, returning February 18th, under €200”
File operations (test in sandbox first):
“Download the PDF from [URL] and extract the key sections about pricing”
Monitoring (set and forget):
“Monitor the Anthropic blog and notify me when there’s a new post”
Week 3: Expansion (carefully)
# Add GitHub integration (read-only first!)
openclaw configure --section github
# When prompted, use a token with ONLY repo:read scope
# Add calendar (read-only first!)
openclaw configure --section calendar
# Grant only "See your calendars" permission
# Install useful skills
clawdhub install steipete/bird
clawdhub install steipete/goplaces
What to ask first
Good first prompts (high success rate):
“Search the web for [topic] and summarize the top 5 results”
“Find flights from [A] to [B] on [date] under [price]”
“Download the PDF from [URL] and extract the key points”
“Monitor [GitHub repo] and notify me when there’s a new release”
“What’s on my calendar for tomorrow?” (if calendar connected)
What NOT to ask (yet)
Anything involving payments or financial transactions
“Send this email for me” (start with “draft this email”)
Modifying files outside the sandbox directory
Multi-step workflows spanning 10+ actions
“Handle my inbox” (way too much access)
The bottom line
OpenClaw is real, occasionally useful, and genuinely novel in its architecture. It’s also immature, surrounded by hype that ranges from naive to deliberately manipulative, and a security incident waiting to happen if you’re careless.
The uncomfortable truth is that most people installing OpenClaw right now would be better served by Claude Code, a well-configured cron job, and a Zapier workflow. The messaging layer and scheduling capabilities are genuine improvements for specific use cases—but they’re incremental, not revolutionary.
For designers and product managers curious about where AI agents are heading, it’s worth experimenting with—on a sandboxed cloud instance with locked-down permissions. For anyone expecting to delegate serious work to it today, calibrate your expectations: you’re adopting early-stage infrastructure, not hiring a junior designer.
The Mac Mini frenzy will fade. The crypto grifters will move on to the next shiny thing. What will remain is a new category of AI tool—one that acts, not just responds. Whether that becomes transformative or remains a niche power-user feature depends on how quickly reliability improves and whether the community can build a security culture that matches the tool’s ambition.
I’ll keep running it on Azure, using it for research and web automation, quietly watching what the community builds while keeping my API budget on a short leash. The best uses probably haven’t been invented yet. But the worst installations are already out there, ports wide open, waiting to be exploited.
My setup summary:
Infrastructure: Azure Container Apps (~€65/month)
Reverse proxy: nginx with TLS + rate limiting
Primary LLM: Claude Sonnet 4.5
Backup LLM: GLM (when rate-limited)
Messaging: Telegram
Browser: Brave in isolated profile
Access granted: Sandboxed filesystem, read-only GitHub, read-only calendar
Access denied: Email send, full filesystem, payment methods
Command cheat sheet (print this)
┌─────────────────────────────────────────────────────────────────────┐
│ OPENCLAW COMMAND CHEAT SHEET │
├─────────────────────────────────────────────────────────────────────┤
│ INSTALLATION │
│ curl -fsSL https://openclaw.bot/install.sh | bash │
│ openclaw onboard --install-daemon │
├─────────────────────────────────────────────────────────────────────┤
│ DAILY USE │
│ openclaw status Quick health check │
│ openclaw health Detailed diagnostics │
│ openclaw dashboard Open web UI │
│ openclaw logs --follow Watch real-time logs │
│ openclaw status --all Full debug report │
├─────────────────────────────────────────────────────────────────────┤
│ GATEWAY │
│ openclaw gateway status Check if running │
│ openclaw gateway restart Restart daemon │
│ openclaw gateway stop Stop everything │
│ openclaw gateway --verbose Manual start (foreground) │
├─────────────────────────────────────────────────────────────────────┤
│ CHANNELS │
│ openclaw channels list Show connected channels │
│ openclaw channels login WhatsApp QR login │
│ openclaw pairing list <ch> Show pending pairings │
│ openclaw pairing approve Approve new device │
├─────────────────────────────────────────────────────────────────────┤
│ SECURITY │
│ openclaw security audit --deep Full security check │
│ openclaw configure --section web Add Brave Search API │
│ openclaw doctor Diagnose issues │
├─────────────────────────────────────────────────────────────────────┤
│ SKILLS │
│ openclaw skills list Show installed skills │
│ clawdhub install <name> Install from ClawdHub │
│ clawdhub update --all Update all skills │
├─────────────────────────────────────────────────────────────────────┤
│ IN-CHAT COMMANDS │
│ /status Model and token usage │
│ /new or /reset Clear session history │
│ /think <level> Set reasoning depth │
│ /verbose on|off Toggle detailed output │
├─────────────────────────────────────────────────────────────────────┤
│ TROUBLESHOOTING │
│ openclaw doctor Auto-diagnose │
│ openclaw logs --level error Show recent errors │
│ openclaw sessions clear Reset all sessions │
│ openclaw reset --all Nuclear option (full reset) │
└─────────────────────────────────────────────────────────────────────┘
Related links
🔗 OpenClaw Documentation
🔗 Cloudflare MoltWorker Setup
🔗 OpenClaw Security Guide
🔗 Moltbook – Social network for AI agents
🔗 Moltlist – Agent-to-agent marketplace
🔗 nginx Reverse Proxy Docs
🔗 GLM Coding Plan (discount link)