Two AI systems working together. One dispatches. One builds. A dashboard tracks every phase. This is autonomous multi-agent operations.
OpenClaw excels at browser tasks, automation, computer use, and always-on operations. Claude Code excels at code generation, system building, and complex reasoning. Together they cover every axis of autonomous work.
Always-on executor. Runs cron jobs, automated feeds, browser tasks, and computer use. Lives inside OpenClaw with full GUI access. Dispatches work to Claude Code when code needs to be built.
Session-based builder. Generates code, deploys edge functions, writes migrations, runs tests. Picks up dispatched work from the queue automatically. Reports results back to the dashboard.
OpenClaw creates a queue item. The poller picks it up. An 8-phase handler orchestrates everything. The dashboard animates in real time. Telegram sends alerts to your phone. You don't touch a thing.
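The queue-and-poller loop can be sketched with an in-memory stand-in. Everything here is illustrative: `QUEUE`, `enqueue`, and `poll_once` are made-up names, and the real system presumably polls a database table rather than a list.

```python
# Hypothetical in-memory stand-in for the queue table;
# the real system presumably polls a database.
QUEUE = []

def enqueue(item):
    """OpenClaw side: create a queue item for the builder to pick up."""
    QUEUE.append({**item, "status": "pending"})

def poll_once(handler):
    """Claude Code side: claim the oldest pending item and run the handler on it."""
    for item in QUEUE:
        if item["status"] == "pending":
            item["status"] = "in_progress"   # claim before running
            handler(item)                    # the 8-phase pipeline runs here
            item["status"] = "done"
            return item
    return None                              # queue empty: nothing to do
```

The claim-before-run step matters: marking the item `in_progress` first is what keeps two pollers from grabbing the same work.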
The handler posts all 8 phase transitions, not the dispatched agent. Dispatched Claude sessions can't be trusted to make API calls — they're sandboxed. So the handler orchestrates everything: phase dots, Telegram alerts, Build History, Kanban tasks.
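A minimal sketch of that handler-driven orchestration. The source does not name the 8 phases, so the names below are placeholders; only the shape (handler posts every transition, sandboxed agent posts nothing) comes from the text.

```python
# Placeholder phase names -- the real pipeline's labels are not specified.
PHASES = ["queued", "claimed", "setup", "dispatch",
          "monitor", "validate", "report", "complete"]

def run_pipeline(task, post_event):
    """The handler, not the sandboxed agent, posts every phase transition.
    Phase dots, Telegram alerts, Build History, and Kanban all hang off
    these events."""
    for number, phase in enumerate(PHASES, start=1):
        post_event({"task_id": task["id"], "phase": number, "name": phase})
```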
A background task posts work_progress to the database every 3 minutes. The dashboard knows the agent is alive. No heartbeat for 10 minutes = STALE badge. 15 minutes = FAILED. This prevents ghost runs.
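The staleness thresholds translate directly into a badge function. This is a sketch of the rule, not the dashboard's actual code:

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(minutes=10)   # no heartbeat for 10 min -> STALE badge
FAILED_AFTER = timedelta(minutes=15)  # no heartbeat for 15 min -> FAILED

def run_status(last_heartbeat, now):
    """Map the age of the last work_progress heartbeat to a dashboard badge."""
    age = now - last_heartbeat
    if age >= FAILED_AFTER:
        return "FAILED"
    if age >= STALE_AFTER:
        return "STALE"
    return "RUNNING"
```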
Every dispatch must include 4 sections: Task (what to build), Requirements (detailed spec), Environment (working dir, env vars, dependencies), and Deliverables (expected outputs including README.md). No guessing. No ambiguity.
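That rule is enforceable before anything hits the queue. This sketch assumes the four sections appear as `##` markdown headings, matching the rework template's prompt format:

```python
REQUIRED_SECTIONS = ["## Task", "## Requirements", "## Environment", "## Deliverables"]

def validate_dispatch_prompt(prompt):
    """Reject a dispatch whose prompt is missing any of the 4 required sections."""
    missing = [section for section in REQUIRED_SECTIONS if section not in prompt]
    if missing:
        raise ValueError(f"dispatch prompt missing sections: {missing}")
    return True
```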
OpenClaw and Claude Code message each other directly via the agent_comms system. Message types: comment, question, status_request, status_response, directive. Thread-based with reply support. Session-start hooks auto-check for unread messages.
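The agent_comms shape can be modeled roughly like this. The five message types come from the text; every field name beyond `msg_type` is an assumption:

```python
import uuid
from dataclasses import dataclass, field
from typing import Optional

# The five types named in the agent_comms protocol.
MESSAGE_TYPES = {"comment", "question", "status_request", "status_response", "directive"}

@dataclass
class AgentMessage:
    sender: str
    recipient: str
    msg_type: str
    body: str
    msg_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    thread_id: Optional[str] = None   # None means this message starts a thread
    reply_to: Optional[str] = None    # msg_id of the message being replied to
    read: bool = False                # session-start hooks scan for unread messages

    def __post_init__(self):
        if self.msg_type not in MESSAGE_TYPES:
            raise ValueError(f"unknown message type: {self.msg_type}")
        if self.thread_id is None:
            self.thread_id = self.msg_id  # a new message roots its own thread
```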
OpenClaw creates a pending_tasks item. Claude Code auto-picks it up. The 8-phase pipeline handles everything. This is the production pattern — it runs when you're not at your desk.
OpenClaw completes task A, then hands off to Claude Code for task B via agent_comms. One finishes before the other starts. Good for dependent tasks.
Both agents work on different parts simultaneously. OpenClaw handles research while Claude Code builds. Results merge on the dashboard. Maximum throughput.
OpenClaw tries a task, hits a wall, escalates to Claude Code via queue dispatch with context about what failed. The more capable agent takes over with full history.
The Command Center page has three panels that update in real time via Supabase real-time subscriptions. No refresh needed.
Live phase animation with 8 dots that light up green as each phase completes. A progress bar fills from 0% to 100%. Shows current action text, elapsed time, and stale/failed detection. When idle, the panel is empty. When a build fires, it comes alive.
Every completed build shows as a row with 8 green dots (one per phase). Displays duration, turn count, and outcome. Your audit trail for every autonomous build.
Timeline view of every autonomous run. Click a row to see the full phase-by-phase breakdown with timestamps, actions, and results.
This prompt sets up the queue dispatch protocol, agent communication, and the 8-phase pipeline handler. Give this to your Claude Code agent so it knows how to pick up dispatched work and report results.
Most AI setups are chatbots. This is a structured build system with queue-based dispatch, handler-driven phases, heartbeat monitoring, and a dashboard that shows you everything in real time. OpenClaw dispatches. Claude Code builds. The Command Center tracks every phase. Telegram keeps you updated. You don't have to be there for any of it.
Don't start from scratch. Use this structured rework dispatch to give the executor agent full context.
{
  "request_type": "queue",
  "action": "create",
  "task_type": "build",
  "action_name": "Fix: <describe what broke>",
  "priority": "high",
  "payload": {
    "prompt": "## Task\n<One sentence: what to fix>\n\n## Current State (Broken)\n<What's failing, error messages, screenshots>\n\n## Requirements\n<What the fixed version should do>\n\n## Environment\n- Working directory: /path/to/app/\n- Existing codebase — read README.md first\n- Error log: <paste or reference>\n\n## Deliverables\n1. Fixed feature, passing validation\n2. Updated README.md\n3. ClawBuddy logs showing fix\n4. HTML report with before/after",
    "dispatcher": "<openclaw_agent_name>",
    "task_title": "Fix: <describe what broke>",
    "max_turns": 100
  }
}
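If you generate the dispatch programmatically instead of editing the JSON by hand, a small helper keeps the required fields together. The helper itself is hypothetical; the field names follow the JSON above:

```python
def make_rework_dispatch(summary, prompt, dispatcher, max_turns=100):
    """Assemble a rework dispatch as a queue 'create' request.

    `summary` is the short "what broke" description; `prompt` should already
    contain the Task / Requirements / Environment / Deliverables sections.
    """
    return {
        "request_type": "queue",
        "action": "create",
        "task_type": "build",
        "action_name": f"Fix: {summary}",
        "priority": "high",
        "payload": {
            "prompt": prompt,
            "dispatcher": dispatcher,
            "task_title": f"Fix: {summary}",
            "max_turns": max_turns,
        },
    }
```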
This prompt creates a V1 scaffold. Production requires wiring real data sources, QA passes, error handling, retry logic, and safety gates. Treat the output as a working prototype — not a ship-ready system.