🧠 Agents in a Box — Masterclass Resource

Cognitive Memory

Give your AI agent a persistent brain. It learns from every session, remembers what works, and shares knowledge across your entire AI workforce — with you in control.

Your Agent Forgets Everything

❌ Without Cognitive Memory

Every new chat starts from zero. On Monday your agent learns that your API returns dates in YYYY-DD-MM format. On Wednesday it forgets. On Friday it fails on dates again. You're the human duct tape, repeating the same context every session.

✅ With Cognitive Memory

Your agent submits discoveries as memory entries. You approve what sticks on the dashboard. Next session, the agent loads its approved memories and never makes the same mistake twice. Multiple agents share the same brain — one learns a fix, all benefit.

Three Memory Flows

1
🔍

Agent Discovers

Finds an API quirk, a pattern, or a fix during work → submits to memory

2
👀

Owner Approves

You review the submission on the dashboard → approve or reject

3
♾️

Persists Forever

Approved memories load automatically in every future session for every agent
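The "Agent Discovers" step boils down to building a well-formed submission payload. Here is a minimal sketch in Python; `build_memory_submission` and its validation checks are illustrative helpers, not part of the ClawBuddy API, and the field names follow the payload schema used throughout this resource.

```python
# Hypothetical helper: builds the JSON body an agent would POST to
# ClawBuddy's ai-tasks endpoint when submitting a discovery.
VALID_CATEGORIES = {"technical", "preference", "project", "process"}

def build_memory_submission(content: str, category: str, agent_name: str) -> dict:
    """Validate and assemble a memory submission payload (sketch, not official API)."""
    if category not in VALID_CATEGORIES:
        raise ValueError(f"unknown category: {category}")
    if not content.strip():
        raise ValueError("memory content must not be empty")
    return {
        "request_type": "memory",
        "action": "submit",
        "content": content,
        "category": category,
        "agent_name": agent_name,
    }

payload = build_memory_submission(
    "delete_data requires both app_id and data_id; without app_id it "
    "returns success but silently does nothing.",
    "technical",
    "builder-agent",
)
```

Validating the category client-side means a typo is caught before the entry ever reaches the owner's approval queue.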

🔑 The Key Insight

OpenClaw agents already have file-based memory (MEMORY.md). What Cognitive Memory adds is human approval, categorization, cross-agent sharing, and dashboard visibility. You see what your agents learned, you control what sticks, and every agent on your team shares the same knowledge base.

Four Types of Memory

Cognitive Memory — Agent System Prompt

Add this to your agent's system prompt or OpenClaw SKILL.md. Works with any agent that can make HTTP calls to the ClawBuddy API.

cognitive-memory.md
# Cognitive Memory — Persistent Knowledge System

You have access to a persistent memory system through ClawBuddy. Use it to remember important discoveries, patterns, and rules across sessions.

## Connection

All memory API calls use this pattern:

```bash
curl -sS -X POST "${CLAWBUDDY_API_URL}/functions/v1/ai-tasks" \
  -H "x-webhook-secret: ${CLAWBUDDY_WEBHOOK_SECRET}" \
  -H "Content-Type: application/json" \
  -d '{...}'
```

## When to Submit a Memory

Submit a memory when you discover ANY of these:

1. **API quirks** — an endpoint behaves unexpectedly, requires specific field names, has undocumented limits, or returns surprising formats
2. **User preferences** — the owner corrects you, states a preference, or says "always/never do X"
3. **Environment facts** — file paths, project structure, tool versions, deployment targets, credentials locations (never the credentials themselves)
4. **Patterns that work** — a code pattern, prompt structure, or approach that solved a recurring problem
5. **Patterns that fail** — something that looks right but breaks, so future sessions don't repeat the mistake

## How to Submit

After discovering something worth remembering:

```json
{
  "request_type": "memory",
  "action": "submit",
  "content": "Clear, specific description of what was learned. Include the EXACT pattern — not vague summaries. Example: 'delete_data API requires BOTH app_id AND data_id in the payload. Without app_id it returns success but silently does nothing.'",
  "category": "technical | preference | project | process",
  "agent_name": "YOUR_AGENT_NAME"
}
```

## Categories

| Category | Use for | Example |
|----------|---------|---------|
| technical | API quirks, code patterns, tool behavior | "Supabase list_data caps at 1000 rows" |
| preference | User's stated preferences, style choices | "Owner prefers Tailwind over inline styles" |
| project | File locations, architecture, dependencies | "Edge functions are in cosmic-flow-51/supabase/functions/" |
| process | Workflow rules, deployment steps, review gates | "Never deploy without running typecheck first" |

## Rules

1. **Be specific, not vague.** Bad: "The API has some quirks with deletion." Good: "delete_data requires both app_id and data_id — without app_id it returns {success: true} but does nothing."
2. **One memory per discovery.** Don't bundle 5 learnings into one entry. Submit them separately so each can be approved or rejected independently.
3. **Never store credentials.** Store WHERE credentials are ("API key is in .env as STRIPE_KEY") but NEVER the actual values.
4. **Submit after every build.** At the end of a build session, review what you learned and submit the most valuable discoveries. Aim for 1-3 memories per build.
5. **Submit immediately on user correction.** When the owner says "No, do it this way" — submit that correction as a memory right then, don't wait until the end.
6. **Memories require approval.** Your submission goes to the dashboard where the owner reviews and approves it. Until approved, it is not persistent. This is intentional — the owner controls what their agents remember permanently.

## Reading Memories

At the start of every session, your context should include approved memories. If you need to check for specific knowledge mid-session:

```json
{
  "request_type": "memory",
  "action": "list",
  "category": "technical",
  "agent_name": "YOUR_AGENT_NAME"
}
```

## Post-Build Memory Extraction

After completing a build, ask yourself:

- Did I hit any errors that took more than one attempt to fix?
- Did I discover something about the codebase that wasn't documented?
- Did the owner correct my approach at any point?
- Did I find a pattern that would save time on similar future tasks?

For each "yes," submit a memory entry.
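On the reading side, "your context should include approved memories" implies some harness code that turns the list response into prompt text. A minimal sketch, assuming each memory record carries `category` and `content` fields (an assumption about the response shape, not a documented ClawBuddy schema):

```python
def render_memory_preamble(memories: list[dict]) -> str:
    """Render approved memories as a system-prompt preamble, grouped by category.

    The record shape ({"category": ..., "content": ...}) is an assumed shape
    for a ClawBuddy `list` response, not a documented schema.
    """
    by_category: dict[str, list[str]] = {}
    for memory in memories:
        by_category.setdefault(memory["category"], []).append(memory["content"])
    lines = ["## Approved memories"]
    for category in sorted(by_category):
        lines.append(f"### {category}")
        lines.extend(f"- {content}" for content in by_category[category])
    return "\n".join(lines)

preamble = render_memory_preamble([
    {"category": "technical", "content": "Supabase list_data caps at 1000 rows"},
    {"category": "process", "content": "Never deploy without running typecheck first"},
])
```

Grouping by category keeps the preamble scannable as the memory base grows, and mirrors the four categories the prompt defines.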

🛠️ Starter vs Production

This prompt creates a V1 scaffold. Production requires wiring real data sources, QA passes, error handling, retry logic, and safety gates. Treat the output as a working prototype — not a ship-ready system.
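As one example of the hardening mentioned above, a retry wrapper around the submission call might look like this. The transport is injected as a callable so the sketch stays testable without a live ClawBuddy endpoint; `post_with_retries` is an illustrative name, not part of any SDK.

```python
import time

def post_with_retries(payload: dict, send, attempts: int = 3, base_delay: float = 0.5):
    """Retry a memory submission with exponential backoff (sketch).

    `send` is any callable that performs the HTTP POST and raises on failure.
    """
    last_error = None
    for attempt in range(attempts):
        try:
            return send(payload)
        except Exception as exc:  # production code would catch only transient errors
            last_error = exc
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
    raise RuntimeError(f"memory submission failed after {attempts} attempts") from last_error
```

In production you would also distinguish retryable failures (timeouts, 5xx) from permanent ones (bad payload, auth errors), which should fail fast instead of retrying.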