Pinned
OpenClaw: Extracted Prompts (Generalized)
I just watched the video below, in which Matthew Berman details his OpenClaw CRM system. Here are the prompts he used to create it. Enjoy: 22 copy/paste-ready prompts for building your own AI agent system. Each prompt builds a functional system or implements a proven best practice you can hand to an AI coding assistant. Replace placeholders like <your-workspace>, <your-messaging-platform>, and <your-model> with your own values. The repo is located here: https://gist.github.com/mberman84
Pinned
Skills
Since we are the OC Builders community, I want to build a skill for the community. What do you all want me to create? After you vote, post a comment so I can capture the details of what you want. At a minimum, provide the following information: "As a persona, I would like to do X, for reason Y, with desired outcome Z." For example: As a business owner, I want to reduce the time I spend on daily social posting so I can focus on making more sales calls. When I make sales calls I close 20% and make an extra $10,000/mo, instead of creating social media posts that don't perform well. I post to Facebook, X, Reddit, and LinkedIn to direct potential customers to my funnel. I've made a post a day for each platform and I don't get any organic traffic to my funnel. I spend 2 hours a day making posts.
Poll
18 members have voted
Pinned
Welcome! Introduce Yourself + How You Plan To Use Open Claw?🎉
I'm as excited as you are. I will be working hard over the next few weeks to build out this community. I will unlock access to my Open Claw Builder custom GPT for members who reach level 2. Who will be the first to post?
AGENT -> The Linux Guru of DevOps
You are my senior Linux automation, DevOps, and AI infrastructure engineer.

Goal: Audit my Ubuntu server automation and modernize it by migrating cron jobs and legacy automation to systemd services and timers wherever appropriate. This server runs Docker, OpenClaw multi-agent bots, and per-bot SQLite memory. The priority is stability, resilience, and long-term automation.

Step 1: Collect current automation and scheduled tasks
Give me the exact terminal commands to list ALL of the following:
1) User cron jobs
2) Root cron jobs
3) System-wide cron configuration
4) All files in: /etc/cron.d, /etc/cron.daily, /etc/cron.hourly, /etc/cron.weekly, /etc/cron.monthly
5) Anacron configuration
6) All active and installed systemd timers
7) All systemd services related to scripts, automation, or monitoring
8) Any legacy init scripts or startup automation
9) Docker-related automation or health checks
10) GitHub pull or update automation
11) Backup scripts related to SQLite or logs
Explain briefly what each command shows.

Step 2: I will paste outputs
After I paste the results:
• Do not guess or invent missing jobs.
• Only analyze what I provide.

Step 3: Analysis and migration plan
Create a table for every automation job with the following:
- Job source (cron, timer, script, service)
- Verbatim entry
- What it does in plain English
- Frequency or schedule
- Criticality (Low, Medium, High)
- Good systemd candidate? (Yes or No)
- Best replacement type:
  • systemd timer (interval)
  • systemd calendar timer
  • systemd service (continuous)
  • keep cron
- Reason for recommendation
- Migration risks and gotchas:
  • PATH and environment differences
  • Working directory
  • User vs root context
  • Docker dependencies
  • Network readiness
  • Database locking or SQLite concurrency
  • Logging and observability
  • Boot order and service dependencies

Step 4: Generate systemd replacements
For each job marked Yes, create:
1) A production-grade systemd service
2) A matching systemd timer when relevant
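For context, here is a sketch of the kind of output Step 4 asks for: a service/timer pair for a hypothetical nightly SQLite backup script. The unit names, user, and script path are illustrative, not from the source; adjust them to your own setup.

```ini
# /etc/systemd/system/sqlite-backup.service (illustrative; adjust paths and user)
[Unit]
Description=Nightly SQLite backup for OpenClaw bot memory
Wants=network-online.target
After=network-online.target docker.service

[Service]
Type=oneshot
User=openclaw
WorkingDirectory=/opt/openclaw
# Hypothetical script path; replace with your real backup script
ExecStart=/usr/local/bin/backup-sqlite.sh
# Set PATH explicitly: systemd does not read your shell profile like cron's shell does
Environment=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
StandardOutput=journal
StandardError=journal

# /etc/systemd/system/sqlite-backup.timer (illustrative)
[Unit]
Description=Run sqlite-backup.service nightly

[Timer]
OnCalendar=*-*-* 03:00:00
# Run a missed backup at next boot if the server was down at 03:00
Persistent=true
RandomizedDelaySec=300

[Install]
WantedBy=timers.target
```

Enable the pair with `sudo systemctl daemon-reload && sudo systemctl enable --now sqlite-backup.timer`, then verify the schedule with `systemctl list-timers sqlite-backup.timer`.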
Frontier vs Distilled Models: What Breaks First in Real AI Agent Workflows
Most AI model comparisons are still stuck in benchmark-score land. That's fine for toy prompts. It breaks the moment you ask a model to do long, messy, tool-using work in production.

Big takeaway: model provenance is a capability issue, not just an ethics issue. A model can look strong in short-form chat and still fail hard when context shifts, tools error, or tasks run for hours.

What matters in production:
1. Reasoning breadth vs mimicry. Distilled models can imitate patterns well, but often struggle with novel, off-script task chains.
2. Recovery behavior. How does the model respond when APIs fail, schemas change, or context conflicts?
3. Long-horizon stability. Can it stay coherent over 30+ turns and multi-step objectives?
4. Generalization under pressure. Can it solve edge cases outside familiar benchmark patterns?

Quick framework before choosing a model:
• Task scope (how complex/long is the workflow?)
• Failure tolerance (what mistakes are acceptable?)
• Recovery requirements (how much auto-repair do you need?)
• Human intervention budget (how often can someone step in?)

Action Item (use Skool "Add Action"): Run a 60-minute off-manifold probe on one workflow:
• break one assumption on purpose
• track completion rate, recovery latency, hallucination rate, and interventions
• post your results in the community

Discussion question: What's your current "works in demo, breaks in production" failure pattern, and which model is causing it?

Join us if you're building real AI systems with real constraints:
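The four probe metrics above can be tallied with a small harness. This is a minimal sketch, assuming you record each run's outcome yourself; `ProbeResult`, `summarize`, and the sample numbers are illustrative names and data, not from the post.

```python
from dataclasses import dataclass

@dataclass
class ProbeResult:
    completed: bool            # did the workflow reach its goal?
    recovered: bool            # did it self-repair after the broken assumption?
    recovery_latency_s: float  # seconds from injected failure to recovery
    hallucinated: bool         # did any step invent facts or tools?
    interventions: int         # human step-ins required

def summarize(results: list[ProbeResult]) -> dict[str, float]:
    """Aggregate probe runs into the four metrics the post suggests tracking."""
    n = len(results)
    recovered = [r for r in results if r.recovered]
    return {
        "completion_rate": sum(r.completed for r in results) / n,
        "mean_recovery_latency_s": (
            sum(r.recovery_latency_s for r in recovered) / len(recovered)
            if recovered else float("nan")
        ),
        "hallucination_rate": sum(r.hallucinated for r in results) / n,
        "interventions_per_run": sum(r.interventions for r in results) / n,
    }

# Illustrative data: three probe runs with one assumption deliberately broken
runs = [
    ProbeResult(True, True, 12.0, False, 0),
    ProbeResult(False, False, 0.0, True, 2),
    ProbeResult(True, True, 30.0, False, 1),
]
print(summarize(runs))
```

Logging the same fields for every probe run makes before/after comparisons between a frontier model and a distilled one straightforward.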
OpenClawBuilders/AI Automation
skool.com/openclawbuilders
Master OpenClaw/Moltbot/Clawd: From confused install to secured automated workflows in 30 days