
Case Study: 15 Tasks in 90 Minutes

What does it look like when a CEO agent goes full-throttle? On March 27, 2026, a single orchestration session built an entire AI operating system infrastructure — 15 parallel tasks, 6 phases, ~90 minutes wall-clock time.

This is that story.

The mission: build a complete agent organization from scratch:

  • Stable foundation (cron jobs, config, projects)
  • C-suite agent workspaces (CEO, COO, CTO, CMO, CIO)
  • Knowledge hub with 13+ documentation pages
  • 5 new skills (YouTube, LinkedIn-PDF, HN, arXiv, infographics)
  • Dashboard updated and deployed
  • Notification + cost infrastructure
  • Organizational governance charter

One human. One CEO agent. Many specialists.


Phase 0 — Stabilization (~10 min)

Before building anything new, the CEO agent assessed the damage.

Problem: 18 broken cron jobs were firing every few minutes — failed agents retrying indefinitely, hammering the API, polluting logs.

Action taken:

  • Audited all active crons via openclaw cron list
  • Disabled 18 broken/stuck jobs in a single pass
  • Cleaned projects.json — removed orphaned entries, corrected status fields
  • Verified clean state before proceeding
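The audit-and-disable pass reduces to a filter over the cron list. A minimal sketch, assuming a hypothetical JSON dump with `id` and `consecutive_failures` fields (the real openclaw output format isn't shown in this write-up):

```python
import json

def find_broken_jobs(cron_dump: str, max_failures: int = 3) -> list[str]:
    """Return IDs of jobs whose consecutive failure count hits the threshold.

    Field names here are assumptions for illustration, not the real schema.
    """
    jobs = json.loads(cron_dump)
    return [j["id"] for j in jobs if j.get("consecutive_failures", 0) >= max_failures]

# Fabricated example data:
dump = json.dumps([
    {"id": "ceo-daily", "consecutive_failures": 0},
    {"id": "cmo-post", "consecutive_failures": 12},
])
print(find_broken_jobs(dump))  # ['cmo-post']
```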

Lesson: Stabilization before expansion. Launching new agents into an unstable environment multiplies chaos. The 10 minutes spent here saved 60 minutes of debugging downstream.


Phase 1 — C-Suite Build (~15 min)

Spawned: 2 parallel sub-agents

Task: Build workspace scaffolding for 5 C-suite roles:

  • ~/clawd/agents/CEO/ — strategic orchestration
  • ~/clawd/agents/COO/ — operations and scheduling
  • ~/clawd/agents/CTO/ — technical systems
  • ~/clawd/agents/CMO/ — content and marketing
  • ~/clawd/agents/CIO/ — intelligence and research

Each workspace received:

  • WORKSPACE.md — role charter, responsibilities, tools
  • config.json — model assignments, timeout rules, delegation thresholds
  • Cron job entries (9 total) for scheduled autonomous work

Model routing:

  • Routine scheduled tasks → claude-haiku-4-5 (fast, cheap)
  • Content drafting → claude-sonnet-4-6 (quality prose)
  • Strategic decisions → claude-opus-4-6 (full reasoning)
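The routing tiers above boil down to a lookup with a cheap default. A sketch (the task-kind keys are hypothetical names, not the real config schema):

```python
# Hypothetical routing table mirroring the three tiers above.
ROUTING = {
    "scheduled": "claude-haiku-4-5",   # routine cron work: fast, cheap
    "drafting":  "claude-sonnet-4-6",  # content: quality prose
    "strategic": "claude-opus-4-6",    # decisions: full reasoning
}

def pick_model(task_kind: str) -> str:
    # Unknown task kinds fall through to the cheapest tier by default.
    return ROUTING.get(task_kind, ROUTING["scheduled"])

print(pick_model("drafting"))  # claude-sonnet-4-6
```

Defaulting unknown work to the cheap tier keeps a misclassified task from silently burning Opus-level spend.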

Outcome: All 5 workspaces created, 9 crons registered, config verified in ~15 minutes total.


Phase 2 — Documentation Sprint (~25 min)


Spawned: 4 parallel sub-agents

This was the largest parallel burst of the session.

Tasks running simultaneously:

  1. 9 architecture pages (C-suite overviews, role deep-dives)
  2. 3 framework pages (LLM routing, delegation, ARDs)
  3. 5 LinkedIn posts (drafted from documentation content)
  4. Sidebar config updated in astro.config.mjs

Why this worked: Each task was fully independent. No agent needed to wait for another. The CEO agent queued all 4 and monitored completion via push-based callbacks.

Failure modes encountered:

  • One sub-agent returned a gateway timeout after 8 minutes (hit the 10-minute wall-clock limit)
  • Re-fired with identical task parameters → completed on second attempt in 4 minutes
  • Net cost: 4 minutes of rework, not a full restart
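The recovery above depends on idempotent re-firing. A minimal sketch of the pattern, assuming the sub-agent is launched as a subprocess (`cmd` is a placeholder, not the real launch command):

```python
import subprocess
import sys

def fire(cmd: list[str], timeout_s: float, retries: int = 1) -> str:
    """Run a task command; on timeout, kill it and re-fire with
    identical parameters. Safe only because the task is idempotent:
    a second run converges to the same end state as one clean run."""
    for attempt in range(retries + 1):
        try:
            done = subprocess.run(cmd, capture_output=True, text=True,
                                  timeout=timeout_s, check=True)
            return done.stdout
        except subprocess.TimeoutExpired:
            # subprocess.run kills the child on timeout before raising,
            # so the retry never races a zombie first attempt.
            if attempt == retries:
                raise
    raise RuntimeError("unreachable")

print(fire([sys.executable, "-c", "print('ok')"], timeout_s=60).strip())  # ok
```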

Total output: 13 documentation pages, 5 LinkedIn posts, updated navigation. All passing Astro build checks.


Phase 3 — Skills

Spawned: 3 parallel sub-agents (skill creation)

New skills built:

| Skill | Purpose | Model |
| --- | --- | --- |
| youtube | Fetch transcripts, summarize videos, extract insights | Haiku |
| linkedin-pdf | Parse LinkedIn profiles, extract structured data | Sonnet |
| hackernews | Scan HN front page, filter by topic, summarize | Haiku |
| arxiv | Search papers, pull abstracts, generate summaries | Haiku |
| infographics | Generate visual slides from markdown content | Sonnet |

Each skill followed the standard SKILL.md format: description, CLI reference, example usage, error handling.
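The exact SKILL.md layout isn't reproduced in this write-up; a skeleton consistent with the four sections named above might look like:

```markdown
# skill-name

## Description
One-sentence summary of what the skill does and when to use it.

## CLI reference
Commands and flags the skill exposes.

## Example usage
A worked invocation with its expected output.

## Error handling
Known failure modes and how the skill reports them.
```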

Stuck agent incident: One sub-agent entered a poll loop — checking its own status every 30 seconds. The CEO agent detected no output after 12 minutes (expected: 5 minutes) and sent a terminate signal. The root cause: the agent was waiting for a tool approval that never came.

Fix applied: Added explicit --non-interactive flag convention to skill-creation prompts. Agents shouldn’t pause for approvals mid-task.


Phase 4 — Dashboard

Handled directly by the CEO agent (not delegated — too fast to be worth the overhead)

Updated projects.json in nerve-center-v2:

  • Added 6 new entries (agent workspaces, skills, hub pages)
  • Moved 3 completed items from active → completed
  • Deployed to Vercel: npx vercel --prod --yes

Deploy time: 47 seconds. Zero manual intervention.


Phase 5 — Infrastructure

Spawned: 1 sub-agent

Tasks:

  • Notification queue skeleton (~/clawd/queue/notifications.json)
  • Cost monitoring script (~/clawd/scripts/cost-monitor.sh)
  • Daily cost cap logic: alert at $2, hard-stop at $5

Rationale: As agent count scales, cost visibility becomes critical. A single runaway cron can burn $10/day. The monitor runs every 15 minutes and posts to Telegram if thresholds are hit.
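The monitor script itself isn't shown here; its threshold logic reduces to a two-tier check (action names are illustrative, not the script's real output):

```python
def cost_action(daily_spend: float, alert_at: float = 2.0,
                hard_stop_at: float = 5.0) -> str:
    """Map the running daily spend to an action, per the $2/$5 caps above."""
    if daily_spend >= hard_stop_at:
        return "hard-stop"  # e.g. disable agent crons until a human reviews
    if daily_spend >= alert_at:
        return "alert"      # e.g. post a warning to Telegram
    return "ok"

print(cost_action(2.40))  # alert
```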


Phase 6 — Organizational Charter (~5 min)


Handled directly by CEO agent

Wrote ~/clawd/ORGANIZATION.md — a governance document defining:

  • Agent hierarchy and reporting structure
  • Escalation protocols (when sub-agents notify humans)
  • Conflict resolution (two agents want the same resource)
  • Human-in-the-loop checkpoints

By the numbers:

  • 15 tasks: executed across 6 phases, most in parallel
  • ~90 minutes: wall-clock time from start to verified completion
  • 11 sub-agents: spawned and managed by the CEO agent
  • 2 failure events: 1 gateway timeout, 1 stuck agent, both recovered automatically


The key insight: identify dependency chains, then flatten them.

Phase 0 (serial — must stabilize first)
Phase 1 (parallel: 2 agents)
Phase 2 (parallel: 4 agents — biggest burst)
Phases 3–4–5 (parallel: 4 agents)
Phase 6 (serial — synthesis, CEO handles directly)

Tasks within a phase had no dependencies on each other. Tasks between phases had hard ordering constraints (can’t document what doesn’t exist yet).

The CEO agent never ran more than 5 sub-agents simultaneously — this is the concurrency limit to avoid context saturation and rate limit collisions.
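Bounded concurrency like this is a one-liner with a fixed-size worker pool. A sketch (the cap of 5 comes from this write-up; the task representation is hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENT = 5  # cap from the case study: avoids context saturation
                    # and rate-limit collisions

def run_phase(tasks):
    """Run one phase's independent tasks with at most 5 in flight.

    pool.map preserves submission order in its results, so output
    lines up with the task list even when completion order varies.
    """
    with ThreadPoolExecutor(max_workers=MAX_CONCURRENT) as pool:
        return list(pool.map(lambda t: t(), tasks))

# Toy tasks standing in for sub-agent launches:
results = run_phase([lambda i=i: i * i for i in range(8)])
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```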


| Phase | Model Used | Estimated Cost |
| --- | --- | --- |
| Stabilization | Opus (analysis) | ~$0.15 |
| C-Suite build | Haiku (file writes) | ~$0.08 |
| Documentation | Sonnet (content) | ~$0.42 |
| Skills | Haiku + Sonnet | ~$0.22 |
| Dashboard | CEO direct (Opus) | ~$0.05 |
| Infrastructure | Haiku | ~$0.06 |
| Charter | CEO direct (Opus) | ~$0.04 |
| Total | | ~$1.02 |

$1 for an entire AI operating system infrastructure. The ROI math is straightforward.


Gateway timeout

  • Symptom: Sub-agent returns no output after 10 minutes
  • Detection: CEO agent monitors expected completion time
  • Recovery: Re-fire with identical parameters (tasks must be idempotent)
  • Prevention: Break tasks expected to run >8 min into smaller chunks

Stuck agent

  • Symptom: Agent active but producing no output, CPU flat
  • Detection: No progress after 2× expected time
  • Recovery: Terminate + re-fire with --non-interactive flag
  • Prevention: Never leave agents waiting for interactive input

Context overload

  • Symptom: Sub-agent starts hallucinating file paths or ignoring instructions
  • Detection: Output doesn’t match expected format
  • Recovery: Terminate, re-fire with a leaner prompt (strip context fat)
  • Prevention: Pass only what’s needed. Don’t send MEMORY.md to sub-agents.

  1. Stabilize before you build. Broken crons + new agents = compounding chaos.

  2. Parallel is always faster — until it isn’t. 4 agents working for 5 minutes beats 1 agent for 20 minutes. But 6 agents competing for rate limits can be slower than 4.

  3. Idempotent tasks are free. When re-firing is safe, timeouts are just delays, not disasters.

  4. The CEO agent should stay lean. Every minute the orchestrator spends writing files is a minute it’s not monitoring children. Delegate ruthlessly.

  5. Document the failures too. The gateway timeout and stuck agent aren’t embarrassing — they’re expected. The system’s value is in how it recovers, not that it never fails.

  6. Cost monitoring from day one. Adding the cost monitor in Phase 5 (not Phase 0) was a mistake. It should be infrastructure, not afterthought.


A well-orchestrated agent system doesn’t just automate tasks — it compresses calendar time. Work that would take a human developer 2-3 days (workspace setup, documentation, skills, config, deployment) happened in 90 minutes.

The bottleneck isn’t compute. It’s orchestration quality: knowing what to parallelize, how to delegate, and when to intervene.

That’s the CEO agent’s job.


This case study documents a real session run on March 27, 2026. Times and costs are approximate.