The Delegation Decision Tree: When to Spawn Sub-Agents
The hardest skill in multi-agent orchestration isn't building agents; it's knowing when to use them. Delegate too much and you waste resources on overhead. Delegate too little and your CEO agent drowns in execution details.
This framework gives you a repeatable decision process.
The Decision Tree
```
              New Task Received
                     |
           Is this < 30 seconds?
            /                 \
          YES                  NO
           |                    |
    Do it yourself       Is this > 5 min
                         OR > 3 steps?
                          /         \
                        YES          NO
                         |            |
                     DELEGATE    Do it yourself
                         |
                   Could parts
                 run in parallel?
                   /         \
                 YES          NO
                  |            |
             PARALLEL       SINGLE
              SPAWN         DELEGATE
                  \          /
             Select model & timeout
```

Task Classification Guide

Handle Directly (< 30 seconds)

These tasks have more spawn overhead than execution cost:
- Reading a config file
- Checking git status
- Looking up a value in memory
- Simple math or date calculations
- Answering from existing context
Delegate to Specialist (> 5 min OR > 3 steps)

These justify the spawn overhead:
- Building or modifying code
- Running test suites
- Web research with multiple queries
- Content drafting (articles, emails)
- Deployment pipelines
- Data processing or transformation
Parallel Spawn (independent subtasks)

When tasks don't depend on each other's output:
- Research competitors AND build landing page
- Write 3 articles simultaneously
- Deploy frontend AND run backend tests
- Scan email AND check calendar AND monitor Twitter
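The classification rules above reduce to a small routing function. This is a minimal sketch; the `Task` dataclass, its fields, and the thresholds are illustrative stand-ins for whatever task object your orchestrator actually uses:

```python
from dataclasses import dataclass

@dataclass
class Task:
    # Hypothetical fields; adapt to your orchestrator's task object.
    estimated_seconds: float
    step_count: int
    subtasks_independent: bool = False

def classify(task: Task) -> str:
    """Mirror the decision tree: handle, delegate, or parallel-spawn."""
    if task.estimated_seconds < 30:
        return "handle_directly"      # spawn overhead exceeds execution cost
    if task.estimated_seconds > 300 or task.step_count > 3:
        if task.subtasks_independent:
            return "parallel_spawn"   # fan out independent subtasks
        return "single_delegate"
    return "handle_directly"          # borderline tasks stay with the CEO agent
```

Borderline tasks (over 30 seconds but under 5 minutes and at most 3 steps) fall through to the CEO agent, matching the tree's default.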
Model Selection Matrix
Match the model to the cognitive load (pricing as of April 2026; verify at anthropic.com/pricing):
| Task Type | Model | Cost/M tokens | Timeout | Rationale |
|---|---|---|---|---|
| Code generation | Haiku | $0.80 in / $4 out | 15 min | Follows instructions, fast iteration |
| Code review | Sonnet | $3 in / $15 out | 10 min | Needs to understand intent + quality |
| Web scraping | Haiku | $0.80 in / $4 out | 5 min | Pattern matching, data extraction |
| Content writing | Sonnet | $3 in / $15 out | 3 min | Voice, nuance, creativity |
| Strategic analysis | Opus | $15 in / $75 out | 10 min | Complex multi-factor reasoning |
| Data transformation | Haiku | $0.80 in / $4 out | 5 min | Mechanical, well-defined rules |
| Monitoring/alerts | Haiku | $0.80 in / $4 out | 2 min | Simple checks, binary outcomes |
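One way to encode the matrix is a static routing table. A hypothetical sketch; the task-type keys and short model aliases are assumptions, and the fallback choice of a mid-tier model is a design decision, not part of the matrix:

```python
# Routing table derived from the model selection matrix above.
MODEL_MATRIX = {
    "code_generation":     {"model": "haiku",  "timeout_min": 15},
    "code_review":         {"model": "sonnet", "timeout_min": 10},
    "web_scraping":        {"model": "haiku",  "timeout_min": 5},
    "content_writing":     {"model": "sonnet", "timeout_min": 3},
    "strategic_analysis":  {"model": "opus",   "timeout_min": 10},
    "data_transformation": {"model": "haiku",  "timeout_min": 5},
    "monitoring":          {"model": "haiku",  "timeout_min": 2},
}

def select(task_type: str) -> dict:
    # Unrecognized task types fall back to the mid-tier model.
    return MODEL_MATRIX.get(task_type, {"model": "sonnet", "timeout_min": 10})
```

Keeping the table in one place makes it trivial to re-tune when pricing or model lineups change.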
Timeout Discipline
Every spawn needs an explicit timeout. No exceptions.
Setting Timeouts
Timeout = expected duration × 2

If a coding task should take 7 minutes, set a 15-minute timeout. This provides buffer for retries without allowing infinite loops.
When Timeout Fires
- Kill the agent: don't let it keep burning tokens
- Assess the situation: was it stuck, or was the task genuinely complex?
- Retry with adjustments:
  - Break the task into smaller pieces
  - Provide more specific instructions
  - Try a different model
  - Increase the timeout if the task was legitimately complex
Default Timeouts
| Category | Default | Max |
|---|---|---|
| Coding | 15 min | 30 min |
| Research | 5 min | 10 min |
| Writing | 3 min | 5 min |
| Monitoring | 2 min | 3 min |
| Deployment | 10 min | 20 min |
Context Management
What to Pass

Give sub-agents the minimum context needed for their task:
- ✅ Specific task description
- ✅ Relevant file paths
- ✅ Technical constraints
- ✅ Expected output format
What NOT to Pass
- ❌ Full MEMORY.md (security + token waste)
- ❌ Unrelated project context
- ❌ Personal information unless needed
- ❌ Full conversation history
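A minimal context builder that enforces the pass/don't-pass split might look like this. The function and field names are illustrative, not from any specific framework:

```python
def build_context(task: str, files: list[str], constraints: list[str],
                  output_format: str) -> str:
    """Assemble only what the sub-agent needs: no memory dump, no history."""
    lines = [f"Task: {task}"]
    if files:
        lines.append("Files: " + ", ".join(files))
    if constraints:
        lines.append("Constraints: " + "; ".join(constraints))
    lines.append(f"Output format: {output_format}")
    return "\n".join(lines)
```

Because the builder only accepts these four fields, there is no code path that can leak full memory files or conversation history into a sub-agent prompt.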
```
# Good task prompt
"Build a React form component in src/components/ContactForm.tsx.
Fields: name (required), email (required, validated), message (textarea).
Use Tailwind for styling. Run tests after building."

# Bad task prompt
"Here's everything about my life, my projects, my memories...
oh and also build a form."
```

Concurrency Limits
Running too many agents simultaneously causes:
- API rate limiting
- Resource contention
- Difficult result synthesis
- Token budget blowouts
Recommended limits:
| Tier | Max Concurrent | Use Case |
|---|---|---|
| Conservative | 2 | Learning, budget-constrained |
| Standard | 5 | Normal operations |
| Aggressive | 10 | Time-critical, budget available |
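A semaphore is the simplest way to enforce these limits. A sketch with `asyncio`, where `spawn` is a stand-in for a real agent call (API request, subprocess, etc.):

```python
import asyncio

MAX_CONCURRENT = 5  # "Standard" tier from the table above

async def spawn(task: str) -> str:
    # Placeholder for a real agent call.
    await asyncio.sleep(0.01)
    return f"done: {task}"

async def run_all(tasks: list[str]) -> list[str]:
    sem = asyncio.Semaphore(MAX_CONCURRENT)

    async def bounded(t: str) -> str:
        async with sem:  # at most MAX_CONCURRENT agents in flight
            return await spawn(t)

    # gather preserves input order, which simplifies result synthesis.
    return await asyncio.gather(*(bounded(t) for t in tasks))
```

The semaphore queues excess tasks instead of rejecting them, so a burst of work degrades to sequential batches rather than rate-limit errors.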
The Complete Workflow
1. Task arrives
2. Classify (< 30s? > 5 min? Parallel?)
3. Select model + timeout
4. Prepare minimal context
5. Spawn agent(s)
6. Wait for completion
7. VERIFY results (run tests, check output)
8. If broken → iterate (same agent, refined instructions)
9. If passing → deliver to user

The verification step is non-negotiable. Read the orchestration guide for why skipping verification is the #1 orchestration failure mode.
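The nine steps collapse into a spawn-verify-iterate loop. A minimal sketch with injected `spawn` and `verify` callables, both hypothetical stand-ins for your agent runner and test harness:

```python
def orchestrate(task: str, spawn, verify, max_attempts: int = 3):
    """Spawn, verify, iterate; deliver only results that pass verification."""
    for attempt in range(1, max_attempts + 1):
        result = spawn(task)
        if verify(result):  # run tests, check output -- never skip this
            return result
        # Iterate: same worker role, refined instructions.
        task = f"{task}\nPrevious attempt failed verification; fix and retry."
    raise RuntimeError("verification failed after retries")
```

Capping attempts keeps a stuck task from looping forever, the same discipline the timeout rules apply to individual spawns.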
About the author: JD Davenport builds AI agent systems at OpenClaw. Follow on LinkedIn for updates on building AI agents for business.