
# Brood (Agent Clusters)

brood.yaml is the configuration file for multi-hive agent clusters. Think of it as Docker Compose for AI agents — define your agents, their roles, providers, and coordination protocols in one file.

```yaml
name: my-project
provider: cerebras/llama-3.3-70b

hives:
  main:
    acp: spec.acp.yaml
    agents:
      - role: developer
        type: worker
        tools: [read_file, write_file, shell, git_status, git_commit]
      - role: reviewer
        type: drone
```

| Field      | Description                                    |
| ---------- | ---------------------------------------------- |
| `name`     | Project name                                   |
| `provider` | Default LLM provider (`provider/model` format) |
| `hives`    | Map of hive names to hive configs              |
## Hives

Each hive supports the following fields:

| Field       | Aliases            | Description                                         |
| ----------- | ------------------ | --------------------------------------------------- |
| `acp`       | `spec`, `protocol` | ACP protocol file path, URL, or inline object       |
| `agents`    | `workers`          | Array of agent definitions                          |
| `dances`    | `logic`            | Dance file path (ESM module with tool handlers)     |
| `variables` |                    | Variable overrides for the protocol                 |
| `workspace` |                    | `real` (default) or `memfs` (in-memory filesystem)  |
| `plugins`   |                    | Additional incubator plugins to load                |
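
As an illustration, a hive that combines the optional fields might look like the sketch below. The dance file, variable, and plugin names are placeholders, and the exact value shapes for `variables` and `plugins` are assumptions (a map and a list of names, respectively).

```yaml
hives:
  main:
    acp: spec.acp.yaml
    dances: dances.js        # placeholder: ESM module exporting tool handlers
    variables:
      max_iterations: 5      # placeholder: override a variable defined in the protocol
    workspace: memfs         # use the in-memory filesystem
    plugins: [telemetry]     # placeholder plugin name; assumed to be a list
    agents:
      - role: developer
        type: worker
```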
## Agents

```yaml
agents:
  - role: developer                     # Must match a role in the ACP spec
    type: worker                        # worker | drone | claude | mock
    provider: anthropic/claude-sonnet   # Override the default provider
    tools:                              # Filter available propolis tools
      - read_file
      - write_file
      - shell
    wake_on:                            # Reactive: sleep until an event arrives
      types: [task.assigned]
      timeout: 300000
```
| Field      | Description                                            |
| ---------- | ------------------------------------------------------ |
| `role`     | Role name from the ACP protocol                        |
| `type`     | Agent execution type                                   |
| `provider` | LLM provider override                                  |
| `tools`    | Tool filter (only these propolis tools are available)  |
| `wake_on`  | Event types that wake this agent                       |
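
Combining `wake_on` with a lightweight agent type gives a purely reactive participant that sleeps until one of the listed events arrives. A sketch, assuming `wake_on` applies to any agent type; the event names are placeholders for whatever your ACP protocol defines.

```yaml
- role: reviewer
  type: drone
  wake_on:
    types: [review.requested, task.completed]   # placeholder event types
    timeout: 300000
```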

### worker

In-process agent with propolis environment tools. Full filesystem access.

```yaml
- role: developer
  type: worker
  tools: [read_file, write_file, patch_file, shell, git_status, git_diff, git_commit]
```

### drone

Lightweight MCP subprocess. ACP coordination only, no filesystem access.

```yaml
- role: reviewer
  type: drone
```

### claude

Claude Code instance via the Agent SDK. Full Claude toolset.

```yaml
- role: architect
  type: claude
  provider: anthropic/claude-opus
```

### mock

Testing agent with scripted action sequences. No LLM calls.

```yaml
- role: test_bot
  type: mock
  behavior:
    - action: publish
      type: greeting
      data: { message: "Hello from mock!" }
    - action: set_state
      key: status
      value: $last.type    # Template: resolves to the previous result
```
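
A mock agent can also stand in for an LLM-backed role during tests. The sketch below assumes your protocol defines a `review.approved` event; the event type and payload are placeholders, not part of the Brood schema.

```yaml
- role: reviewer          # same role a worker or drone would normally fill
  type: mock
  behavior:
    - action: publish
      type: review.approved              # placeholder event type from your ACP protocol
      data: { comment: "Looks good." }
```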
## Providers

```yaml
# Full format
provider: cerebras/llama-3.3-70b

# Alias
provider: fast    # → cerebras/llama-3.3-70b
provider: smart   # → openai/gpt-4o
provider: local   # → ollama/llama3.3
```
| Alias   | Provider | Default model   |
| ------- | -------- | --------------- |
| `fast`  | Cerebras | `llama-3.3-70b` |
| `smart` | OpenAI   | `gpt-4o`        |
| `local` | Ollama   | `llama3.3`      |
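
A common pattern is a fast alias as the project default with a stronger model for a single agent. A sketch, assuming aliases are accepted anywhere a `provider` value is; the roles and the override model are illustrative.

```yaml
provider: fast                            # project default → cerebras/llama-3.3-70b

hives:
  main:
    acp: spec.acp.yaml
    agents:
      - role: developer
        type: worker                      # inherits the fast default
      - role: architect
        type: claude
        provider: anthropic/claude-opus   # per-agent override
```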

## Workspaces

### real (default)

Agents work on the actual filesystem. Shell commands execute directly.

### memfs

Agents get an in-memory filesystem. File operations work normally, but shell/git/PTY tools return “not available in memfs mode.” Useful for edge deployment or testing.

```yaml
hives:
  main:
    workspace: memfs
    acp: spec.acp.yaml
```
## Running a brood

```sh
# Start from brood.yaml in the current directory
wgl up

# Or specify the file
wgl up --brood=path/to/brood.yaml

# Stop all agents
wgl down
```

`wgl up` starts an incubator instance, loads the protocol, and spawns agents. The orchestrator waits for a start event before spawning them; publish it from your UI or script.

## Multi-hive example

```yaml
name: full-stack
provider: cerebras/llama-3.3-70b

hives:
  frontend:
    acp: frontend-spec.acp.yaml
    dances: frontend.js
    agents:
      - role: developer
        type: worker
        tools: [read_file, write_file, shell]
      - role: designer
        type: drone

  backend:
    acp: backend-spec.acp.yaml
    dances: backend.js
    agents:
      - role: developer
        type: worker
        tools: [read_file, write_file, shell, git_commit]
      - role: tester
        type: worker
        tools: [read_file, shell]
```

Each hive gets its own incubator instance with isolated state, events, and claims. Cross-hive communication uses ACP topics.