Documentation Index
Fetch the complete documentation index at: https://docs.nanny.run/llms.txt
Use this file to discover all available pages before exploring further.
The execution boundary
When you run nanny run, Nanny becomes the parent process of your agent.
It reads [start].cmd from nanny.toml, spawns it as a child, and owns the process lifecycle — it decides when the process lives and when it dies.
The moment any limit is crossed, Nanny kills the child process immediately — the process cannot catch, delay, or prevent the stop. An ExecutionStopped event is emitted with the reason, and Nanny exits with a non-zero status code.
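The lifecycle above is driven entirely by configuration. As a rough sketch of what a nanny.toml might look like: only [start].cmd is confirmed by the text here; the [limits] table and its key names are assumptions for illustration, not documented syntax.

```toml
# Hypothetical nanny.toml sketch. [start].cmd is from the docs above;
# the [limits] table and its key names are illustrative assumptions.
[start]
cmd = "python agent.py"   # spawned as a child of nanny run

[limits]
timeout = "300s"          # wall-clock limit (assumed key name)
steps = 50                # step budget (assumed key name)
cost = 2.50               # cost budget (assumed key name)
```

Whatever the exact schema, the crossing of any configured limit leads to the same outcome: the child is killed, an ExecutionStopped event is emitted, and Nanny exits non-zero.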
Multi-agent governance
When multiple agents run in the same process — as in CrewAI, LangGraph, AutoGen, or any framework that orchestrates agents within a single Python or Rust runtime — the enforcement model above applies to all of them simultaneously. A single nanny run governs the entire fleet.
Each agent activates its own named limit set via @agent("role"). Tool calls from any agent flow through Nanny’s enforcement layer. Each agent’s budget is tracked independently — hitting the analysis budget does not kill the reporter.
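The independent per-agent accounting can be modelled in a few lines of plain Python. This is a sketch of the bookkeeping, not the Nanny SDK; the `Ledger` class and `BudgetExceeded` exception are invented for illustration.

```python
# Minimal model of independent per-agent budgets: exhausting one
# agent's budget stops that agent only. Names are illustrative,
# not the Nanny SDK.
class BudgetExceeded(Exception):
    pass

class Ledger:
    def __init__(self, budgets):
        self.budgets = dict(budgets)                   # role -> budget
        self.spent = {role: 0.0 for role in budgets}   # role -> spent so far

    def charge(self, role, cost):
        self.spent[role] += cost
        if self.spent[role] > self.budgets[role]:
            raise BudgetExceeded(role)

ledger = Ledger({"analysis": 1.0, "reporter": 5.0})
ledger.charge("analysis", 0.6)
ledger.charge("reporter", 0.5)
try:
    ledger.charge("analysis", 0.6)   # exceeds only the analysis budget
except BudgetExceeded:
    pass
# the reporter's ledger is untouched and can keep spending
```

The point of the sketch is the isolation: each role has its own row in the ledger, so one budget tripping never mutates another agent's state.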
For cross-process and cross-machine enforcement, use the governance server.
What Nanny enforces
All three limits are enforced on every run:

| Limit | Requirement | Behaviour |
|---|---|---|
| timeout | None — works for any process | Killed when wall-clock time exceeds the configured value |
| steps | Rust SDK or Python SDK | Killed when step count reaches the configured limit |
| cost | Rust SDK or Python SDK | Killed when accumulated cost reaches the configured budget |
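The timeout row needs no SDK because it operates purely at the process level: a parent that owns the child can enforce a wall-clock limit with nothing but a kill. A rough sketch of that mechanism in Python (illustrative only, not Nanny's implementation):

```python
import subprocess
import sys

def run_with_timeout(cmd, timeout_s):
    """Spawn cmd as a child; kill it if wall-clock time is exceeded.
    Returns (exit_code, killed). Illustrative, not Nanny's code."""
    child = subprocess.Popen(cmd)
    try:
        return child.wait(timeout=timeout_s), False
    except subprocess.TimeoutExpired:
        child.kill()   # the child cannot catch, delay, or prevent this
        child.wait()
        return child.returncode, True

# A child that would run for 60 seconds is killed after 0.5 seconds
# and reports a non-zero exit status to the parent.
code, killed = run_with_timeout(
    [sys.executable, "-c", "import time; time.sleep(60)"], 0.5
)
```

This mirrors the execution-boundary guarantee above: because the parent holds the child's handle, enforcement cannot be opted out of from inside the child.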
Passthrough mode
When running outside nanny run, every macro becomes a no-op: annotated code executes normally, with or without nanny run. The behaviour is identical either way.
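Passthrough can be pictured as a decorator that checks whether it is running under nanny run and otherwise returns the function untouched. The sketch below assumes detection via an environment variable; the NANNY_RUN name and the `governed` wrapper are invented for illustration and are not the Nanny SDK.

```python
import functools
import os

def agent(role):
    """Hypothetical sketch of a passthrough macro. Outside nanny run it
    is a no-op; the NANNY_RUN detection is an assumption."""
    def decorate(fn):
        if not os.environ.get("NANNY_RUN"):
            return fn   # passthrough: the function is returned unchanged
        @functools.wraps(fn)
        def governed(*args, **kwargs):
            # under nanny run, calls would route through enforcement here
            return fn(*args, **kwargs)
        return governed
    return decorate

@agent("analysis")
def step():
    return "ok"
```

Either way the decorated function behaves identically to the undecorated one; only the presence of the enforcement layer differs.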