The execution boundary
When you run `nanny run`, Nanny becomes the parent process of your agent.
It reads `[start].cmd` from `nanny.toml`, spawns the command as a child, and owns the process lifecycle: it decides when the process lives and when it dies.
The moment any limit is crossed, Nanny kills the child process immediately; the process cannot catch, delay, or prevent the stop. An `ExecutionStopped` event is emitted with the reason, and Nanny exits with a non-zero status code.
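For concreteness, a minimal `nanny.toml` might look like the sketch below. Only `[start].cmd` is named above, so the `[limits]` section name, the key spellings, and the values are illustrative assumptions based on the three limits Nanny enforces.

```toml
[start]
cmd = "python agent.py"   # the child process Nanny spawns and owns

[limits]                  # section name and key spellings are illustrative
timeout = 300             # wall-clock seconds before the process is killed
steps = 50                # maximum step count (requires an SDK)
cost = 2.50               # cost budget (requires an SDK)
```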
Multi-agent governance
When multiple agents run in the same process — as in CrewAI, LangGraph, AutoGen, or any framework that orchestrates agents within a single Python or Rust runtime — the enforcement model above applies to all of them simultaneously. A single `nanny run` governs the entire fleet.
Each agent activates its own named limit set via `@agent("role")`. Tool calls from any agent flow through the same enforcement bridge. Each agent's budget is tracked independently — hitting the analysis budget does not kill the reporter.
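The independence of per-agent budgets can be sketched as a runnable toy model. This is not the real SDK: the `agent` decorator, the budget values, and the `tool_call` helper are stand-ins invented here to show how spend accumulated per role lets one agent exhaust its budget without affecting another.

```python
# Illustrative per-role budgets (values are made up for this sketch).
budgets = {"analysis": 1.00, "reporter": 5.00}
spent = {role: 0.0 for role in budgets}

class BudgetExceeded(Exception):
    pass

def agent(role):
    """Toy stand-in for @agent("role"): binds a function to a named limit set."""
    def decorate(fn):
        def wrapper(*args, **kwargs):
            return fn(role, *args, **kwargs)
        return wrapper
    return decorate

def tool_call(role, cost):
    # Every agent's tool calls pass through the same enforcement point,
    # but spend is accumulated per role.
    if spent[role] + cost > budgets[role]:
        raise BudgetExceeded(role)
    spent[role] += cost

@agent("analysis")
def analyse(role):
    tool_call(role, 0.60)
    tool_call(role, 0.60)  # second call pushes analysis past its 1.00 budget

@agent("reporter")
def report(role):
    tool_call(role, 0.50)

try:
    analyse()
except BudgetExceeded as exc:
    stopped = str(exc)     # "analysis"

report()                   # the reporter is unaffected by the analysis stop
```

Exceeding the analysis budget raises only for the analysis role; the reporter's calls still go through.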
Scope: All agents in the diagram run within one process. This covers any framework that orchestrates agents in a single runtime. Cross-process and cross-machine fleet enforcement is the v0.2.0 cloud layer.
What Nanny enforces
All three limits are enforced on every run:

| Limit | Requirement | Behaviour |
|---|---|---|
| `timeout` | None — works for any process | Killed when wall-clock time exceeds the configured value |
| `steps` | Rust SDK or Python SDK | Killed when step count reaches the configured limit |
| `cost` | Rust SDK or Python SDK | Killed when accumulated cost reaches the configured budget |
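The timeout row is the one limit that needs no SDK, because the parent process can enforce it from outside. A minimal sketch of that supervisor pattern, using a 2-second limit and a deliberately long-running child as stand-ins:

```python
import subprocess
import sys

TIMEOUT_SECONDS = 2  # stand-in for a configured wall-clock limit

# Spawn the governed command as a child process, as a parent supervisor would.
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
try:
    child.wait(timeout=TIMEOUT_SECONDS)
    exit_code = child.returncode
except subprocess.TimeoutExpired:
    child.kill()   # SIGKILL: the child cannot catch, delay, or prevent it
    child.wait()
    print("ExecutionStopped: reason=timeout")
    exit_code = 1  # the supervisor exits non-zero after a forced stop
```

Because the kill happens in the parent, this works for any child process, regardless of language or instrumentation.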
Passthrough mode
When running outside `nanny run`, every macro becomes a no-op: the code executes normally, with no enforcement applied. The same code can then be launched under `nanny run` without changes; its behaviour is identical either way.
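One way such a passthrough can behave is sketched below. The detection mechanism is an assumption — the `NANNY_ACTIVE` environment-variable name is invented for this sketch — but the key property is real: outside `nanny run` the decorator returns the original function untouched.

```python
import os

def agent(role):
    """Sketch of a passthrough decorator. Outside `nanny run` it is a pure
    no-op, so the decorated function behaves exactly as if undecorated."""
    def decorate(fn):
        # NANNY_ACTIVE is a hypothetical marker; the real detection
        # mechanism is not specified in this document.
        if not os.environ.get("NANNY_ACTIVE"):
            return fn  # no-op path: zero overhead, identical behaviour
        def wrapper(*args, **kwargs):
            # Under nanny run, calls would be reported for enforcement here.
            return fn(*args, **kwargs)
        return wrapper
    return decorate

@agent("analysis")
def step():
    return "ok"

result = step()  # runs normally whether or not enforcement is active
```

Returning `fn` directly on the no-op path means the passthrough costs nothing at call time, not even a wrapper frame.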