Agents

An agent in zenflow is a named LLM configuration. You declare agents once at the top of a workflow and reference them from steps. Each step picks an agent; the executor runs the step's instructions through that agent's model, prompt, and tool set.

Agents are workflow-scoped. They are not standalone services; they exist for the duration of a RunFlow call.

Declaring agents

```yaml
agents:
  planner:
    description: "Technical lead who creates implementation plans."
    prompt: "@prompts/planner.md"
    model: "bedrock/anthropic.claude-sonnet-4-6"
    tools: [bash, read, write]
    disallowedTools: [bash]
    maxTurns: 10
    temperature: 0.3
```

The agents: block is a map keyed by agent name; those keys are the strings that steps reference in their agent: field.
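
For example, a step selects the planner agent declared above by its key (the step id and instructions here are illustrative):

```yaml
steps:
  - id: plan-feature
    agent: planner          # must match a key under agents:
    instructions: "Draft an implementation plan for the feature."
```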

Field reference

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| description | string | Yes | Human-readable role summary. The coordinator and the goal-decomposition LLM read this. |
| prompt | string | No | System prompt. Use @path/to/file.md to load from disk (relative to the workflow file). |
| model | string | No | Model identifier. Bare names (claude-sonnet-4-6, auto-routed) and provider-prefixed forms (bedrock/..., azure/..., google/..., vertex/...) work. The CLI does not accept the anthropic/ prefix; use bedrock/anthropic.claude-... or azure/claude-... instead. |
| tools | array of strings | No | Tool allowlist. Omit to allow every registered tool; list explicit names to restrict. Wildcards are not supported. |
| disallowedTools | array of strings | No | Tool denylist applied after the allowlist. |
| maxTurns | integer | No | Maximum tool-call turns the agent may take in one step. Defaults to 50 when omitted. Hitting the cap returns AgentStatusTruncated and the step fails with ErrAgentTurnLimitExceeded. |
| temperature | number | No | Sampling temperature in [0, 2]. |
| topP | number | No | Nucleus sampling parameter in [0, 1]. |
| resultSchema | object | No | JSON Schema for the agent's structured result channel. See Structured output. |

The full schema and validation rules live in YAML reference: Agent.
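
As a sketch, an agent that must return a structured verdict might declare a resultSchema like this (the field names are illustrative; the exact validation semantics are in Structured output and the YAML reference):

```yaml
agents:
  reviewer:
    description: "Reviews a plan and returns a structured verdict."
    resultSchema:
      type: object
      properties:
        approved: { type: boolean }
        comments: { type: string }
      required: [approved]
```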

Step-level overrides

A step may override the agent's model:

```yaml
steps:
  - id: think-hard
    agent: planner
    model: "claude-opus-4-6-thinking"
    instructions: "Reason step by step."
```

step.model takes precedence over agent.model for that step only. Other agent fields cannot be overridden per step; if you need a different prompt or tool set, declare another agent.
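
Since only model can vary per step, a different prompt or tool set means a second agent declaration. A sketch with two agents sharing a prompt file but differing in tools (paths and descriptions are illustrative):

```yaml
agents:
  planner:
    description: "Creates implementation plans; read-only."
    prompt: "@prompts/planner.md"
    tools: [read]
  planner-with-shell:
    description: "Creates plans and may run commands."
    prompt: "@prompts/planner.md"
    tools: [read, bash]
```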

Tool filtering

The effective tool set for an agent is (allowlist) - (denylist):

  • tools omitted: every tool the orchestrator was constructed with via WithTools is available (zenflow's executor exposes the orchestrator-registered tools when no tools field is set).
  • tools: [a, b, c]: only those names.
  • disallowedTools: [bash]: removed from the resolved allowlist.
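
Concretely, combining both fields (assuming bash, read, and write are registered with the orchestrator):

```yaml
agents:
  writer:
    description: "Edits files but may not run commands."
    tools: [bash, read, write]   # allowlist resolves to {bash, read, write}
    disallowedTools: [bash]      # denylist removes bash
    # effective tool set: {read, write}
```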

When the agent's resultSchema is set, the executor auto-injects a submit_result tool on top of the resolved set. send_message is auto-injected on every step runner that has a MessageRouter and is not the coordinator itself (the coordinator is detected by the presence of forward_to_agent in the runner's tool list). Step runners that already provide their own send_message tool keep it; it is not overwritten.

Implicit (default) agents

A step can omit agent:. Zenflow uses a default agent (the orchestrator-level WithModel LLM, with the orchestrator's WithTools catalogue). The minimal valid workflow is one step with no agents block at all:

```yaml
name: minimal
steps:
  - id: greet
    instructions: "Say hello."
```

Use this for quick scripts. Once you need different roles, prompts, or tool subsets, declare named agents.

Provider matrix

zenflow runs on top of goai. Any provider goai supports works - zenflow does not bind to a specific vendor. The model identifier you put in model: is passed through to goai's resolver:

| Provider | Identifier examples | goai entry point |
| --- | --- | --- |
| Google | gemini-3-pro-preview, gemini-2.5-pro | goai/provider/google |
| AWS Bedrock | anthropic.claude-sonnet-4-6, minimax.minimax-m2.5 | goai/provider/bedrock |
| Azure (AI Services) | DeepSeek-V3.2, grok-2, Llama-3.1-405B | goai/provider/azure |
| Azure (OpenAI) | gpt-5, gpt-5.3-codex, o3-mini | goai/provider/azure |
| Azure (Anthropic) | claude-sonnet-4-6 | goai/provider/azure |
| Vertex AI | claude-3-5-sonnet@vertex | goai/provider/vertex |
| Anthropic direct | claude-sonnet-4-6 (bare names dispatched via library code) | goai/provider/anthropic (library only); use zenflow.WithModel(anthropic.Chat("claude-sonnet-4-6")). The CLI binary does not auto-route to direct Anthropic; it routes Claude models to Azure/Bedrock. |
| OpenAI direct | gpt-4o, o3 | goai/provider/openai (library only) |
| OpenRouter | use zenflow.WithModel(openrouter.Chat(...)) | goai/provider/openrouter (library only) |
| Cerebras | use zenflow.WithModel(cerebras.Chat(...)) | goai/provider/cerebras (library only) |
| Cloudflare Workers AI | use zenflow.WithModel(cloudflare.Chat(...)) | goai/provider/cloudflare (library only) |
| Cohere | use zenflow.WithModel(cohere.Chat(...)) | goai/provider/cohere (library only) |
| DeepInfra | use zenflow.WithModel(deepinfra.Chat(...)) | goai/provider/deepinfra (library only) |
| DeepSeek | use zenflow.WithModel(deepseek.Chat(...)) | goai/provider/deepseek (library only) |
| Fireworks | use zenflow.WithModel(fireworks.Chat(...)) | goai/provider/fireworks (library only) |
| FPT Cloud | use zenflow.WithModel(fptcloud.Chat(...)) | goai/provider/fptcloud (library only) |
| Groq | use zenflow.WithModel(groq.Chat(...)) | goai/provider/groq (library only) |
| MiniMax | use zenflow.WithModel(minimax.Chat(...)) | goai/provider/minimax (library only) |
| Mistral | use zenflow.WithModel(mistral.Chat(...)) | goai/provider/mistral (library only) |
| NVIDIA NIM | use zenflow.WithModel(nvidia.Chat(...)) | goai/provider/nvidia (library only) |
| Ollama | use zenflow.WithModel(ollama.Chat(...)) | goai/provider/ollama (library only) |
| Perplexity | use zenflow.WithModel(perplexity.Chat(...)) | goai/provider/perplexity (library only) |
| RunPod | use zenflow.WithModel(runpod.Chat(...)) | goai/provider/runpod (library only) |
| Together AI | use zenflow.WithModel(together.Chat(...)) | goai/provider/together (library only) |
| vLLM | use zenflow.WithModel(vllm.Chat(...)) | goai/provider/vllm (library only) |
| xAI | use zenflow.WithModel(xai.Chat(...)) | goai/provider/xai (library only) |
| OpenAI-compatible (generic) | use zenflow.WithModel(compat.Chat(...)) | goai/provider/compat (library only) |

When you build the orchestrator, you pass one provider.LanguageModel via WithModel. The model your YAML asks for (per-agent model: or per-step model:) is resolved through the goai stack to a provider call. goai handles auth, request shape, streaming, structured tool calls, and retries.

The CLI binary picks a provider from --model and the environment automatically. The library expects you to wire a provider.LanguageModel of your choice. See goai docs for provider setup.
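
In workflow YAML this amounts to choosing an identifier per agent. A sketch mixing providers, using identifier forms from the matrix above (the agent names and descriptions are illustrative):

```yaml
agents:
  researcher:
    description: "Gathers background material."
    model: "google/gemini-2.5-pro"
  coder:
    description: "Implements the plan."
    model: "bedrock/anthropic.claude-sonnet-4-6"
```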

maxTurns and the conversation loop

maxTurns bounds how many round trips the agent can make against the LLM in one step. One "turn" = one LLM response (which may include tool calls). The loop terminates when:

  1. The LLM returns text with no tool calls (natural exit).
  2. The agent's resultSchema triggers a successful submit_result call (terminates immediately on the validating call).
  3. maxTurns is reached.
  4. The orchestrator's context is cancelled (timeout, abort).

If maxTurns is reached without a successful submit_result call (and resultSchema was set), the step fails with "resultSchema defined but submit_result never called". Without resultSchema, hitting maxTurns is allowed and the step's content is whatever was accumulated.

Temperature and topP

Both flow through goai to the underlying provider. They are advisory: providers that do not support a given knob silently ignore it. For deterministic-ish agents (judges, validators, structured extractors), set temperature: 0. For creative-ish agents (planners, debaters), keep the default.
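
For instance, a near-deterministic judge next to a default-temperature debater (agent names are illustrative):

```yaml
agents:
  judge:
    description: "Scores debate arguments against a rubric."
    temperature: 0
  debater:
    description: "Argues one side of the question."
    # temperature omitted: the provider default applies
```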

Released under the Apache 2.0 License.