# Experts
Experts are the core building block of Perstack: modular micro-agents designed for reuse.
The term "Expert" is familiar in AI (e.g., Mixture of Experts), but here it means something specific: a specialist component with a single, well-defined role.
## Why Experts?
Traditional agent development produces monolithic agents optimized for specific use cases. They work, but they don't transfer. You can't take a "research agent" from one project and reuse it in another without significant rework.
Experts solve this by inverting the design:
| Traditional Agent | Expert |
|---|---|
| Represents a user | Serves an application |
| Does many things | Does one thing well |
| Application-specific | Purpose-specific, context-independent |
| Hard to reuse | Designed for reuse |
An agent represents a user: it acts on their behalf across many tasks. An Expert is a specialist component: it helps an application achieve a specific goal.
This distinction matters. When you build an Expert, you're not building an application. You're building a reusable capability that any application can leverage.
## What is an Expert?
An Expert is defined by three things:
### 1. Purpose (`description`)
A clear statement of what the Expert does. Unlike `instruction` (which is private to the Expert), `description` is exposed to other Experts as a tool description when delegating.

When Expert A can delegate to Expert B, the runtime presents Expert B as a callable tool to Expert A, with `description` as the tool's description. This is how Expert A decides:
- Which delegate to call
- What query to write
A good `description` tells potential callers what this Expert can do, when to use it, and what to include in the query.
```toml
[experts."code-reviewer"]
description = """
Reviews TypeScript code for type safety, error handling, and security issues.
Provide the file path to review. Returns actionable feedback with code examples.
"""
```

### 2. Domain knowledge (`instruction`)
The knowledge that transforms a general-purpose LLM into a specialist. This includes:
- What the Expert is expected to achieve
- Domain-specific concepts, rules, and constraints
- Completion criteria and priority tradeoffs
- Guidelines for using assigned skills
```toml
instruction = """
You are a TypeScript code reviewer for production systems.
Review criteria:
- Type safety: No `any` types, all types explicitly defined
- Error handling: All errors must use codes from `error-codes.ts`
- Security: Flag even minor risks
Provide actionable feedback with code examples.
"""
```

### 3. Capabilities (skills, delegates)
What the Expert can do:
- Skills: Tools available through MCP (file access, web search, APIs)
- Delegates: Other Experts this Expert can call
```toml
delegates = ["security-analyst"]

[experts."code-reviewer".skills."static-analysis"]
type = "mcpStdioSkill"
command = "npx"
packageName = "@eslint/mcp"
```

## How Experts work
### Execution model
When you run an Expert:
- The runtime creates a Job and starts the first Run with your Expert (the Coordinator)
- The `instruction` becomes the system prompt (with runtime meta-instructions)
- Your query becomes the user message
- The LLM reasons and calls tools (skills) as needed
- Each step produces a checkpoint: a complete snapshot of the Run's state
The runtime manages the execution loop. The Expert definition declares what to achieve; the runtime handles how.
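To make that mapping concrete, here is a minimal sketch reusing the `code-reviewer` fields from above, annotated with where each piece lands at run time:

```toml
[experts."code-reviewer"]
# Shown to other Experts as a tool description when they delegate here.
description = "Reviews TypeScript code for type safety, error handling, and security issues."

# Becomes the system prompt; the runtime appends its own meta-instructions.
instruction = "You are a TypeScript code reviewer for production systems."
```

The query itself is not part of the definition: it arrives when the Expert is run and becomes the user message of the first Run.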
### Delegation
Experts collaborate through delegation, not shared context. Each delegation creates a new Run within the same Job.
```
Job
│
├── Run 1: Expert A (Coordinator)
│   │
│   ├─ sees delegates as tools
│   │  (description → tool description)
│   │
│   ├─ calls delegate ─────────────────────────┐
│   │  (writes query)                          │
│   │                                          │
│   │  [Run 1 paused]                          │
│   │                                          ▼
│   │                    ┌── Run 2: Expert B ──┐
│   │                    │  starts fresh       │
│   │                    │  (empty history)    │
│   │                    │  (own instruction)  │
│   │                    │      │              │
│   │                    │      ├─ executes    │
│   │                    │      │              │
│   │                    │      └─ completes   │
│   │                    │      │              │
│   │                    └──────┼──────────────┘
│   │                           │
│   ├─ resumes ◄────────────────┘
│   │  (receives run result only)
│   ▼
```

Context is never shared between Experts. The delegate receives only the query: no message history, no parent context. This is a security boundary, not a limitation. See Why context isolation matters.
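As a sketch of the wiring, here is how the `code-reviewer` from above might declare its delegate. The `security-analyst` definition is hypothetical, written only to show the shape; note that its `instruction` never leaves this block:

```toml
[experts."code-reviewer"]
description = "Reviews TypeScript code for type safety, error handling, and security issues."
delegates = ["security-analyst"]

# Hypothetical delegate. Its description is the only part the code-reviewer
# ever sees; the instruction stays private to this Expert.
[experts."security-analyst"]
description = """
Analyzes code for security vulnerabilities. Provide the file path and the
specific concern. Returns findings ranked by severity.
"""
instruction = """
You are a security analyst. Flag injection risks, unsafe deserialization,
and secrets committed to source code.
"""
```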
### Parallel delegation
When the LLM calls multiple delegate tools in a single response, the runtime executes them in parallel:
```
Job
│
├── Run 1: Expert A (Coordinator)
│   │
│   ├─ calls Expert B and Expert C ──────┬─────────────┐
│   │  (in single response)              │             │
│   │                                    │             │
│   │  [Run 1 paused]                    ▼             ▼
│   │                              ┌─ Run 2 ──┐  ┌─ Run 3 ──┐
│   │                              │ Expert B │  │ Expert C │
│   │                              │          │  │          │
│   │                              │ executes │  │ executes │
│   │                              │ in       │  │ in       │
│   │                              │ parallel │  │ parallel │
│   │                              │          │  │          │
│   │                              └────┬─────┘  └────┬─────┘
│   │                                   │             │
│   ├─ resumes ◄────────────────────────┴─────────────┘
│   │  (receives all results)
│   ▼
```

Benefits:
- Performance: Independent tasks run concurrently
- Natural: LLM decides when to parallelize based on task requirements
- No configuration: Automatic when multiple delegates called together
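In `perstack.toml`, opting in is just a matter of listing more than one delegate. A sketch, assuming a hypothetical `style-checker` Expert alongside the `security-analyst` from earlier:

```toml
[experts."code-reviewer"]
delegates = ["security-analyst", "style-checker"]  # style-checker is hypothetical
```

Whether the two are actually called together is up to the LLM; the runtime parallelizes whenever both calls appear in a single response.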
Note the asymmetry: Expert A sees Expert B's `description` (public interface), but never its `instruction` (private implementation). This is what makes Experts composable: the caller only needs to know what a delegate does, not how it does it.
Key design decisions:
| Aspect | Design | Rationale |
|---|---|---|
| Message history | Not shared | Each Expert has a single responsibility; mixing contexts breaks focus |
| Communication | Natural language | No schema versioning, maximum flexibility, humans and Experts use the same interface |
| State exchange | Workspace files | Persistent, inspectable, works across restarts |
| Interactive tools | Coordinator only | See below |
This is intentional. See Why context isolation matters for the security rationale.
### Why no interactive tools for delegates?
Delegated Experts run without interactive tool access. If a delegate needs clarification:
- It should return what it knows (via `attemptCompletion`)
- The Coordinator receives the result
- The Coordinator can ask the user for clarification
- The Coordinator can re-delegate with better information
This keeps the user interface at the Coordinator level and prevents deep call chains from blocking on user input.
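One practical consequence: a delegate's `instruction` should say what to do when information is missing, since the delegate cannot ask. A hedged sketch of such a fragment (the wording is illustrative, not a required convention):

```toml
instruction = """
...
If the query is missing details you need, do not guess and do not stall.
Complete with what you know and state explicitly which details were missing,
so the Coordinator can re-delegate with a better query.
"""
```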
### Delegation failure handling
When a Delegated Expert fails (unrecoverable error), the Job continues:
- The failed Run is marked as `stoppedByError`
- The error is returned to the Coordinator as the delegation result
- The Coordinator decides how to handle it (retry, try different Expert, give up)
```
Job (continues running)
│
├── Run 1: Coordinator
│   │
│   ├─ delegates to Expert B ──────────┐
│   │                                  │
│   │                          Run 2: Expert B
│   │                                  │
│   │                                  ✗ FAILS
│   │                                  │
│   ├─ receives error ◄────────────────┘
│   │  "Delegation failed: [error message]"
│   │
│   ├─ decides: retry? different Expert? give up?
│   ▼
```

This design provides resilience: a single delegate failure doesn't crash the entire Job. The Coordinator has full control over error handling.
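Because the Coordinator owns error handling, its retry policy is simply part of its `instruction`. A sketch of what that guidance might look like (the policy itself is hypothetical):

```toml
instruction = """
...
If a delegation fails, retry once with a more specific query. If it fails
again, report the error to the user instead of continuing with partial results.
"""
```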
## Workspace
The workspace is a shared filesystem where Experts read and write files. Unlike message history, the workspace persists across Expert boundaries: this is how Experts exchange complex data.
How you organize workspace files is up to you. The runtime reserves `perstack/` for execution history; see Runtime for details.
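Since message history never crosses Expert boundaries, workspace paths become the natural contract between a Coordinator and its delegates. A sketch with hypothetical paths (any convention outside `perstack/` works):

```toml
[experts."code-reviewer"]
instruction = """
...
Write the full review to `reviews/report.md` in the workspace and return only
a short summary of the findings.
"""
```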
## What's next
Ready to build Experts? These guides cover the details:
- Making Experts: defining Experts in `perstack.toml`
- Best Practices: design guidelines for effective Experts
- Skills: adding MCP tools to your Experts