# Runtime
The Perstack runtime combines probabilistic LLM reasoning with deterministic state management, making agent execution predictable, reproducible, and auditable.
## Execution model
The runtime organizes execution into a three-level hierarchy:
```
Job (jobId)
├── Run 1 (runId, Coordinator Expert)
│   └── Checkpoints...
├── Run 2 (runId, Delegated Expert A)
│   └── Checkpoints...
└── Run 3 (runId, Delegated Expert B)
    └── Checkpoints...
```

| Concept | Description |
|---|---|
| Job | Top-level execution unit. Created per perstack run invocation. Contains all Runs. |
| Run | Single Expert execution. Each delegation creates a new Run within the same Job. |
| Checkpoint | Snapshot at the end of each step within a Run. |
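
As a rough sketch, the hierarchy could be modeled like this (field names are illustrative, not the runtime's actual schema):

```ts
// Illustrative shapes only; these are not the runtime's actual schema.
interface Checkpoint {
  step: number      // step numbers are continuous across all Runs in a Job
  timestamp: string
}

interface Run {
  runId: string
  role: "coordinator" | "delegated"
  checkpoints: Checkpoint[] // one snapshot per completed step
}

interface Job {
  jobId: string // created per `perstack run` invocation
  runs: Run[]   // Run 1 is the Coordinator; each delegation adds another Run
}
```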
## Coordinator vs. Delegated Expert
| Role | Description |
|---|---|
| Coordinator Expert | The initial Expert that starts a Job. Has full capabilities. |
| Delegated Expert | Expert started via delegation. Restricted capabilities. |
Key differences:
| Capability | Coordinator | Delegated |
|---|---|---|
| Interactive tool calls | ✅ Available | ❌ Not available |
| `--continue` / `--resume-from` | ✅ Supported | ❌ Not supported |
| Context from parent | N/A | Only the query (no shared history) |
Delegated Experts cannot use interactive tools. See Why no interactive tools for delegates?
## Agent loop
Each Run executes through an agent loop:
```
┌─────────────────────────────────────────┐
│ 1. Reason  →  LLM decides next action   │
│ 2. Act     →  Runtime executes tool     │
│ 3. Record  →  Checkpoint saved          │
│ 4. Repeat  →  Until completion or limit │
└─────────────────────────────────────────┘
```

The loop ends when:
- LLM calls `attemptCompletion` with all todos complete (or no todos)
- Job reaches the `maxSteps` limit
- External signal (SIGTERM/SIGINT)
When `attemptCompletion` is called, the runtime checks the todo list. If incomplete todos remain, they are returned to the LLM to continue work. This prevents premature completion and ensures all planned tasks are addressed.
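
A minimal sketch of that completion gate, assuming a simple todo shape (not the runtime's actual code):

```ts
// Hypothetical todo shape and gate logic, shown only to illustrate the behavior described above.
interface Todo {
  id: string
  done: boolean
}

function onAttemptCompletion(todos: Todo[]): { complete: boolean; remaining: Todo[] } {
  const remaining = todos.filter((todo) => !todo.done)
  if (remaining.length > 0) {
    // Incomplete todos go back to the LLM and the agent loop continues.
    return { complete: false, remaining }
  }
  // No todos, or all of them done: the Run can finish.
  return { complete: true, remaining: [] }
}
```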
## Step counting
Step numbers are continuous across all Runs within a Job. When delegation occurs, the delegated Run continues from the parent's step number:
```
Job (totalSteps = 8)
├── Run 1 (Coordinator): step 1 → 2 → delegates at step 3
│                                            ↓
├── Run 2 (Delegate A): step 3 → 4 → completes
│                                            ↓
└── Run 1 continues: step 5 → 6 → 7 → 8
```

The `maxSteps` limit applies to the Job's total steps across all Runs.
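
One way to picture the shared counter, as a sketch (the runtime's internals may differ):

```ts
// Hypothetical sketch: one counter owned by the Job, advanced by every Run.
interface StepCounter {
  current: number
  max: number
}

function nextStep(counter: StepCounter): number {
  if (counter.current >= counter.max) throw new Error("maxSteps reached")
  return ++counter.current
}

const counter: StepCounter = { current: 0, max: 8 }
nextStep(counter) // Coordinator: step 1
nextStep(counter) // Coordinator: step 2
nextStep(counter) // Delegate A: step 3, continuing the same count
```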
## Stopping and resuming
```bash
npx perstack run my-expert "query" --max-steps 50
```

| Stop condition | Behavior | Resume from |
|---|---|---|
| `attemptCompletion` (no remaining todos) | Task complete | N/A |
| `attemptCompletion` (remaining todos) | Continue loop | N/A (loop continues) |
| `maxSteps` reached | Graceful stop | Coordinator's last checkpoint |
| SIGTERM/SIGINT | Immediate stop | Coordinator's previous checkpoint |
`--continue` and `--resume-from` only work with the Coordinator Expert's checkpoints. You cannot resume from a Delegated Expert's checkpoint.
## Deterministic state
LLMs are probabilistic; the same input can produce different outputs. Perstack draws a clear boundary:
| Probabilistic (LLM) | Deterministic (Runtime) |
|---|---|
| Which tool to call | Tool execution |
| Todo management decisions | State recording |
| Reasoning | Checkpoint creation |
The "thinking" is probabilistic; the "doing" and "recording" are deterministic. This boundary enables:
- Reproducibility: Replay from any checkpoint with identical state
- Testability: Mock the LLM, test the runtime deterministically
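
As a generic illustration of that boundary (not Perstack's actual API): if the LLM is an injected function, everything around it can be driven by a scripted stub in tests.

```ts
// Generic illustration, not Perstack's API: the probabilistic part is injected,
// so the surrounding loop can be exercised deterministically.
type Action = { tool: string; args: Record<string, unknown> } | { done: true }
type Llm = (history: string[]) => Action

async function agentLoop(llm: Llm, execute: (action: Action) => Promise<string>): Promise<string[]> {
  const history: string[] = []
  for (;;) {
    const action = llm(history)          // probabilistic in production
    if ("done" in action) return history // deterministic completion check
    history.push(await execute(action))  // deterministic tool execution and recording
  }
}

// In a test, script the "LLM" so the run is fully reproducible:
const scriptedLlm: Llm = (history) =>
  history.length === 0 ? { tool: "readFile", args: { path: "notes.txt" } } : { done: true }
```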
## Event, Step, Checkpoint
Runtime state is built on three concepts:
| Concept | What it represents |
|---|---|
| Event | A single state transition (tool call, result, etc.) |
| Step | One cycle of the agent loop |
| Checkpoint | Complete snapshot at step end: everything needed to resume |
This combines Event Sourcing (complete history) with Checkpoint/Restore (efficient resume).
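
A sketch of how the two complement each other (illustrative shapes, not the actual file schema):

```ts
// Illustrative only: Events give the complete audit trail, Checkpoints give cheap resume.
interface RuntimeEvent {
  step: number
  type: string // e.g. a tool call or its result
}

interface RuntimeCheckpoint {
  step: number
  state: unknown // complete snapshot: everything needed to resume
}

// Resuming does not require replaying every Event: pick the latest snapshot.
function pickResumePoint(checkpoints: RuntimeCheckpoint[]): RuntimeCheckpoint {
  return checkpoints.reduce((latest, cp) => (cp.step > latest.step ? cp : latest))
}
```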
## The perstack/ directory
The runtime stores execution history in perstack/jobs/ within the workspace:
```
/workspace
└── perstack/
    └── jobs/
        └── {jobId}/
            ├── job.json                  # Job metadata
            └── runs/
                └── {runId}/
                    ├── run-setting.json  # Run configuration
                    ├── checkpoint-{timestamp}-{step}-{id}.json
                    └── event-{timestamp}-{step}-{type}.json
```

This directory is managed automatically; don't modify it manually.
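
If you want to inspect a Job's history, a read-only script along these lines could walk the layout above (hypothetical example, run from the workspace root with Node.js):

```ts
// Hypothetical read-only inspection script; it follows the layout shown above
// and never writes to the directory.
import { readdirSync, readFileSync } from "node:fs"
import { join } from "node:path"

const jobId = process.argv[2] // pass a jobId as the first argument
const jobDir = join("perstack", "jobs", jobId)

const job = JSON.parse(readFileSync(join(jobDir, "job.json"), "utf8"))
console.log("job metadata:", job)

for (const runId of readdirSync(join(jobDir, "runs"))) {
  const files = readdirSync(join(jobDir, "runs", runId))
  const checkpointCount = files.filter((name) => name.startsWith("checkpoint-")).length
  console.log(`${runId}: ${checkpointCount} checkpoint(s)`)
}
```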
## Event notification
The runtime emits events for every state change. Two options:
### stdout (default)
Events are written to stdout as JSON. This is the safest option for sandboxed environments: no network access required.
```bash
npx perstack run my-expert "query"
```

Your infrastructure reads stdout and decides what to do with events. See Sandbox Integration for the rationale.
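
For example, a wrapper that spawns the CLI and parses its stdout might look like this. It assumes one JSON object per line; adjust the parsing if the actual framing differs.

```ts
// Sketch: consume runtime events from stdout. One-JSON-object-per-line framing
// is an assumption here, not a documented guarantee.
import { spawn } from "node:child_process"
import { createInterface } from "node:readline"

const child = spawn("npx", ["perstack", "run", "my-expert", "query"], {
  stdio: ["ignore", "pipe", "inherit"],
})

createInterface({ input: child.stdout! }).on("line", (line) => {
  try {
    const event = JSON.parse(line)
    // Forward to logging, metrics, a queue, etc.
    console.log(event)
  } catch {
    // Non-JSON output, if any, can be ignored or logged verbatim.
  }
})
```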
### Custom event listener
When embedding the runtime programmatically, use a callback:
```ts
import { run } from "@perstack/runtime"

await run(params, {
  eventListener: (event) => {
    // Send to your monitoring system, database, etc.
  }
})
```

## Skills (MCP)
Experts use tools through MCP (Model Context Protocol). The runtime handles:
- Lifecycle: Start MCP servers with the Expert, clean up on exit
- Environment isolation: Only `requiredEnv` variables are passed
- Error recovery: MCP failures are fed back to the LLM, not thrown as runtime errors
For skill configuration, see Skills.
## Providers and models
Perstack uses standard LLM features available from most providers:
- Chat completion (including PDF/image in messages)
- Tool calling
For supported providers and models, see Providers and Models.