
Runtime

The Perstack runtime combines probabilistic LLM reasoning with deterministic state management β€” making agent execution predictable, reproducible, and auditable.

Execution model

The runtime organizes execution into a three-level hierarchy:

```
Job (jobId)
├── Run 1 (runId, Coordinator Expert)
│   └── Checkpoints...
├── Run 2 (runId, Delegated Expert A)
│   └── Checkpoints...
└── Run 3 (runId, Delegated Expert B)
    └── Checkpoints...
```
| Concept | Description |
| --- | --- |
| Job | Top-level execution unit. Created per `perstack run` invocation. Contains all Runs. |
| Run | Single Expert execution. Each delegation creates a new Run within the same Job. |
| Checkpoint | Snapshot at the end of each step within a Run. |
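The hierarchy above can be sketched as plain data types. This is a minimal illustration only; the field names are assumptions, not the actual Perstack types:

```typescript
// Hypothetical shapes mirroring the Job → Run → Checkpoint hierarchy.
interface Checkpoint {
  step: number
  timestamp: string
}

interface Run {
  runId: string
  expert: "coordinator" | "delegated"
  checkpoints: Checkpoint[]
}

interface Job {
  jobId: string
  runs: Run[]
}

// One Job holding a Coordinator Run plus a delegated Run:
const job: Job = {
  jobId: "job-1",
  runs: [
    { runId: "run-1", expert: "coordinator", checkpoints: [{ step: 1, timestamp: "2024-01-01T00:00:00Z" }] },
    { runId: "run-2", expert: "delegated", checkpoints: [{ step: 3, timestamp: "2024-01-01T00:00:05Z" }] },
  ],
}
```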

Coordinator vs. Delegated Expert

| Role | Description |
| --- | --- |
| Coordinator Expert | The initial Expert that starts a Job. Has full capabilities. |
| Delegated Expert | An Expert started via delegation. Restricted capabilities. |

Key differences:

| Capability | Coordinator | Delegated |
| --- | --- | --- |
| Interactive tool calls | ✅ Available | ❌ Not available |
| --continue / --resume-from | ✅ Supported | ❌ Not supported |
| Context from parent | N/A | Only the query (no shared history) |

Delegated Experts cannot use interactive tools. See Why no interactive tools for delegates?

Agent loop

Each Run executes through an agent loop:

```
1. Reason  → LLM decides next action
2. Act     → Runtime executes tool
3. Record  → Checkpoint saved
4. Repeat  → Until completion or limit
```

The loop ends when:

  • LLM calls attemptCompletion with all todos complete (or no todos)
  • Job reaches maxSteps limit
  • External signal (SIGTERM/SIGINT)

When attemptCompletion is called, the runtime checks the todo list. If incomplete todos remain, they are returned to the LLM to continue work. This prevents premature completion and ensures all planned tasks are addressed.
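The completion gate described above can be sketched as follows. This is assumed logic for illustration, not the actual runtime source:

```typescript
// Hypothetical todo shape: the runtime only accepts completion when
// no todos remain open; otherwise the remaining ones go back to the LLM.
interface Todo {
  description: string
  done: boolean
}

function attemptCompletion(todos: Todo[]): { complete: boolean; remaining: Todo[] } {
  const remaining = todos.filter((t) => !t.done)
  // Incomplete todos are returned so the agent loop continues.
  return { complete: remaining.length === 0, remaining }
}
```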

Step counting

Step numbers are continuous across all Runs within a Job. When delegation occurs, the delegated Run continues from the parent’s step number:

```
Job (totalSteps = 8)
├── Run 1 (Coordinator): step 1 → 2 → delegates at step 3
│                                      ↓
├── Run 2 (Delegate A):  step 3 → 4 → completes
│                                      ↓
└── Run 1 continues:     step 5 → 6 → 7 → 8
```

The maxSteps limit applies to the Job’s total steps across all Runs.
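One way to picture this is a single step counter shared by every Run in the Job, so a delegated Run continues from the parent's number and maxSteps bounds the whole Job. A minimal sketch of that assumed mechanism:

```typescript
// Hypothetical Job-wide step counter: all Runs draw from the same sequence.
class StepCounter {
  private step = 0
  private maxSteps: number

  constructor(maxSteps: number) {
    this.maxSteps = maxSteps
  }

  next(): number {
    if (this.step >= this.maxSteps) throw new Error("maxSteps reached")
    return ++this.step
  }
}

const counter = new StepCounter(50)
counter.next() // Coordinator: step 1
counter.next() // Coordinator: step 2
counter.next() // Delegate A continues the same sequence: step 3
```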

Stopping and resuming

```sh
npx perstack run my-expert "query" --max-steps 50
```
| Stop condition | Behavior | Resume from |
| --- | --- | --- |
| attemptCompletion (no remaining todos) | Task complete | N/A |
| attemptCompletion (remaining todos) | Continue loop | N/A (loop continues) |
| maxSteps reached | Graceful stop | Coordinator’s last checkpoint |
| SIGTERM/SIGINT | Immediate stop | Coordinator’s previous checkpoint |

--continue and --resume-from only work with the Coordinator Expert’s checkpoints. You cannot resume from a Delegated Expert’s checkpoint.
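The resume rule above amounts to a filter over checkpoints. A sketch with assumed data shapes, purely to make the rule concrete:

```typescript
// Hypothetical checkpoint reference: only the Coordinator's checkpoints
// are valid resume targets, and the latest one wins.
interface CheckpointRef {
  runId: string
  expert: "coordinator" | "delegated"
  step: number
}

function resumableCheckpoint(checkpoints: CheckpointRef[]): CheckpointRef | undefined {
  return checkpoints
    .filter((c) => c.expert === "coordinator")
    .sort((a, b) => b.step - a.step)[0]
}
```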

Deterministic state

LLMs are probabilistic β€” the same input can produce different outputs. Perstack draws a clear boundary:

| Probabilistic (LLM) | Deterministic (Runtime) |
| --- | --- |
| Which tool to call | Tool execution |
| Todo management decisions | State recording |
| Reasoning | Checkpoint creation |

The β€œthinking” is probabilistic; the β€œdoing” and β€œrecording” are deterministic. This boundary enables:

  • Reproducibility: Replay from any checkpoint with identical state
  • Testability: Mock the LLM, test the runtime deterministically

Event, Step, Checkpoint

Runtime state is built on three concepts:

| Concept | What it represents |
| --- | --- |
| Event | A single state transition (tool call, result, etc.) |
| Step | One cycle of the agent loop |
| Checkpoint | Complete snapshot at step end β€” everything needed to resume |

This combines Event Sourcing (complete history) with Checkpoint/Restore (efficient resume).
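The combination can be sketched as a replay function: state is rebuilt either by replaying every Event from the start, or by loading a Checkpoint and replaying only the Events after it. The shapes below are illustrative assumptions, not the runtime's actual types:

```typescript
// Hypothetical event and snapshot shapes.
interface RunEvent {
  step: number
  type: string
}

interface Snapshot {
  step: number
  eventCount: number
}

// Replay events on top of an optional checkpoint snapshot.
// Events at or before the snapshot's step are already folded in, so skip them.
function replay(events: RunEvent[], from?: Snapshot): Snapshot {
  let state: Snapshot = from ?? { step: 0, eventCount: 0 }
  for (const e of events.filter((e) => e.step > state.step)) {
    state = { step: e.step, eventCount: state.eventCount + 1 }
  }
  return state
}
```

Replaying from a checkpoint and replaying from scratch must land on the same state, which is what makes checkpointed resume safe.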

The perstack/ directory

The runtime stores execution history in perstack/jobs/ within the workspace:

```
/workspace
└── perstack/
    └── jobs/
        └── {jobId}/
            ├── job.json                # Job metadata
            └── runs/
                └── {runId}/
                    ├── run-setting.json    # Run configuration
                    ├── checkpoint-{timestamp}-{step}-{id}.json
                    └── event-{timestamp}-{step}-{type}.json
```
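The naming scheme makes checkpoint files addressable from their identifiers alone. A small sketch of that scheme, with hypothetical values (the runtime writes these files itself; this is only illustrative):

```typescript
// Build the relative path of a checkpoint file from its identifiers,
// following the perstack/jobs/ layout shown above.
function checkpointPath(
  jobId: string,
  runId: string,
  timestamp: string,
  step: number,
  id: string,
): string {
  return `perstack/jobs/${jobId}/runs/${runId}/checkpoint-${timestamp}-${step}-${id}.json`
}
```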

This directory is managed automatically β€” don’t modify it manually.

Event notification

The runtime emits events for every state change. Two options:

stdout (default)

Events are written to stdout as JSON. This is the safest option for sandboxed environments β€” no network access required.

```sh
npx perstack run my-expert "query"
```

Your infrastructure reads stdout and decides what to do with events. See Sandbox Integration for the rationale.
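A consumer can parse the stream line by line. This sketch assumes one JSON object per line; the exact event schema is not specified here:

```typescript
// Parse newline-delimited JSON events from the CLI's captured stdout.
function parseEventLines(stdout: string): Array<Record<string, unknown>> {
  return stdout
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as Record<string, unknown>)
}
```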

Custom event listener

When embedding the runtime programmatically, use a callback:

```typescript
import { run } from "@perstack/runtime"

await run(params, {
  eventListener: (event) => {
    // Send to your monitoring system, database, etc.
  },
})
```

Skills (MCP)

Experts use tools through MCP (Model Context Protocol). The runtime handles:

  • Lifecycle: Start MCP servers with Expert, clean up on exit
  • Environment isolation: Only requiredEnv variables are passed
  • Error recovery: MCP failures are fed back to LLM, not thrown as runtime errors

For skill configuration, see Skills.

Providers and models

Perstack uses standard LLM features available from most providers:

  • Chat completion (including PDF/image in messages)
  • Tool calling

For supported providers and models, see Providers and Models.