
Multi-Runtime Support

Perstack supports running Experts through third-party coding agent runtimes. Instead of using the default runtime, you can leverage Cursor, Claude Code, or Gemini CLI as the execution engine.

This feature is experimental. Some capabilities may be limited depending on the runtime.

Why use non-default runtimes?

Your Expert definitions are your assets

In the agent-first era, Expert definitions are the single source of truth: not the runtime, not the app, not the vendor platform. Your carefully crafted instructions, delegation patterns, and skill configurations represent accumulated domain knowledge. They should be:

  • Portable: run on any compatible runtime
  • Comparable: test the same definition across different runtimes to measure cost vs. performance
  • Shareable: publish to the registry and let others run your Experts on their preferred runtime

No vendor lock-in

Agent definitions should not be trapped in vendor silos. With multi-runtime support:

| Traditional approach | Perstack approach |
| --- | --- |
| Agent locked to one platform | Expert runs on any runtime |
| Switching requires rewrite | Switching requires one flag |
| Vendor controls your agent | You control your Expert |

Practical benefits

| Benefit | Description |
| --- | --- |
| Cost/performance comparison | Run the same Expert on Cursor, Claude Code, and Gemini; compare results and costs |
| Runtime-specific strengths | Leverage Cursor's codebase indexing, Claude's reasoning, Gemini's speed |
| Registry interoperability | Instantly try any published Expert on your preferred runtime |
| Subscription leverage | Use existing subscriptions (Cursor Pro, Claude Max) instead of API credits |

Supported runtimes

| Runtime | Model Support | Domain | Skill Definition |
| --- | --- | --- | --- |
| perstack | Multi-vendor | General purpose | ✅ Via perstack.toml |
| cursor | Multi-vendor | Coding-focused | ⚠️ Via Cursor settings |
| claude-code | Claude only | Coding-focused | ⚠️ Via claude mcp |
| gemini | Gemini only | General purpose | ⚠️ Via Gemini config |

Skill definition in perstack.toml only works with the default Perstack runtime. Other runtimes have their own tool/MCP configurations; you must set them up separately in each runtime.

Basic usage

npx perstack run my-expert "query" --runtime cursor
npx perstack run my-expert "query" --runtime claude-code
npx perstack run my-expert "query" --runtime gemini

Runtime selection

How the runtime is determined depends on whether you're running an Expert directly or delegating to one.

Direct execution (Coordinator Expert)

When running an Expert via CLI, use --runtime to explicitly specify the runtime:

npx perstack run my-expert "query" --runtime cursor

If --runtime is not specified, the Perstack runtime is used by default.

Delegation (Delegate Expert)

When a Coordinator Expert delegates to another Expert, the Coordinator decides which runtime(s) to use by passing the runtime parameter to the delegation tool:

# Coordinator calls delegation tool with:
delegate("code-reviewer", query: "Review this code", runtime: ["cursor", "claude-code"])

The Coordinator should choose from the runtimes declared in the delegate's runtime field (compatibility declaration). If runtime is not specified, the Perstack runtime is used by default.

| Scenario | Runtime selection |
| --- | --- |
| Direct execution without --runtime | perstack (default) |
| Direct execution with --runtime cursor | cursor |
| Delegation without runtime param | perstack (default) |
| Delegation with runtime: "cursor" | cursor |
| Delegation with runtime: ["cursor", "claude-code"] | Both in parallel |

The runtime field in Expert definitions is a compatibility declaration: it declares which runtimes the Expert can run on. The actual runtime selection happens at execution time via --runtime (CLI) or the runtime parameter (delegation).
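
The two halves in a minimal sketch, reusing the code-reviewer Expert from the example later on this page: the TOML field only declares compatibility, while the CLI flag makes the actual choice at execution time.

# perstack.toml: compatibility declaration only
[experts."code-reviewer"]
runtime = ["cursor", "claude-code"]

At execution time, the runtime is still chosen explicitly:

npx perstack run code-reviewer "query" --runtime claude-code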

Example: Meta code review

This example demonstrates running the same Expert across multiple runtimes by specifying the runtime parameter in the delegation call.

Use case: Get code review feedback from both Cursor and Claude Code using identical instructions. Each runtime brings unique capabilities (Cursor's codebase indexing vs Claude Code's deep reasoning), producing different insights from the same prompt.

Expert definition

# perstack.toml

[experts."code-reviewer"]
runtime = ["cursor", "claude-code"]  # Compatible with both runtimes
description = "Reviews code for quality, security, and best practices"
instruction = """
You are a senior code reviewer. Analyze the codebase and provide feedback on:
- Code quality and maintainability
- Security vulnerabilities
- Performance issues
- Best practices violations

Write your review to `{reviewer}-review.md` where {reviewer} is your name (e.g., cursor, claude).
"""

[experts."meta-reviewer"]
# runtime defaults to "perstack"
description = "Aggregates and synthesizes multiple code reviews"
instruction = """
You are a meta-reviewer that orchestrates parallel code reviews.

When asked to review code:
1. Delegate to code-reviewer with runtime: ["cursor", "claude-code"] to run on both runtimes in parallel
2. Read all *-review.md files in the workspace
3. Identify common issues raised by multiple reviewers
4. Highlight unique insights from each review
5. Prioritize findings by severity
6. Create a unified action plan in `meta-review.md`
"""
delegates = ["code-reviewer"]

Running the workflow

A single command triggers the entire multi-runtime workflow:

npx perstack run meta-reviewer "Review the src/ directory"

How parallel delegation works

When the meta-reviewer calls the delegation tool with runtime: ["cursor", "claude-code"]:

meta-reviewer (perstack)
 │
 └─► delegate code-reviewer (runtime: ["cursor", "claude-code"])
      │
      ├─► cursor-agent --print ──► cursor-review.md
      │       (same instruction)
      │
      └─► claude -p ──► claude-review.md
              (same instruction)
      │
      ▼
   Both reviews collected
      │
      ▼
   meta-reviewer reads both files and creates unified meta-review.md

The delegation tool accepts an optional runtime parameter:

  • Single runtime: runtime: "cursor" runs on Cursor only
  • Multiple runtimes: runtime: ["cursor", "claude-code"] runs in parallel on both
  • Not specified: defaults to "perstack" (built-in runtime)
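
Written in the same shorthand as the Coordinator example above, the three forms look like this (the query text is illustrative):

delegate("code-reviewer", query: "Review this code", runtime: "cursor")
delegate("code-reviewer", query: "Review this code", runtime: ["cursor", "claude-code"])
delegate("code-reviewer", query: "Review this code")  # defaults to "perstack"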

Why this works

The key insight is that identical instructions produce different results depending on the runtime's capabilities:

| Runtime | Same instruction, different strength |
| --- | --- |
| Cursor | Leverages codebase indexing, finds cross-file issues |
| Claude Code | Deep reasoning, catches subtle security issues |
| Perstack | Orchestrates the workflow, aggregates results |

This pattern works because all runtimes write to the same workspace. Each runtime knows its own identity, so the instruction simply asks it to name the output file accordingly; no variable injection needed.
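
Under this example's assumptions, a successful run leaves the shared workspace looking roughly like this:

cursor-review.md   # written by the Cursor run
claude-review.md   # written by the Claude Code run
meta-review.md     # written by the meta-reviewer after reading both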

The runtime field is inspired by the runtime field in WinterCG's Runtime Keys proposal for package.json. Just as npm packages can declare compatible runtimes, Experts can declare which agent runtimes they target.

How it works

When you specify a non-default runtime, Perstack:

  1. Converts the Expert definition into the runtime's native format
  2. Executes the runtime CLI in headless mode
  3. Captures the output and converts events to Perstack format
  4. Stores checkpoints in the standard perstack/jobs/ directory
perstack run --runtime <runtime>
                       │
                       ▼
┌──────────────────────────────────────────────┐
│ Runtime Adapter                              │
│ (converts Expert to CLI arguments)           │
└──────────────────────┬───────────────────────┘
                       │
                       ▼
┌──────────────────────────────────────────────┐
│ Runtime CLI                                  │
│ (headless mode)                              │
│                                              │
│ cursor-agent --print                         │
│ claude -p "..." --append-system-prompt "..." │
│ gemini -p "..."                              │
└──────────────────────┬───────────────────────┘
                       │
                       ▼
┌──────────────────────────────────────────────┐
│ Event Normalization                          │
│ → Perstack format                            │
└──────────────────────┬───────────────────────┘
                       │
                       ▼
┌──────────────────────────────────────────────┐
│ perstack/jobs/                               │
│ (Job/Run/Checkpoint)                         │
└──────────────────────────────────────────────┘
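
Concretely, the adapter boils down to invoking each runtime's headless CLI with the Expert's instruction folded into the prompt or system prompt. A rough sketch using the flags from the diagram above (the placeholder arguments are illustrative):

cursor-agent --print "<instruction + query>"
claude -p "<query>" --append-system-prompt "<instruction>"
gemini -p "<instruction + query>"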

Runtime-specific setup

Cursor

Prerequisites:

  • Cursor CLI installed (curl https://cursor.com/install -fsS | bash)
  • CURSOR_API_KEY environment variable set
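
A minimal setup sketch based on the prerequisites above (the API key value is a placeholder):

# Install the Cursor CLI and provide an API key
curl https://cursor.com/install -fsS | bash
export CURSOR_API_KEY="your-api-key"

# Run an Expert on the Cursor runtime
npx perstack run my-expert "query" --runtime cursor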

How Expert definitions are mapped:

  • instruction → Passed via cursor-agent --print "..." prompt argument
  • skills → ⚠️ Not supported (headless mode has no MCP)
  • delegates → Included in prompt as context

Cursor headless CLI (cursor-agent --print) does not support MCP tools. Only built-in capabilities (file read/write, shell commands via --force) are available. Custom skills defined in perstack.toml will not work.

Claude Code

Prerequisites:

  • Claude Code CLI installed (npm install -g @anthropic-ai/claude-code)
  • Authenticated via claude command
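
A minimal setup sketch based on the prerequisites above:

# Install the Claude Code CLI and authenticate once interactively
npm install -g @anthropic-ai/claude-code
claude   # complete the login flow on first run

# Run an Expert on the Claude Code runtime
npx perstack run my-expert "query" --runtime claude-code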

How Expert definitions are mapped:

  • instruction → Passed via --append-system-prompt flag
  • skills → ⚠️ Not injectable (runtime uses its own MCP config)
  • delegates → Included in system prompt as context

Claude Code has its own MCP configuration (claude mcp), but Perstack cannot inject skills into it. The runtime uses whatever MCP servers the user has configured separately. Skills defined in perstack.toml will not be available.
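
If your Expert needs MCP tools on this runtime, configure them in Claude Code itself rather than in perstack.toml. A hedged sketch, where the server name and package are illustrative and the exact claude mcp syntax may vary by version:

# Register an MCP server directly with Claude Code (example server; not managed by Perstack)
claude mcp add filesystem -- npx -y @modelcontextprotocol/server-filesystem .
claude mcp list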

Gemini CLI

Prerequisites:

  • Gemini CLI installed
  • GEMINI_API_KEY environment variable set
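
A minimal setup sketch based on the prerequisites above (the npm package name is an assumption; the key value is a placeholder):

# Install the Gemini CLI and set the API key
npm install -g @google/gemini-cli
export GEMINI_API_KEY="your-api-key"

# Run an Expert on the Gemini runtime
npx perstack run my-expert "query" --runtime gemini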

How Expert definitions are mapped:

  • instruction → Passed via gemini -p "..." prompt argument
  • skills → ⚠️ Not supported (MCP unavailable)
  • delegates → Included in prompt as context

Gemini CLI does not support MCP. Skills defined in perstack.toml will not be available. Use Gemini's built-in file/shell capabilities instead.

Limitations

Delegation

Non-default runtimes do not natively support Expert-to-Expert delegation. When using --runtime, delegation behavior depends on the adapter:

| Runtime | Delegation handling |
| --- | --- |
| perstack | ✅ Native support |
| cursor | Instruction-based (LLM decides) |
| claude-code | Instruction-based (LLM decides) |
| gemini | Instruction-based (LLM decides) |

With instruction-based delegation, the delegate Expert's description is included in the system prompt, and the LLM is instructed to "think as" the delegate when appropriate. This is less reliable than native delegation.
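
As a rough illustration only, and not the literal text Perstack generates, the injected context for instruction-based delegation might read roughly like this:

[illustrative sketch; the exact wording Perstack uses is an assumption]
You are running as "meta-reviewer".
Delegate Experts are available only as descriptions:
- code-reviewer: Reviews code for quality, security, and best practices
When a task matches a delegate's description, "think as" that delegate and handle the task yourself.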

Interactive skills

Interactive tools (interactiveSkill) are handled differently:

| Runtime | Interactive tools |
| --- | --- |
| perstack | ✅ Native support with --continue -i |
| cursor | Mapped to Cursor's confirmation prompts |
| claude-code | Mapped to Claude's permission system |
| gemini | Not supported in headless mode |

Checkpoint compatibility

Checkpoints created with non-default runtimes use a normalized format. You can:

  • ✅ View checkpoints with perstack start --continue-job
  • ✅ Query job history
  • ⚠️ Resume may have limitations (runtime-specific state not preserved)
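
For example, after a run on a non-default runtime, the normalized job can be inspected with the same tooling used for default-runtime jobs (whether --continue-job takes additional arguments to select a specific job is not covered here):

npx perstack run my-expert "query" --runtime cursor
npx perstack start --continue-job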

Best practices

  1. Start with the default runtime during development for full skill control
  2. Design skill-free Experts when targeting non-default runtimes (skill definitions in perstack.toml are ignored); see the sketch after this list
  3. Configure tools in each runtime: set up MCP servers via claude mcp, Cursor settings, etc.
  4. Keep delegation simple: non-default runtimes emulate delegation via instruction
  5. Leverage built-in capabilities: non-default runtimes have their own file/shell tools
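
A sketch of such a skill-free Expert, using only the perstack.toml fields shown earlier on this page (the Expert name and instruction text are illustrative):

# perstack.toml
[experts."changelog-writer"]
runtime = ["cursor", "claude-code", "gemini"]
description = "Summarizes recent changes into a changelog entry"
instruction = """
Read the files in the workspace, summarize the recent changes,
and write the result to `CHANGELOG-draft.md`.
Use only built-in file read/write; no custom skills are assumed.
"""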

What's next