
CLI Reference

Running Experts

perstack start

Interactive workbench for developing and testing Experts.

perstack start [expertKey] [query] [options]

Arguments:

  • [expertKey]: Expert key (optional — prompts if not provided)
  • [query]: Input query (optional — prompts if not provided)

Opens a text-based UI for iterating on Expert definitions. See Running Experts.

perstack run

Headless execution for production and automation.

perstack run <expertKey> <query> [options]

Arguments:

  • <expertKey>: Expert key (required)
    • Examples: my-expert, @org/my-expert, @org/my-expert@1.0.0
  • <query>: Input query (required)

Outputs JSON events to stdout.
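Since `run` writes events to stdout, automation can consume them as a stream. Below is a minimal sketch of a line-delimited JSON consumer; it assumes one JSON object per line, and the event schema (field names such as `type`) is an assumption, not documented here.

```python
import json
from typing import IO, Iterator

def iter_events(stream: IO[str]) -> Iterator[dict]:
    """Yield one parsed event per non-empty line of JSON output.

    Assumes line-delimited JSON; the actual perstack event schema
    may differ, so treat field names downstream as assumptions.
    """
    for line in stream:
        line = line.strip()
        if line:
            yield json.loads(line)

# In practice you would wrap the stdout of
# subprocess.Popen(["npx", "perstack", "run", expert_key, query], ...)
# rather than a pre-captured string.
```
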

Shared Options

Both start and run accept the same options:

Model and Provider

Option                 Description            Default
--provider <provider>  LLM provider           anthropic
--model <model>        Model name             claude-sonnet-4-5
--temperature <temp>   Temperature (0.0-1.0)  0.3

Providers: anthropic, google, openai, ollama, azure-openai, amazon-bedrock, google-vertex

Execution Control

Option             Description                                   Default
--max-steps <n>    Maximum total steps across all Runs in a Job  unlimited
--max-retries <n>  Max retry attempts per generation             5
--timeout <ms>     Timeout per generation (ms)                   60000

Runtime

Option               Description        Default
--runtime <runtime>  Execution runtime  perstack

Available runtimes:

  • perstack — Built-in runtime (default)
  • cursor — Cursor CLI (experimental)
  • claude-code — Claude Code CLI (experimental)
  • gemini — Gemini CLI (experimental)

See Multi-Runtime Support for setup and limitations.

Configuration

Option                Description             Default
--config <path>       Path to perstack.toml   Auto-discover from cwd
--env-path <path...>  Environment file paths  .env, .env.local

Job and Run Management

Option               Description
--job-id <id>        Custom Job ID for new Job (default: auto-generated)
--continue           Continue latest Job with new Run
--continue-job <id>  Continue specific Job with new Run
--resume-from <id>   Resume from specific checkpoint (requires --continue-job)

Combining options:

# Continue latest Job from its latest checkpoint
--continue

# Continue specific Job from its latest checkpoint
--continue-job <jobId>

# Continue specific Job from a specific checkpoint
--continue-job <jobId> --resume-from <checkpointId>

Note: --resume-from requires --continue-job (Job ID must be specified). You can only resume from the Coordinator Expert’s checkpoints.

Interactive

Option                              Description
-i, --interactive-tool-call-result  Treat query as interactive tool call result

Use with --continue to respond to interactive tool calls from the Coordinator Expert.

Other

Option     Description
--verbose  Enable verbose logging

Examples

# Basic execution (creates new Job)
npx perstack run my-expert "Review this code"

# With model options
npx perstack run my-expert "query" \
  --provider google \
  --model gemini-2.5-pro \
  --temperature 0.7 \
  --max-steps 100

# Continue Job with follow-up
npx perstack run my-expert "initial query"
npx perstack run my-expert "follow-up" --continue

# Continue specific Job from latest checkpoint
npx perstack run my-expert "continue" --continue-job job_abc123

# Continue specific Job from specific checkpoint
npx perstack run my-expert "retry with different approach" \
  --continue-job job_abc123 \
  --resume-from checkpoint_xyz

# Custom Job ID for new Job
npx perstack run my-expert "query" --job-id my-custom-job

# Respond to interactive tool call
npx perstack run my-expert "user response" --continue -i

# Custom config
npx perstack run my-expert "query" \
  --config ./configs/production.toml \
  --env-path .env.production

# Registry Experts
npx perstack run tic-tac-toe "Let's play!"
npx perstack run @org/expert@1.0.0 "query"

# Non-default runtimes
npx perstack run my-expert "query" --runtime cursor
npx perstack run my-expert "query" --runtime claude-code
npx perstack run my-expert "query" --runtime gemini

Registry Management

perstack publish

Publish an Expert to the registry.

perstack publish [expertName] [options]

Arguments:

  • [expertName]: Expert name from perstack.toml (prompts if not provided)

Options:

Option           Description
--config <path>  Path to perstack.toml
--dry-run        Validate without publishing

Example:

perstack publish my-expert
perstack publish my-expert --dry-run

Requires PERSTACK_API_KEY environment variable.

Note: Published Experts must use npx or uvx as skill commands. Arbitrary commands are not allowed for security reasons. See Publishing.
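That launcher restriction is simple to pre-check in tooling. The validator below is an illustrative sketch (not part of perstack) that accepts only commands whose first token is `npx` or `uvx`:

```python
import shlex

# Per the registry's publishing rule: only these launchers are allowed.
ALLOWED_LAUNCHERS = {"npx", "uvx"}

def is_allowed_skill_command(command: str) -> bool:
    """Return True if the skill command starts with an allowed launcher."""
    parts = shlex.split(command)
    return bool(parts) and parts[0] in ALLOWED_LAUNCHERS
```
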

perstack unpublish

Remove an Expert version from the registry.

perstack unpublish [expertKey] [options]

Arguments:

  • [expertKey]: Expert key with version (e.g., my-expert@1.0.0)

Options:

Option           Description
--config <path>  Path to perstack.toml
--force          Skip confirmation (required for non-interactive)

Example:

perstack unpublish                          # Interactive mode
perstack unpublish my-expert@1.0.0 --force  # Non-interactive

perstack tag

Add or update tags on an Expert version.

perstack tag [expertKey] [tags...] [options]

Arguments:

  • [expertKey]: Expert key with version (e.g., my-expert@1.0.0)
  • [tags...]: Tags to set (e.g., stable, beta)

Options:

Option           Description
--config <path>  Path to perstack.toml

Example:

perstack tag                              # Interactive mode
perstack tag my-expert@1.0.0 stable beta  # Set tags directly

perstack status

Change the status of an Expert version.

perstack status [expertKey] [status] [options]

Arguments:

  • [expertKey]: Expert key with version (e.g., my-expert@1.0.0)
  • [status]: New status (available, deprecated, disabled)

Options:

Option           Description
--config <path>  Path to perstack.toml

Example:

perstack status                             # Interactive mode
perstack status my-expert@1.0.0 deprecated  # Set status directly

Status      Meaning
available   Normal, visible in registry
deprecated  Still usable but discouraged
disabled    Cannot be executed