The Boundary Model

Isolation is one of Perstack’s three core goals. Experts are isolated from everything except their role β€” model access, context, tools, and dependencies are all mediated by the runtime.

The boundary model extends this isolation to the application architecture level: separate the application from the agent runtime.

The model

Human (User)
      β”‚
      β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Your Application                     β”‚
β”‚ - Receives user input                β”‚
β”‚ - Displays agent output              β”‚
β”‚ - Requires confirmation if needed    β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
               β”‚  boundary
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Sandbox                              β”‚
β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚
β”‚ β”‚ Perstack Runtime                 β”‚ β”‚
β”‚ β”‚ - Executes Experts               β”‚ β”‚
β”‚ β”‚ - Manages tools                  β”‚ β”‚
β”‚ β”‚ - Emits events                   β”‚ β”‚
β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

The application sits between the human and the agent. This is the human-in-the-loop boundary β€” your app decides what reaches the agent and what comes back to the user. You can require confirmation before sensitive actions, filter results, or pause execution for review.
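
For instance, an approval flow is just a branch in your event handler: when the runtime reports an action that needs sign-off, the application waits for the human before letting execution continue. The event shape and the "approval-required" type below are hypothetical illustrations, not Perstack's actual event schema; adapt them to the events your runtime emits.

```ts
// Sketch of a human-in-the-loop gate at the boundary.
// The event shape and the "approval-required" type are hypothetical.
interface AgentEvent {
  type: string;                       // e.g. "text", "tool-call", "approval-required"
  payload: Record<string, unknown>;
}

async function handleEvent(
  event: AgentEvent,
  askUser: (prompt: string) => Promise<boolean>,
): Promise<"continue" | "abort"> {
  if (event.type === "approval-required") {
    // Pause here: nothing proceeds until the human decides.
    const approved = await askUser(`Allow action: ${JSON.stringify(event.payload)}?`);
    return approved ? "continue" : "abort";
  }
  // Everything else is simply displayed (or filtered) by the application.
  return "continue";
}
```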

The agent runs on a runtime inside an isolated sandbox. Risk control happens at two layers:

  1. Runtime layer: Skill configuration, environment isolation, minimal privilege
  2. Sandbox layer: Network isolation, filesystem restrictions, resource limits
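
As a rough illustration of the runtime layer, a skill can declare exactly which environment variables and tools it needs, and nothing more. The object shape below is hypothetical (the actual Perstack configuration format may differ); the `requiredEnv` and `pick`/`omit` ideas are the ones referenced later under Security levels. The sandbox layer is illustrated with concrete Docker flags in "The boundary in practice" below.

```ts
// Hypothetical skill definition: only what is listed here is exposed to the Expert.
// The real Perstack config format may differ; this sketches the principle.
const skillConfig = {
  name: "github-triage",
  requiredEnv: ["GITHUB_TOKEN"],                        // minimal privilege: one secret, nothing else
  tools: { pick: ["issues.read", "issues.comment"] },   // or omit: [...] to exclude tools
};
```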

This creates a clear division of responsibility:

Role                        Concerns
Application developer       Build the human-agent interface, handle events, implement approval flows
Infrastructure / DevOps     Run sandboxes, configure isolation, manage security controls

Application developers focus on user experience and control flows. Infrastructure teams handle the sandbox. Neither needs to do the other’s job.

The boundary in practice

The boundary is enforced by infrastructure β€” typically Docker containers.

Application side (see Adding AI to Your App):

  • Spawn containers with queries
  • Read JSON events from stdout
  • Make decisions based on events
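
Concretely, the application side can be a few dozen lines. The sketch below assumes a hypothetical image name (`my-perstack-image`) and that the runtime emits one JSON event per stdout line; substitute the actual invocation from Adding AI to Your App.

```ts
import { spawn } from "node:child_process";
import { createInterface } from "node:readline";

// Minimal application-side sketch: spawn a container with a query and
// stream its JSON events. Image name and CLI shape are assumptions.
function runAgent(query: string, onEvent: (event: unknown) => void): Promise<number> {
  const child = spawn("docker", [
    "run", "--rm",
    "my-perstack-image",   // hypothetical image name
    query,
  ]);

  // Read newline-delimited JSON events from the container's stdout.
  const lines = createInterface({ input: child.stdout });
  lines.on("line", (line) => {
    try {
      onEvent(JSON.parse(line));
    } catch {
      // Ignore non-JSON output (e.g. startup noise).
    }
  });

  return new Promise((resolve, reject) => {
    child.on("error", reject);
    child.on("close", (code) => resolve(code ?? -1));
  });
}
```

Your approval flow then lives in the `onEvent` callback, which is exactly the boundary described above.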

Infrastructure side (see Going to Production):

  • Choose where to run containers (Docker, ECS, Cloud Run, Kubernetes)
  • Configure network isolation, resource limits, secrets
  • Route events back to the application
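
On the infrastructure side, the same container invocation carries the isolation settings. The flags below are standard Docker options; the image name, secret name, and workspace path are placeholders, and stricter network setups (an egress proxy that allows only the LLM API) replace `--network none`.

```ts
import { spawn } from "node:child_process";

// Sketch of a hardened per-execution container. Placeholders: image name,
// secret name, workspace path. The flags are standard Docker options.
function runAgentSandboxed(query: string) {
  return spawn("docker", [
    "run", "--rm",
    "--network", "none",                         // or an egress proxy allowing the LLM API only
    "--read-only",                               // immutable root filesystem
    "--memory", "1g", "--cpus", "1",             // resource limits
    "-e", "LLM_API_KEY",                         // forward the secret from the host env
    "-v", "/srv/workspaces/job-123:/workspace",  // per-execution workspace
    "my-perstack-image",                         // hypothetical image name
    query,
  ]);
}
```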

Why this matters

This architecture might look like over-engineering. It’s not β€” it’s the minimum viable design for multi-user agent applications.

Security levels

Agent application security can be visualized as layers. Where you start determines how much work you need to do:

Security level
↑ Secure
β”‚
β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ β”‚ Your responsibility                                      β”‚
β”‚ β”‚ - Container isolation (--network none, --read-only)      β”‚
β”‚ β”‚ - Network controls (allow LLM API only)                  β”‚
β”‚ β”‚ - Human-in-the-loop (approval flows)                     β”‚
β”‚ β”‚ - MCP server trust                                       β”‚
β”‚ β”‚                                                          β”‚
β”‚ β”‚ ↑ Perstack developers start here                         β”‚
β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ β”‚ What Perstack protects                                   β”‚
β”‚ β”‚ - Workspace boundary (path validation)                   β”‚
β”‚ β”‚ - Skill isolation (requiredEnv, pick/omit)               β”‚
β”‚ β”‚ - Event-based output (no direct network)                 β”‚
β”‚ β”‚ - Full observability (all events logged)                 β”‚
β”‚ β”‚                                                          β”‚
β”‚ β”‚ βœ“ Automatic when using Perstack                          β”‚
β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ β”‚ Guaranteed insecure                                      β”‚
β”‚ β”‚ - Shared process across users                            β”‚
β”‚ β”‚ - Unrestricted file access                               β”‚
β”‚ β”‚ - Data access + unrestricted network                     β”‚
β”‚ β”‚ - Untrusted MCP servers                                  β”‚
β”‚ β”‚                                                          β”‚
β”‚ β”‚ ↑ Traditional framework developers start here            β”‚
β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
β”‚
↓ Insecure

Traditional agent frameworks (LangChain, CrewAI, etc.) run agents inside your application process. You start at β€œGuaranteed insecure” and must climb every level yourself.

Perstack is designed for container-per-execution. You start at β€œYour responsibility” β€” the lower levels are handled automatically.

Scaling benefits

Traditional agent frameworks run agents inside your API server. This creates fundamental conflicts:

The co-location problem

Your API server wants…      Your agent wants…
Fast response (< 100ms)     Long execution (seconds to minutes)
Low memory footprint        Large context windows, tool state
High concurrency            Exclusive CPU during inference
Stateless requests          Persistent conversation state

When they share a process:

  • Agent requests block threads for minutes, starving other requests
  • Load balancer timeouts (30-60s) kill long-running agents mid-execution
  • Memory pressure from multiple agents crashes the whole server
  • One agent’s infinite loop takes down your API

The separation problem

β€œJust run agents in a separate service” sounds simple. In practice:

  • Event streaming: How do you stream events back? Build WebSocket/SSE infrastructure?
  • State management: Where does conversation state live? Redis? Database?
  • Job queue: Do you need Celery/Bull/SQS? How do you handle retries?
  • Service discovery: How does your API find agent workers?
  • Cold start: Serverless agents have 10-30s cold starts. Acceptable?

You end up building distributed systems infrastructure just to run agents.

Perstack’s approach

The boundary model sidesteps both problems:

  • Container-per-execution: No co-location conflicts. Each agent gets dedicated resources.
  • Stdout events: No WebSocket infrastructure. Just read container logs.
  • Checkpoint files: No external state store. State lives in the workspace.
  • Simple interface: docker run with a query. No service mesh required.

You get the benefits of separation without the infrastructure complexity.
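
For example, "state lives in the workspace" means each turn can be a brand-new container that simply mounts the same directory the previous turn used, so checkpoint files persist without Redis or a database. The sketch below rests on that assumption; the paths, image name, and CLI shape are placeholders.

```ts
import { spawn } from "node:child_process";

// Sketch: every turn is a fresh container, but all turns in a conversation
// mount the same workspace, so checkpoint files carry state between them.
// Paths, image name, and CLI shape are assumptions.
function runTurn(conversationId: string, query: string) {
  const workspace = `/srv/workspaces/${conversationId}`; // reused across turns
  return spawn("docker", [
    "run", "--rm",
    "-v", `${workspace}:/workspace`,
    "my-perstack-image",
    query,
  ]);
}
```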
