The Boundary Model
Isolation is one of Perstack's three core goals. Experts are isolated from everything except their role: model access, context, tools, and dependencies are all mediated by the runtime.
The boundary model extends this isolation to the application architecture level: separate the application from the agent runtime.
The model
Human (User)
      │
      ▼
┌───────────────────────────────────────┐
│            Your Application           │
│  - Receives user input                │
│  - Displays agent output              │
│  - Requires confirmation if needed    │
└───────────────┬───────────────────────┘
                │ boundary
┌───────────────▼───────────────────────┐
│                Sandbox                │
│  ┌─────────────────────────────────┐  │
│  │        Perstack Runtime         │  │
│  │  - Executes Experts             │  │
│  │  - Manages tools                │  │
│  │  - Emits events                 │  │
│  └─────────────────────────────────┘  │
└───────────────────────────────────────┘

The application sits between the human and the agent. This is the human-in-the-loop boundary: your app decides what reaches the agent and what comes back to the user. You can require confirmation before sensitive actions, filter results, or pause execution for review.
The agent runs on the Perstack runtime inside an isolated sandbox. Risk control happens at two layers:
- Runtime layer: Skill configuration, environment isolation, minimal privilege (see the sketch below)
- Sandbox layer: Network isolation, filesystem restrictions, resource limits
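To make the runtime layer concrete, here is a sketch of what minimal-privilege Skill configuration looks like in spirit. The `requiredEnv` and `pick`/`omit` controls are the ones named in the security-levels diagram later on this page; the surrounding shape and field names are assumptions for illustration, not the actual Perstack schema.

```ts
// Illustrative Skill definition; the real Perstack schema may differ.
const databaseSkill = {
  name: "query-database",
  // Environment isolation: the runtime injects only the variables the
  // Skill declares, so nothing else in the environment is visible to it.
  requiredEnv: ["DATABASE_URL"],
  // Minimal privilege: pick only the tools the Expert needs, and omit
  // anything destructive.
  tools: {
    pick: ["select", "explain"],
    omit: ["drop", "truncate"],
  },
};
```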
This two-layer split creates a clear division of responsibility:
| Role | Concerns |
|---|---|
| Application developer | Build the human-agent interface, handle events, implement approval flows |
| Infrastructure / DevOps | Run sandboxes, configure isolation, manage security controls |
Application developers focus on user experience and control flows. Infrastructure teams handle the sandbox. Neither needs to do the other's job.
The boundary in practice
The boundary is enforced by infrastructure, typically Docker containers.
Application side (see Adding AI to Your App):
- Spawn containers with queries
- Read JSON events from stdout
- Make decisions based on events (see the sketch below)
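A minimal sketch of that loop, assuming one JSON event per stdout line. The image name, CLI flags, and event fields (`type`, `tool`, `text`) are illustrative, not the actual Perstack interface:

```ts
import { spawn } from "node:child_process";
import { createInterface } from "node:readline";

// Spawn a container with the user's query.
const agent = spawn("docker", [
  "run", "--rm", "perstack-runtime",
  "--query", "Summarize yesterday's error logs",
]);

// Read one JSON event per line from the container's stdout.
const events = createInterface({ input: agent.stdout });

events.on("line", (line) => {
  const event = JSON.parse(line);
  switch (event.type) {
    case "toolCall":
      // Human-in-the-loop: surface sensitive actions for approval
      // before they proceed.
      console.log(`Agent wants to call: ${event.tool}`);
      break;
    case "result":
      // Display agent output to the user.
      console.log(event.text);
      break;
  }
});
```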
Infrastructure side (see Going to Production):
- Choose where to run containers (Docker, ECS, Cloud Run, Kubernetes)
- Configure network isolation, resource limits, secrets (sketched below)
- Route events back to the application
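The isolation itself is ordinary container configuration rather than anything Perstack-specific. Here is a sketch of hardened arguments for the spawn above, with flag values as examples to adapt to your environment:

```ts
// Hardened `docker run` arguments. `--network none` is the strictest
// setting; in practice you would allow the LLM API only, for example
// through an egress proxy.
const hardenedArgs = (query: string) => [
  "run", "--rm",
  "--network", "none",  // network isolation
  "--read-only",        // filesystem restrictions
  "--memory", "2g",     // resource limits
  "--cpus", "1",
  "--env", `LLM_API_KEY=${process.env.LLM_API_KEY}`, // secret injection
  "perstack-runtime",   // illustrative image name
  "--query", query,
];
```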
Why this matters
This architecture might look like over-engineering. It's not: it's the minimum viable design for multi-user agent applications.
Security levels
Agent application security can be visualized as layers. Where you start determines how much work you need to do:
Security level
 ▲ Secure
 │
 │  ┌───────────────────────────────────────────────────────────┐
 │  │ Your responsibility                                       │
 │  │ - Container isolation (--network none, --read-only)       │
 │  │ - Network controls (allow LLM API only)                   │
 │  │ - Human-in-the-loop (approval flows)                      │
 │  │ - MCP server trust                                        │
 │  │                                                           │
 │  │ ← Perstack developers start here                          │
 │  └───────────────────────────────────────────────────────────┘
 │  ┌───────────────────────────────────────────────────────────┐
 │  │ What Perstack protects                                    │
 │  │ - Workspace boundary (path validation)                    │
 │  │ - Skill isolation (requiredEnv, pick/omit)                │
 │  │ - Event-based output (no direct network)                  │
 │  │ - Full observability (all events logged)                  │
 │  │                                                           │
 │  │ ← Automatic when using Perstack                           │
 │  └───────────────────────────────────────────────────────────┘
 │  ┌───────────────────────────────────────────────────────────┐
 │  │ Guaranteed insecure                                       │
 │  │ - Shared process across users                             │
 │  │ - Unrestricted file access                                │
 │  │ - Data access + unrestricted network                      │
 │  │ - Untrusted MCP servers                                   │
 │  │                                                           │
 │  │ ← Traditional framework developers start here             │
 │  └───────────────────────────────────────────────────────────┘
 │
 ▼ Insecure

Traditional agent frameworks (LangChain, CrewAI, etc.) run agents inside your application process. You start at "Guaranteed insecure" and must climb every level yourself.
Perstack is designed for container-per-execution. You start at "Your responsibility": the lower levels are handled automatically.
Scaling benefits
Traditional agent frameworks run agents inside your API server. This creates fundamental conflicts:
The co-location problem
| Your API server wants… | Your agent wants… |
|---|---|
| Fast response (< 100ms) | Long execution (seconds to minutes) |
| Low memory footprint | Large context windows, tool state |
| High concurrency | Exclusive CPU during inference |
| Stateless requests | Persistent conversation state |
When they share a process:
- Agent requests block threads for minutes, starving other requests
- Load balancer timeouts (30-60s) kill long-running agents mid-execution
- Memory pressure from multiple agents crashes the whole server
- One agent's infinite loop takes down your API
The separation problem
"Just run agents in a separate service" sounds simple. In practice:
- Event streaming: How do you stream events back? Build WebSocket/SSE infrastructure?
- State management: Where does conversation state live? Redis? Database?
- Job queue: Do you need Celery/Bull/SQS? How do you handle retries?
- Service discovery: How does your API find agent workers?
- Cold start: Serverless agents have 10-30s cold starts. Acceptable?
You end up building distributed systems infrastructure just to run agents.
Perstack's approach
The boundary model sidesteps both problems:
- Container-per-execution: No co-location conflicts. Each agent gets dedicated resources.
- Stdout events: No WebSocket infrastructure. Just read container logs.
- Checkpoint files: No external state store. State lives in the workspace (see the sketch below).
- Simple interface: `docker run` with a query. No service mesh required.
You get the benefits of separation without the infrastructure complexity.
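For example, one way "state lives in the workspace" can play out operationally: mount the same host directory as the workspace on every run, and checkpoint files survive across containers with no external store. The paths, flags, and image name here are illustrative:

```ts
import { spawn } from "node:child_process";

// Each conversation gets a host directory that is mounted as the
// container's workspace on every turn.
function runTurn(conversationId: string, query: string) {
  return spawn("docker", [
    "run", "--rm",
    "--volume", `/var/agents/${conversationId}:/workspace`,
    "perstack-runtime",
    "--query", query,
  ]);
}

// Calling runTurn("conv-42", ...) again later reuses the same
// workspace, so the runtime can resume from its checkpoint files.
```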
What's next
- Adding AI to Your App – Application developer guide
- Going to Production – Infrastructure / DevOps guide
- Sandbox Integration – Technical deep dive on sandbox security