When you're building AI agents, you think about tools, memory, orchestration. You probably don't start with security. But a team at Xcapit did — and what they found changed how they think about agent architecture entirely.

They were hired to audit an open-source AI agent framework built in Python. What they discovered wasn't pretty.

The Problem With Python Agent Frameworks

The audit uncovered issues that couldn't simply be patched away.

These aren't unique to that framework. They're systemic to Python-based agents. When your agent can execute arbitrary code, run shell commands, access databases — and it's all running in the same memory space as your LLM prompts — you've got a massive attack surface.

Most teams mitigate this with "be careful" and "don't let agents run untrusted code." But that's not a security model. That's hope.

The Answer: Build It in Rust

Rather than patching, Xcapit built Agentor — a 13-crate Rust framework purpose-built for secure AI agents. Key design choices:

WASM Sandboxing

Every tool runs inside a WebAssembly sandbox. Not as an afterthought — as the foundation. An agent can invoke a tool, and that tool has exactly the permissions the sandbox grants. Nothing more. The LLM prompt? Completely isolated from the tool execution environment.

Cold starts come in under 50ms. That's fast enough to spawn isolated tool environments on demand.
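Agentor's real API isn't shown in this post, but the capability model the sandbox enforces can be sketched in plain Rust: a sandbox holds an explicit allow-list, and a tool invocation fails unless every capability it needs was granted up front. The names here (`Sandbox`, `Capability`, `invoke`) are illustrative, not Agentor's actual types.

```rust
use std::collections::HashSet;

/// Permissions a sandboxed tool may be granted. Illustrative only.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum Capability {
    ReadFile,
    NetworkEgress,
    SpawnProcess,
}

/// A sandbox grants exactly the capabilities listed at construction —
/// everything else is denied by default.
struct Sandbox {
    granted: HashSet<Capability>,
}

impl Sandbox {
    fn new(granted: &[Capability]) -> Self {
        Self { granted: granted.iter().copied().collect() }
    }

    /// Run a tool only if all of its required capabilities were granted.
    fn invoke(&self, tool: &str, requires: &[Capability]) -> Result<String, String> {
        for cap in requires {
            if !self.granted.contains(cap) {
                return Err(format!("{tool}: capability {cap:?} not granted"));
            }
        }
        Ok(format!("{tool}: executed inside sandbox"))
    }
}

fn main() {
    // Grant only file reads; nothing else.
    let sandbox = Sandbox::new(&[Capability::ReadFile]);

    // A tool within its grant succeeds.
    assert!(sandbox.invoke("read_config", &[Capability::ReadFile]).is_ok());
    // A tool asking for shell access is rejected, not trusted.
    assert!(sandbox.invoke("run_shell", &[Capability::SpawnProcess]).is_err());
}
```

The key design point is that the check happens at the boundary, per invocation, instead of relying on the tool author to behave.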

MCP Protocol First

The Model Context Protocol (MCP) is the primary integration layer, not a bolt-on. This means tools, resources, and prompts are all typed and bounded. No more "guess what this tool does" with stringly-typed APIs.
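What "typed and bounded" buys you can be sketched with a small Rust trait: the tool's input and output are concrete types checked by the compiler, rather than strings the agent has to guess at. This is a hypothetical interface for illustration, not Agentor's actual MCP types.

```rust
/// A tool with typed, bounded input and output.
/// Illustrative trait, not Agentor's real API.
trait Tool {
    type Input;
    type Output;
    fn name(&self) -> &'static str;
    fn call(&self, input: Self::Input) -> Result<Self::Output, String>;
}

/// Typed input: malformed requests are rejected before execution.
struct WeatherQuery {
    city: String,
}

struct WeatherReport {
    temperature_c: f64,
}

struct WeatherTool;

impl Tool for WeatherTool {
    type Input = WeatherQuery;
    type Output = WeatherReport;

    fn name(&self) -> &'static str {
        "weather"
    }

    fn call(&self, input: WeatherQuery) -> Result<WeatherReport, String> {
        // Stubbed lookup; a real tool would hit a data source.
        if input.city.is_empty() {
            return Err("city must be non-empty".into());
        }
        Ok(WeatherReport { temperature_c: 21.5 })
    }
}

fn main() {
    let tool = WeatherTool;
    let report = tool.call(WeatherQuery { city: "Lisbon".into() }).unwrap();
    assert_eq!(report.temperature_c, 21.5);
    println!("{} -> {} C", tool.name(), report.temperature_c);
}
```

Contrast this with a `fn call(&self, json: &str) -> String` signature, where every caller has to re-derive the schema from documentation or guesswork.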

Zero Unsafe Blocks

483+ tests, zero unsafe blocks in the core framework. This matters for compliance: safety and assurance regimes such as DO-178C, ISO 26262, and FedRAMP all want demonstrable evidence about memory safety, and Rust gives you that by default. But they went further: every crate auditable, every dependency vetted.
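"Zero unsafe blocks" doesn't have to be a convention enforced by code review. Rust can make it machine-checked: the crate-root attribute `#![forbid(unsafe_code)]` turns any `unsafe` block anywhere in the crate into a hard compile error, so the property holds in CI by construction.

```rust
// Crate-root attribute: any `unsafe` block in this crate now fails
// to compile, so the zero-unsafe guarantee is machine-checked.
#![forbid(unsafe_code)]

fn main() {
    // Ordinary safe code compiles and runs as usual.
    let xs = vec![1, 2, 3];
    let sum: i32 = xs.iter().sum();
    assert_eq!(sum, 6);
    println!("sum = {sum}");

    // Uncommenting the line below would be a compile error:
    // unsafe { std::hint::unreachable_unchecked(); }
}
```

Unlike `#![deny(unsafe_code)]`, `forbid` also prevents inner code from re-enabling the lint with an `#[allow]`, which is what makes it suitable as an auditable guarantee.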

What This Means for Rust Agent Builders

Agentor isn't the only option — you've got glue, ragen, oxigraph. But it represents a shift in how we're thinking about agent security:

  1. Sandbox, don't just restrict — Isolate everything by default
  2. Compliance as architecture — Build it in from day one, not bolt it on
  3. Rust isn't just faster — It's one of the few mainstream languages that pairs memory safety with the low-level control needed to host untrusted code

The old model — "just don't let agents run dangerous tools" — is fading. The new model: tools run in sandboxes, prompts can't reach the filesystem, and your agent architecture is auditable end-to-end.

Python will still dominate for prototyping. But for production agents that touch sensitive systems? Security-first frameworks like Agentor represent where this space is heading.


Agentor is open source. The security audit that inspired it is worth reading too — it's a good example of how security thinking changes architecture.