Six months ago, if you wanted to build AI agents in Rust, you were largely on your own. The ecosystem had libraries — reqwest for HTTP, tokio for async, serde for serialization — but no real framework for stitching them into autonomous loops.

That's changing fast.

I just finished a community scan of what's emerging, and three projects stood out: GraphBit, ADK-Rust, and the broader MCP ecosystem. Each takes a different bet on what Rust agents need. Here's the landscape as I see it.

GraphBit: The Enterprise Play

GraphBit positions itself as the "first open-source Rust core agentic AI framework." The pitch is clear: Python-centric frameworks (LangChain, CrewAI) were built for research, not production. They leak memory, stall under concurrency, and hide failures.

Their solution: a Rust core with Python bindings. The orchestration runs in compiled Rust. The developer experience stays Python-first.
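To make the split concrete, here's a minimal sketch of what the Rust side of that architecture tends to look like. Everything here is hypothetical (the names and types are mine, not GraphBit's): the core is plain compiled Rust, and a thin binding layer (PyO3, in GraphBit's case) would wrap the entry point so Python callers never touch the internals.

```rust
// Hypothetical core API, not GraphBit's actual one: the orchestration
// loop is plain Rust. A PyO3 shim would wrap `run_agent` with
// #[pyfunction] so the Python-facing surface stays thin.

pub struct AgentOutcome {
    pub answer: String,
    pub steps: usize,
}

/// The agent loop: compiled code, no GIL, no hidden state.
pub fn run_agent(task: &str, max_steps: usize) -> AgentOutcome {
    let mut steps = 0;
    let mut scratch = format!("task: {task}");
    while steps < max_steps {
        // A real core would call models or tools here; this stub
        // just records that a step ran.
        scratch.push_str(" | step");
        steps += 1;
    }
    AgentOutcome { answer: scratch, steps }
}

fn main() {
    let out = run_agent("summarize logs", 2);
    println!("{} ({} steps)", out.answer, out.steps);
}
```

The point of the shape, not the stub: the boundary type (`AgentOutcome`) is the only thing the friendly layer ever sees.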

What they got right:

What concerns me:

ADK-Rust: The Modular Giant

ADK-Rust is the opposite bet: build everything in Rust, make it modular, and support everything.

22 crates. That's not a typo. They have separate crates for:

This is the "kitchen sink" approach. If GraphBit is a scalpel, ADK-Rust is a Swiss army knife.

What they got right:

What concerns me:

The Pattern No One's Talking About

Here's what I find most interesting: both frameworks made the same architectural choice, coming from opposite directions.

GraphBit started with Rust, added Python for ergonomics. ADK-Rust started with Rust, added modularity for flexibility.

Both landed on: Rust core, orchestration in compiled code, developer experience in a friendly layer.

This is the exact trajectory ZeroClaw took. I didn't know about either project when I started building, but the constraints of Rust pushed me to the same conclusions:

  1. The agent loop can't live in Python (too slow, too much GIL contention)
  2. State management needs to be explicit (workflow DAGs, not implicit callbacks)
  3. Observability isn't optional (you can't debug what you can't see)
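Those three constraints can be sketched in one toy orchestrator. This is my own illustration, not code from GraphBit, ADK-Rust, or ZeroClaw: nodes are explicit DAG entries that transform a shared state map, and every step is traced as it runs rather than hidden behind callbacks.

```rust
use std::collections::HashMap;

// A node transforms explicit, inspectable state; no implicit callbacks.
type NodeFn = fn(&mut HashMap<String, String>);

struct Node {
    name: &'static str,
    deps: Vec<&'static str>, // edges in the workflow DAG
    run: NodeFn,
}

fn plan(state: &mut HashMap<String, String>) {
    state.insert("plan".into(), "draft outline".into());
}

fn execute(state: &mut HashMap<String, String>) {
    let plan = state.get("plan").cloned().unwrap_or_default();
    state.insert("result".into(), format!("executed: {plan}"));
}

// Run nodes in dependency order; the whole loop lives in compiled code.
fn run_dag(mut pending: Vec<Node>) -> HashMap<String, String> {
    let mut state = HashMap::new();
    let mut done: Vec<&str> = Vec::new();
    while !pending.is_empty() {
        let idx = pending
            .iter()
            .position(|n| n.deps.iter().all(|d| done.contains(d)))
            .expect("cycle in DAG");
        let node = pending.remove(idx);
        (node.run)(&mut state);
        // Observability baked in: every step emits a trace line.
        eprintln!("[trace] ran node `{}`", node.name);
        done.push(node.name);
    }
    state
}

fn main() {
    let nodes = vec![
        Node { name: "execute", deps: vec!["plan"], run: execute },
        Node { name: "plan", deps: vec![], run: plan },
    ];
    let state = run_dag(nodes);
    println!("{}", state["result"]);
}
```

Even at toy scale, notice what the constraints buy you: the scheduler can see the full graph up front, and the trace log tells you exactly which node ran when, with no Python in the hot loop.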

What This Means for You

If you're starting a Rust agent project today:

The bigger picture: we're past the "is Rust viable for agents?" question. The question now is "which architecture pattern wins?" — and the answer seems to be "Rust core, Python/Rust UX layer, DAG-based orchestration, built-in observability."

That's three independent projects reaching the same conclusion. Usually that means something's real.


The community scan also turned up LocalGPT (local AI assistant with persistent memory) and a fascinating discussion about agent memory architectures on HN. I'll be digging into those in future posts.