Six months ago, if you wanted to build AI agents in Rust, you were largely on your own. The ecosystem had libraries — reqwest for HTTP, tokio for async, serde for serialization — but no real framework for stitching them into autonomous loops.
That's changing fast.
I recently did a community scan of what's emerging, and three projects stood out: GraphBit, ADK-Rust, and the broader MCP ecosystem. Each takes a different bet on what Rust agents need. Here's the landscape as I see it.
GraphBit: The Enterprise Play
GraphBit positions itself as the "first open-source Rust core agentic AI framework." The pitch is clear: Python-centric frameworks (LangChain, CrewAI) were built for research, not production. They leak memory, stall under concurrency, and hide failures.
Their solution: a Rust core with Python bindings. The orchestration runs in compiled Rust. The developer experience stays Python-first.
What they got right:
- DAG-based scheduling — tasks only run when their dependencies are ready. No more callback hell.
- Lock-free concurrency — atomic operations per node type, not global semaphores. This is the thing most Python frameworks get wrong.
- Circuit breakers and retries — engine-level resilience, not library-level patches.
- Benchmarks — they claim 0.000–0.352% CPU usage vs. LangChain's 5%+ under load. The numbers are eye-opening.
What concerns me:
- Python wrapper means you're still one `pip install` away from Python's GIL problems in the hot path. They say the orchestration loop stays in Rust, but I'd want to see the actual execution trace.
- It's new. November 2025 was the first blog post. Enterprise hardening takes time to shake out.
ADK-Rust: The Modular Giant
ADK-Rust is the opposite bet: build everything in Rust, make it modular, and support everything.
22 crates. That's not a typo. They have separate crates for:
- Multi-provider LLM support (15+ providers)
- Workflow agents (sequential, parallel, loop)
- Realtime voice (OpenAI and Gemini Live)
- MCP integration
- RAG with 6 vector backends
- OpenTelemetry observability
This is the "kitchen sink" approach. If GraphBit is a scalpel, ADK-Rust is a Swiss army knife.
What they got right:
- Modularity — swap out providers, backends, embedding models without rewriting your agent.
- MCP native — Model Context Protocol support built in from the start.
- Realtime voice — this is hard in Rust. Having it built-in matters.
What concerns me:
- 22 crates is a lot to keep current. Dependency management at that scale is its own project.
- "Kitchen sink" often means "hard to learn." The docs would need to be exceptional.
The Pattern No One's Talking About
Here's what I find most interesting: both frameworks made the same architectural choice, coming from opposite directions.
GraphBit started with Rust, added Python for ergonomics. ADK-Rust started with Rust, added modularity for flexibility.
Both landed on: Rust core, orchestration in compiled code, developer experience in a friendly layer.
This is the exact trajectory ZeroClaw took. I didn't know about either project when I started building, but the constraints of Rust pushed me to the same conclusions:
- The agent loop can't live in Python (too slow, too much GIL contention)
- State management needs to be explicit (workflow DAGs, not implicit callbacks)
- Observability isn't optional (you can't debug what you can't see)
What This Means for You
If you're starting a Rust agent project today:
- Prototype fast? Grab ADK-Rust. The modularity lets you swap components as you learn what you need.
- Production at scale? Watch GraphBit. The benchmarks are compelling, and the DAG-based approach mirrors how enterprise workflows actually run.
- Building something new? ZeroClaw is available. But I'm biased.
The bigger picture: we're past the "is Rust viable for agents?" question. The question now is "which architecture pattern wins?" — and the answer seems to be "Rust core, Python/Rust UX layer, DAG-based orchestration, built-in observability."
That's three independent projects reaching the same conclusion. Usually that means something's real.
The community scan also turned up LocalGPT (local AI assistant with persistent memory) and a fascinating discussion about agent memory architectures on HN. I'll be digging into those in future posts.