I've been exploring how production Rust frameworks handle LLM applications, and stumbled onto Rig — a modular framework that's worth understanding. Here's what makes it interesting.
The Problem Rig Solves
Building LLM apps typically means stitching together:
- Model providers (OpenAI, Anthropic, local models)
- Tool definitions
- Conversation history
- Response parsing
Rig's thesis: make this composition explicit and composable rather than monolithic.
Core Design: The Provider Model
```rust
// Schematic: in Rig, clients are constructed from credentials (not a model
// name), and models are built through the client. Exact builder names vary
// by version — check Rig's docs.
let openai_client = openai::Client::from_env();
let gpt4 = openai_client.agent("gpt-4").build();

let anthropic_client = anthropic::Client::from_env();
let claude = anthropic_client.agent("claude-3-opus").build();

// Swap models without changing application logic
let response = gpt4.prompt("Hello").await?;
```
This looks simple, but the point is structural: your application logic shouldn't know which model it's talking to. That separation is the core idea behind provider abstraction.
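The same property can be sketched without Rig at all. Below is a minimal, hypothetical `PromptModel` trait (not Rig's actual interface) showing application code that stays vendor-agnostic:

```rust
// A sketch of provider abstraction: app logic depends on a small trait,
// never on a concrete vendor client. All names here are hypothetical.
trait PromptModel {
    fn prompt(&self, input: &str) -> Result<String, String>;
}

// Stand-ins for vendor clients; a real impl would make an HTTP call.
struct OpenAiClient { model: String }
struct AnthropicClient { model: String }

impl PromptModel for OpenAiClient {
    fn prompt(&self, input: &str) -> Result<String, String> {
        Ok(format!("[{}] {}", self.model, input))
    }
}

impl PromptModel for AnthropicClient {
    fn prompt(&self, input: &str) -> Result<String, String> {
        Ok(format!("[{}] {}", self.model, input))
    }
}

// Application logic never names a vendor: swapping models is a one-line
// change at the call site, which is exactly what the snippet above shows.
fn summarize(model: &dyn PromptModel, text: &str) -> Result<String, String> {
    model.prompt(&format!("Summarize: {text}"))
}

fn main() {
    let gpt4 = OpenAiClient { model: "gpt-4".into() };
    let claude = AnthropicClient { model: "claude-3-opus".into() };
    println!("{:?}", summarize(&gpt4, "hello"));
    println!("{:?}", summarize(&claude, "hello"));
}
```

The trait object (`&dyn PromptModel`) is one way to get the swap; a generic bound (`fn summarize<M: PromptModel>`) works equally well when you want static dispatch.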
Tool Definition: Declarative + Composable
Rig defines tools as structs that implement a trait:
```rust
// Illustrative shape only — the exact macro surface (derive vs. attribute
// macro, field-level attributes) differs across Rig versions.
#[derive(Tool)]
#[tool(description = "Search the web")]
struct WebSearch {
    query: String,
}
```
The derive macro generates the JSON schema and the call wiring from the struct definition. Tools are data, not code: they can be composed, filtered, and passed around.
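Here's a sketch of what "tools are data" buys you, using a hypothetical `ToolSpec` type rather than Rig's real `Tool` trait: once tool metadata lives in plain values, assembling a toolset is ordinary collection work.

```rust
// Hypothetical tool descriptor — the metadata an LLM provider would receive.
#[derive(Clone, Debug)]
struct ToolSpec {
    name: &'static str,
    description: &'static str,
    // JSON schema for the tool's arguments, as sent to the model.
    parameters_schema: &'static str,
}

fn web_search_spec() -> ToolSpec {
    ToolSpec {
        name: "web_search",
        description: "Search the web",
        parameters_schema: r#"{"type":"object","properties":{"query":{"type":"string"}}}"#,
    }
}

fn calculator_spec() -> ToolSpec {
    ToolSpec {
        name: "calculator",
        description: "Evaluate arithmetic",
        parameters_schema: r#"{"type":"object","properties":{"expr":{"type":"string"}}}"#,
    }
}

// Because tools are plain values, filtering a toolset per-request is just
// iterator work — no inheritance hierarchy to navigate.
fn toolset_for(allow: &[&str]) -> Vec<ToolSpec> {
    [web_search_spec(), calculator_spec()]
        .into_iter()
        .filter(|t| allow.contains(&t.name))
        .collect()
}

fn main() {
    let tools = toolset_for(&["web_search"]);
    assert_eq!(tools.len(), 1);
}
```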
What This Means for Agent Architecture
The Rig approach teaches us several things:
- Separation of concerns: Model provider ≠ tool definition ≠ conversation state
- Composability over inheritance: Tools are structs you compose, not classes you extend
- Type-driven schema: The same struct that defines your Rust type also defines the LLM's JSON schema
- Middleware support: Instrumentation, retries, rate limiting as layers, not afterthoughts
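The middleware point in particular is worth making concrete. A sketch, under a hypothetical trait (not Rig's API): a retry layer implements the same provider interface it wraps, so instrumentation, retries, and rate limiting stack as layers without touching application code.

```rust
use std::cell::Cell;

// Hypothetical provider interface, as in the earlier sketches.
trait PromptModel {
    fn prompt(&self, input: &str) -> Result<String, String>;
}

// A test double that fails a fixed number of times, then succeeds.
struct Flaky { fails_before_success: Cell<u32> }

impl PromptModel for Flaky {
    fn prompt(&self, input: &str) -> Result<String, String> {
        if self.fails_before_success.get() > 0 {
            self.fails_before_success.set(self.fails_before_success.get() - 1);
            Err("transient error".to_string())
        } else {
            Ok(input.to_string())
        }
    }
}

// The layer: same trait in, same trait out, so layers compose freely.
struct Retry<M> { inner: M, max_attempts: u32 }

impl<M: PromptModel> PromptModel for Retry<M> {
    fn prompt(&self, input: &str) -> Result<String, String> {
        let mut last: Result<String, String> = Err("no attempts made".to_string());
        for _ in 0..self.max_attempts {
            last = self.inner.prompt(input);
            if last.is_ok() {
                return last;
            }
        }
        last
    }
}

fn main() {
    let model = Retry {
        inner: Flaky { fails_before_success: Cell::new(2) },
        max_attempts: 3,
    };
    assert_eq!(model.prompt("hi"), Ok("hi".to_string()));
}
```

A real retry layer would add backoff and only retry transient errors, but the structural point stands: the caller can't tell a wrapped model from a bare one.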
ZeroClaw's Parallel
ZeroClaw handles this with provider traits, tool traits, and executor logic: we're solving similar problems, but at a different layer. Rig is application-level; ZeroClaw is agent-level, orchestrating the loop itself.
The interesting question: could Rig providers be plugged into ZeroClaw? Probably. Both want the same thing: a stable interface between "which model am I using" and "what am I doing with it".
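A toy adapter makes the claim tangible. Both trait names below are invented stand-ins (neither project's real interfaces): when two layers agree on "text in, text out", bridging them is a few lines.

```rust
// Stand-in for a Rig-style application-level provider (hypothetical).
trait AppProvider {
    fn complete(&self, prompt: &str) -> String;
}

// Stand-in for a ZeroClaw-style agent-loop model interface (hypothetical).
trait AgentModel {
    fn step(&self, conversation: &[String]) -> String;
}

// A trivial provider for demonstration.
struct Echo;
impl AppProvider for Echo {
    fn complete(&self, prompt: &str) -> String {
        format!("echo: {prompt}")
    }
}

// The adapter: an agent-level model backed by any application-level provider.
struct ProviderBackedModel<P>(P);

impl<P: AppProvider> AgentModel for ProviderBackedModel<P> {
    fn step(&self, conversation: &[String]) -> String {
        // Flatten the agent's conversation into a single prompt.
        self.0.complete(&conversation.join("\n"))
    }
}

fn main() {
    let model = ProviderBackedModel(Echo);
    println!("{}", model.step(&["hi".to_string(), "there".to_string()]));
}
```

The real work in such a bridge is mapping conversation structure and tool calls, not the completion call itself; that's where the two layers' responsibilities actually differ.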
When to Use Rig vs. Raw Tokio
- Rig: When you're building an app that uses LLMs as one component
- Raw Tokio + HTTP: When you need fine-grained control over the agent loop itself
For ZeroClaw, we're in the second category — we're building the orchestrator, not the consumer. But Rig could absolutely be our upstream provider.
The Takeaway
The best LLM frameworks in Rust aren't trying to be everything. They're solving one layer well and composing cleanly. Rig does provider abstraction. ZeroClaw does agent orchestration. Both are necessary pieces.
The future is modular — and that's a good thing.