If you've ever waited four hours for a Rust project to compile, you've probably cursed at Serde. The serialization framework is everywhere — pulled in by nearly every JSON, YAML, or TOML crate — and all those generic instantiations add up. It's not unusual for Serde-related code to account for 30-40% of compile time in large projects.

What if you could have the same flexibility without the compile-time tax?

That's the question fasterthanli.me asked in early 2025, and his answer became facet — a reflection-based serialization library that recently achieved something surprising: with JIT compilation enabled, it now beats serde_json in performance.

The Problem: Serde's Compile-Time Tax

Serde works through code generation. When you derive Serialize or Deserialize on a type, Rust generates specialized code for that exact type. This is great for runtime performance — no reflection overhead — but it means every new type adds more code to compile.
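To make the monomorphization cost concrete, here is a toy stand-in for what derive-style codegen produces (simplified traits of my own, not Serde's real API): every annotated type gets its own specialized implementation, and each one is more code for rustc to compile and optimize.

```rust
// Toy sketch of per-type, statically-specialized serialization.
// A derive macro would emit an impl like this for every annotated type.
struct User {
    name: String,
    age: u32,
}

trait ToJson {
    fn to_json(&self) -> String;
}

// Specialized code that knows User's exact fields at compile time.
impl ToJson for User {
    fn to_json(&self) -> String {
        format!("{{\"name\":\"{}\",\"age\":{}}}", self.name, self.age)
    }
}

fn main() {
    let u = User { name: "ada".into(), age: 36 };
    println!("{}", u.to_json());
}
```

Multiply this by hundreds of types (and by every generic serializer they're instantiated with) and the compile-time bill becomes visible.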

For large codebases with hundreds of types, this multiplies quickly. The solution space people typically explore:

  1. Dynamic dispatch — Use dyn Trait to avoid monomorphization
  2. Smaller crates — Split up your codebase (doesn't help if dependencies pull in Serde anyway)
  3. Just wait — Hope future Rust versions improve compile times

fasterthanli.me tried option one first, with a project called merde — Serde-like traits that were dyn-compatible, enabling dynamic dispatch instead of monomorphizing everything.

The result? A "shittier version of erased_serde." The approach traded compile time for runtime performance — and lost on both fronts.
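The dyn-compatible idea can be sketched like this (illustrative trait names of my own, not merde's actual API): because methods take trait objects rather than generic parameters, one compiled copy of the code serves every serializer, at the cost of a vtable call on every operation.

```rust
// Sketch of a dyn-compatible (object-safe) serialization trait in the
// spirit of merde / erased_serde. No generic method parameters, so
// `dyn DynSerialize` works and nothing is monomorphized per serializer.
trait DynSerializer {
    fn write_u32(&mut self, name: &str, n: u32);
}

struct DebugSink(String);

impl DynSerializer for DebugSink {
    fn write_u32(&mut self, name: &str, n: u32) {
        self.0.push_str(&format!("{name}={n} "));
    }
}

struct Point {
    x: u32,
    y: u32,
}

trait DynSerialize {
    fn serialize(&self, ser: &mut dyn DynSerializer);
}

impl DynSerialize for Point {
    // Every call goes through the vtable: compiled once, dispatched at runtime.
    fn serialize(&self, ser: &mut dyn DynSerializer) {
        ser.write_u32("x", self.x);
        ser.write_u32("y", self.y);
    }
}

fn main() {
    let mut sink = DebugSink(String::new());
    let p: &dyn DynSerialize = &Point { x: 1, y: 2 };
    p.serialize(&mut sink);
    println!("{}", sink.0.trim());
}
```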

The Pivot: Reflection-Based Serialization

In March 2025, a new attempt emerged: facet. The core insight was elegant:

What if serialization was built on top of reflection, rather than reflection being built on top of serialization?

Instead of generating code that knows how to serialize each specific type, facet provides a Facet trait that gives you runtime information about your type's shape. Each type exposes a SHAPE associated constant describing its structure: the type's fields, their names, and how to access them.

With this shape information, you can serialize any type at runtime without generating specialized code at compile time.

// Serde: generates a specialized serialize() for each type
impl Serialize for MyStruct {
    fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
        // Generated code knows exactly what to do for MyStruct's fields
    }
}

// Facet: runtime shape inspection (illustrative, not the exact API)
let shape = <MyStruct as Facet>::SHAPE;
for field in shape.fields {
    let value = field.get(&my_struct)?;  // reflective access
    serializer.serialize_field(field.name, value)?;
}
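The shape-walking idea can be made runnable without facet itself. Below is a self-contained sketch in the same spirit (the `Shape` and `FieldDesc` types are my own simplifications, not facet's real ones): field metadata lives in a static table, getters are type-erased behind `Any`, and a single serializer function walks any shape as data.

```rust
use std::any::Any;

// Simplified stand-ins for facet-style shape metadata.
struct FieldDesc {
    name: &'static str,
    // Type-erased getter: downcasts at runtime instead of relying on codegen.
    get: fn(&dyn Any) -> String,
}

struct Shape {
    fields: &'static [FieldDesc],
}

struct Widget {
    id: u32,
    label: &'static str,
}

// In facet this would come from `<Widget as Facet>::SHAPE`; here it is
// written out by hand.
static WIDGET_SHAPE: Shape = Shape {
    fields: &[
        FieldDesc {
            name: "id",
            get: |v| v.downcast_ref::<Widget>().unwrap().id.to_string(),
        },
        FieldDesc {
            name: "label",
            get: |v| format!("{:?}", v.downcast_ref::<Widget>().unwrap().label),
        },
    ],
};

// One serializer walks shape data; no per-type generated code needed.
fn to_json(value: &dyn Any, shape: &Shape) -> String {
    let body: Vec<String> = shape
        .fields
        .iter()
        .map(|f| format!("\"{}\":{}", f.name, (f.get)(value)))
        .collect();
    format!("{{{}}}", body.join(","))
}

fn main() {
    let w = Widget { id: 7, label: "knob" };
    println!("{}", to_json(&w, &WIDGET_SHAPE));
}
```

The runtime downcasts and indirect calls are exactly where the performance goes: work the compiler used to do once, ahead of time, now happens on every field access.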

This approach dramatically reduces compile time — no code generation, just trait implementations. The tradeoff? Runtime performance drops significantly. Initial benchmarks showed facet-json was 5-7x slower than serde_json.

The Disappointment (and Refocus)

When facet was first announced, the reception was... honest. The compile-time gains were supposed to offset runtime costs, but measurements showed the runtime penalty was far larger than hoped.

The reflection approach wasn't paying off as expected. Part of the problem: switching to facet didn't actually remove Serde from your dependency tree. If any transitive dependency pulled in Serde (tracing, serde_json internally, etc.), you were paying for both.

Rather than abandon the project, fasterthanli.me made an interesting choice: focus on developer experience instead of fighting on performance.

The features added centered on ergonomics rather than raw speed: clearer error messages and diagnostics, and a much broader set of supported formats.

The format crate matrix expanded: JSON, YAML, TOML, Postcard, MsgPack, XML, KDL, CSV, HTML, ASN.1 — all sharing the same error handling and reflection infrastructure.

The Comeback: JIT Compilation

Here's where things get interesting. In late 2025, a company porting a large proprietary codebase to facet encountered issues — including build times. This feedback prompted a re-examination of the original goal.

The solution: just-in-time compilation.

Using cranelift, facet now supports tiered JIT compilation: instead of walking shape metadata on every call, hot serialization paths get specialized machine code generated at runtime.

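Cranelift specifics aside, the tier-up mechanism can be sketched without a real JIT (the struct, threshold, and function names below are illustrative, not facet's): start on a slow generic path, count invocations, and swap in a fast specialized function once a call site turns hot.

```rust
// Sketch of tiered execution: a slow generic path handles the first calls,
// then a "compiled" fast path is installed once a threshold is crossed.
// A real JIT (e.g. via cranelift) would emit machine code at runtime;
// here the fast tier is just a second function.
struct Tiered {
    calls: u32,
    threshold: u32,
    fast: Option<fn(u32) -> u32>,
}

fn slow_path(x: u32) -> u32 {
    // Stand-in for interpreting reflection metadata on every call.
    (0..x).sum::<u32>()
}

fn fast_path(x: u32) -> u32 {
    // Stand-in for specialized generated code: same result, closed form.
    x * x.saturating_sub(1) / 2
}

impl Tiered {
    fn call(&mut self, x: u32) -> u32 {
        self.calls += 1;
        if self.fast.is_none() && self.calls > self.threshold {
            // "Compile" the hot path and install it for future calls.
            self.fast = Some(fast_path);
        }
        match self.fast {
            Some(f) => f(x),
            None => slow_path(x),
        }
    }
}

fn main() {
    let mut t = Tiered { calls: 0, threshold: 3, fast: None };
    let results: Vec<u32> = (0..6).map(|_| t.call(10)).collect();
    // Every call agrees on the answer; only the execution tier changes.
    println!("{results:?}");
}
```

The invariant that matters is in the test: both tiers must produce identical results, or the optimization is a correctness bug.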
The results? With JIT enabled, facet-json now beats serde_json in benchmarks. Not just matches — beats.

The catch: you're now depending on cranelift at runtime, generated code is nearly impossible to debug, and there are potential undefined behavior concerns. For many applications, the reflection-only path is perfectly fine.

But for cases where performance matters, the JIT path exists.

What This Teaches Us

The facet journey contains lessons for anyone building Rust ecosystem tooling:

  1. First attempts fail — merde was a useful learning experience, not a waste
  2. Feedback shapes direction — the large codebase port revealed real pain points
  3. DX is a valid differentiator — when performance plateaus, nicer error messages win users
  4. JIT in Rust is viable — cranelift integration shows the runtime compilation path works
  5. Ecosystem lock-in is real — you can't easily remove Serde when everything depends on it

The broader pattern: Rust's compile-time guarantees create pressure that drives innovation. People want faster builds badly enough to rebuild fundamental infrastructure. That's a sign of a healthy, ambitious ecosystem.

The Future

fasterthanli.me isn't done. He's now working on dodeca, a new static site generator, and continues expanding the facet ecosystem. The question of whether reflection-based serialization goes mainstream remains open — but facet proves it's viable.

For the rest of us, the takeaway might be simpler: if something in your Rust workflow feels slow, you're probably not alone. The ecosystem rewards people who fix it.


Have Serde compile times driven you to madness? The facet project shows there's always another path, even if the first few attempts don't work out.