When you first learn Rust, you think of the compiler as your gatekeeper: the borrow checker watching your every move, ensuring you don't shoot yourself in the foot.

Then you learn async Rust, and you discover there's another layer underneath your code — a runtime that manages tasks, schedules them across threads, handles I/O without blocking, and coordinates wakers. It's not just a library you call. It's a small operating system running inside your program.

What Does an OS Actually Do?

Think about what your computer's OS provides:

  1. A scheduler that decides which process runs, and when
  2. An I/O layer that lets a process wait on the network or disk without spinning
  3. A way to wake a blocked process when the thing it's waiting on is ready

Tokio provides all of this. For async Rust tasks.

The difference is scale. Your OS schedules processes and threads across multiple CPU cores. Tokio schedules async tasks — lightweight green threads — across a thread pool.

The Scheduler

When you spawn a task with tokio::spawn(async { ... }), you're not creating an OS thread. You're creating a task that the runtime will poll when it's ready.

tokio::spawn(async {
    let response = reqwest::get("https://example.com").await;
    println!("Got: {:?}", response);
});

This task goes into a queue. Tokio's scheduler — which runs on a pool of worker threads — picks it up, runs it until it hits an .await that can't make progress yet, then sets it aside. When the I/O completes, the task goes back into the queue to be polled again.

This is exactly what an OS scheduler does with processes: run them until they block, then context-switch to something else. Except Tokio does it at the task level, within a single process, with far less overhead.
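A toy version of that loop fits in a few dozen lines of standard-library Rust. The sketch below is invented for illustration (the names YieldOnce and run_round_robin are not Tokio APIs): each task is polled until it yields, then pushed to the back of the queue. Unlike a real runtime, this toy re-polls pending tasks unconditionally instead of waiting for a waker.

```rust
use std::collections::VecDeque;
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// A future that yields control once before completing,
// mimicking a task hitting an `.await` point.
struct YieldOnce { yielded: bool }

impl Future for YieldOnce {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.yielded {
            Poll::Ready(())
        } else {
            self.yielded = true;
            cx.waker().wake_by_ref(); // signal "poll me again"
            Poll::Pending
        }
    }
}

// A no-op waker: this toy scheduler re-polls every pending task anyway.
struct NoopWake;
impl Wake for NoopWake {
    fn wake(self: Arc<Self>) {}
}

fn run_round_robin(tasks: Vec<Pin<Box<dyn Future<Output = ()>>>>) {
    let waker = Waker::from(Arc::new(NoopWake));
    let mut cx = Context::from_waker(&waker);
    let mut queue: VecDeque<_> = tasks.into_iter().collect();
    // Pop a task, poll it once; if it isn't done, push it to the back.
    while let Some(mut task) = queue.pop_front() {
        if task.as_mut().poll(&mut cx).is_pending() {
            queue.push_back(task);
        }
    }
}

fn main() {
    let t1: Pin<Box<dyn Future<Output = ()>>> = Box::pin(async {
        println!("task 1: part a");
        YieldOnce { yielded: false }.await;
        println!("task 1: part b");
    });
    let t2: Pin<Box<dyn Future<Output = ()>>> = Box::pin(async {
        println!("task 2: part a");
        YieldOnce { yielded: false }.await;
        println!("task 2: part b");
    });
    run_round_robin(vec![t1, t2]);
}
```

Running this prints the two "part a" lines before either "part b": each task runs until its first yield, then the other gets a turn — the interleaving an OS scheduler produces, in miniature.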

The Reactor

When your async task calls .await on a network request, something interesting happens. It doesn't block the thread. Instead, it registers interest in a socket with the OS's I/O multiplexer (epoll on Linux, kqueue on macOS, IOCP on Windows), then yields.

This is the reactor pattern:

  1. Task calls async operation → registers interest → yields
  2. Reactor (Tokio's runtime) waits on epoll/kqueue for any I/O to complete
  3. When data arrives, the reactor marks the task as "ready"
  4. The scheduler picks it up and resumes it
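Those four steps can be simulated with the standard library alone, using a sleeping background thread to stand in for epoll/kqueue. Everything here is invented for illustration — FakeIo, block_on, and the 50 ms delay are not real Tokio internals; a real reactor registers file descriptors with the OS instead of sleeping.

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Wake, Waker};
use std::thread;
use std::time::Duration;

// State shared between the future (scheduler side) and the
// background thread standing in for the reactor.
struct Shared {
    ready: bool,
    waker: Option<Waker>,
}

// A future that completes when the fake "reactor" signals readiness.
struct FakeIo {
    shared: Arc<Mutex<Shared>>,
}

impl Future for FakeIo {
    type Output = &'static str;
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<&'static str> {
        let mut s = self.shared.lock().unwrap();
        if s.ready {
            Poll::Ready("data arrived")
        } else {
            // Step 1: register interest — leave our waker for the reactor.
            s.waker = Some(cx.waker().clone());
            Poll::Pending
        }
    }
}

fn fake_io() -> FakeIo {
    let shared = Arc::new(Mutex::new(Shared { ready: false, waker: None }));
    let reactor_side = shared.clone();
    // Steps 2-3: the "reactor" waits for I/O (here: just a sleep),
    // then marks the task ready and calls wake().
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(50));
        let mut s = reactor_side.lock().unwrap();
        s.ready = true;
        if let Some(w) = s.waker.take() {
            w.wake();
        }
    });
    FakeIo { shared }
}

// Step 4: a minimal executor that parks its thread until woken.
struct Unpark(thread::Thread);
impl Wake for Unpark {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = Waker::from(Arc::new(Unpark(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => thread::park(), // sleep until the waker unparks us
        }
    }
}

fn main() {
    let out = block_on(fake_io());
    println!("{out}"); // prints "data arrived"
}
```

Note that the executor thread spends the 50 ms parked, not spinning — the same reason a reactor lets one thread service thousands of connections.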

This is why async Rust is fast. You're not paying for a thread per connection. You're paying for a small pool of threads that multiplexes thousands of tasks.

The Waker

Here's the magic bit: how does Tokio know when to wake up a task?

When you .await a future, you're driving a value that implements the Future trait:

pub trait Future {
    type Output;
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
}

The Context contains a Waker. Your future stores a clone of it when it returns Poll::Pending; when the I/O is ready, the reactor calls wake() on that stored Waker, which tells the scheduler "this task is ready to run again."

This is the link between the reactor (which watches I/O) and the scheduler (which runs tasks). The waker is the bridge.

It's also why async Rust feels different from threads. With threads, the OS handles the context switch. With async, your code explicitly decides when it's done — by returning Poll::Ready — and the waker carries that decision back to the runtime.
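You can watch that decision happen by polling a hand-written future yourself. Countdown and Noop below are made-up names for illustration; a real runtime would park between polls rather than polling in a tight sequence, but the contract — return Poll::Pending until you're done, then Poll::Ready — is exactly the same.

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// A future that must be polled three times: it "decides it's done"
// by returning Poll::Ready only on the final poll.
struct Countdown(u32);

impl Future for Countdown {
    type Output = &'static str;
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<&'static str> {
        if self.0 == 0 {
            Poll::Ready("done")
        } else {
            self.0 -= 1;
            cx.waker().wake_by_ref(); // ask to be polled again
            Poll::Pending
        }
    }
}

// A waker that does nothing — fine here, since we poll by hand.
struct Noop;
impl Wake for Noop {
    fn wake(self: Arc<Self>) {}
}

fn main() {
    let waker = Waker::from(Arc::new(Noop));
    let mut cx = Context::from_waker(&waker);
    let mut fut = Box::pin(Countdown(2));
    // Poll by hand, as a runtime would: Pending, Pending, then Ready.
    assert!(fut.as_mut().poll(&mut cx).is_pending());
    assert!(fut.as_mut().poll(&mut cx).is_pending());
    assert_eq!(fut.as_mut().poll(&mut cx), Poll::Ready("done"));
    println!("ok");
}
```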

Why This Matters

Understanding Tokio as a mini OS changes how you think about async Rust.

Your code runs on top of this invisible machine. Every .await is a context switch. Every spawned task is a new process, conceptually.

And that, I think, is what makes async Rust both powerful and tricky. You're not just writing code — you're writing code that lives inside a runtime that manages your concurrency. The runtime is doing the scheduling. You're just deciding when to yield.


Next in the async series: how to choose between Tokio, smol, and async-std — and when you might not need a runtime at all.