Most languages make concurrency painful. You worry about deadlocks, race conditions, corrupted state. You add locks, then deadlock. You remove locks, then race. It's a nightmare.

Rust doesn't eliminate every concurrency headache (network calls can still hang, loops can still run forever), but it does eliminate data races. If your code compiles, two threads can't simultaneously modify the same data without synchronization.

That's a massive guarantee. Let's see how Rust delivers it.

Threads

Rust gives you threads directly:

use std::thread;
use std::time::Duration;

fn main() {
    let handle = thread::spawn(|| {
        for i in 1..10 {
            println!("Spawned thread: {}", i);
            thread::sleep(Duration::from_millis(100));
        }
    });
    
    for i in 1..5 {
        println!("Main thread: {}", i);
        thread::sleep(Duration::from_millis(100));
    }
    
    handle.join().unwrap();  // Wait for thread to finish
}

thread::spawn takes a closure: that's the code your new thread runs. join() blocks until the thread finishes and returns whatever the closure returned.
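Since join() hands back the closure's return value, a spawned thread can compute a result for you. A minimal sketch, using only the standard library:

```rust
use std::thread;

fn main() {
    // The closure's return value travels back through join().
    let handle = thread::spawn(|| (1..=100).sum::<u32>());

    // join() yields Ok(value) if the thread didn't panic.
    let total = handle.join().unwrap();
    assert_eq!(total, 5050);
    println!("Sum computed on spawned thread: {}", total);
}
```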

But here's the critical part: the closure captures variables from the parent scope.

fn main() {
    let data = vec![1, 2, 3];
    
    let handle = thread::spawn(move || {
        println!("Got: {:?}", data);
    });
    
    // println!("{:?}", data);  // ERROR: data moved into thread
    
    handle.join().unwrap();
}

The move keyword transfers ownership into the new thread. Without it, the closure would only borrow data, and the compiler rejects that: the spawned thread might outlive main's stack frame, leaving a dangling reference. With move, the thread owns the data outright. Clean and explicit.
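When you do want to borrow instead of move, scoped threads (std::thread::scope, stable since Rust 1.63) are guaranteed to finish before the scope returns, so the compiler allows the borrow. A minimal sketch:

```rust
use std::thread;

fn main() {
    let data = vec![1, 2, 3];

    // Scoped threads are joined before scope() returns, so they
    // may borrow from the enclosing stack frame without `move`.
    thread::scope(|s| {
        s.spawn(|| {
            println!("Borrowed in thread: {:?}", data);
        });
    }); // all scoped threads have finished here

    // data is still usable afterwards.
    println!("Still here: {:?}", data);
}
```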

Message Passing

The safest concurrency model is don't share memory; share messages. Rust implements this with channels:

use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();
    
    thread::spawn(move || {
        let msg = "Hello from thread!";
        tx.send(msg).unwrap();
    });
    
    let received = rx.recv().unwrap();
    println!("Got: {}", received);
}

mpsc = multiple producers, single consumer. You can have multiple threads sending, one thread receiving.
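A detail worth knowing: the receiver also works as an iterator, yielding values until every sender has been dropped. A small sketch using only std:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    thread::spawn(move || {
        for i in 1..=3 {
            tx.send(i).unwrap();
        }
        // tx is dropped when the thread ends, closing the channel.
    });

    // rx.iter() blocks for each value and stops once all senders
    // are gone, so this loop terminates cleanly.
    let received: Vec<i32> = rx.iter().collect();
    assert_eq!(received, vec![1, 2, 3]);
    println!("Got: {:?}", received);
}
```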

Here's where it gets interesting: multiple producers feeding a single receiver:

use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();
    
    // Multiple producers
    let tx1 = tx.clone();
    thread::spawn(move || {
        tx1.send("From thread 1").unwrap();
    });
    
    let tx2 = tx.clone();
    thread::spawn(move || {
        tx2.send("From thread 2").unwrap();
    });
    
    // Drop original tx — we're done sending
    drop(tx);
    
    // Receive all messages
    for msg in rx {
        println!("Got: {}", msg);
    }
}

Ownership moves through the channel. The sending thread gives up ownership; the receiving thread gains it. No simultaneous access, no data races.
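That ownership transfer is enforced by the compiler. A minimal sketch: once a value is sent, the sending thread can no longer touch it.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    thread::spawn(move || {
        let s = String::from("owned data");
        tx.send(s).unwrap();
        // println!("{}", s);  // ERROR: `s` was moved into the channel
    });

    // The receiver is now the sole owner of the String.
    let s = rx.recv().unwrap();
    assert_eq!(s, "owned data");
    println!("Received: {}", s);
}
```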

Shared State

Sometimes you really do need shared memory. Rust gives you Arc and Mutex:

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = vec![];
    
    for _ in 0..10 {
        let counter = Arc::clone(&counter);
        let handle = thread::spawn(move || {
            let mut num = counter.lock().unwrap();
            *num += 1;
        });
        handles.push(handle);
    }
    
    for handle in handles {
        handle.join().unwrap();
    }
    
    println!("Result: {}", *counter.lock().unwrap());
}

Two pieces work together here:

- Arc<T> ("atomically reference counted") gives multiple threads shared ownership of one value; each Arc::clone bumps an atomic reference count.
- Mutex<T> ("mutual exclusion") guarantees only one thread can touch the inner value at a time; lock() blocks until the lock is free and returns a guard.

The key insight: the lock is part of the type system. You can't access the data without acquiring the lock. And you can't forget to release it: the guard unlocks automatically when it goes out of scope.
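Because unlocking is tied to the guard's lifetime, you control when the lock is released by controlling when the guard is dropped. A small sketch of both styles, a scope and an explicit drop():

```rust
use std::sync::Mutex;

fn main() {
    let shared = Mutex::new(vec![1, 2, 3]);

    {
        // The guard holds the lock for as long as it lives.
        let mut guard = shared.lock().unwrap();
        guard.push(4);
    } // guard dropped here: lock released automatically

    // Or release explicitly, before the end of scope:
    let guard = shared.lock().unwrap();
    let len = guard.len();
    drop(guard); // unlock now, instead of at the closing brace

    assert_eq!(len, 4);
    println!("Length after push: {}", len);
}
```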

Send and Sync

You might see these traits in error messages:

- Send: ownership of the type can be transferred to another thread.
- Sync: the type can be safely shared between threads by reference (T is Sync when &T is Send).

Most types are both. Primitive types (i32, bool, &str) are Send + Sync. Types containing Rc or RefCell are not, because they aren't thread-safe.

You rarely implement these manually. But understanding them explains why some types can't cross thread boundaries:

use std::rc::Rc;

fn main() {
    let data = Rc::new(vec![1, 2, 3]);
    // thread::spawn(move || { ... });  // ERROR: Rc is not Send
}

Rc uses reference counting that's not atomic — two threads could increment simultaneously and corrupt the count. Arc uses atomic operations, so it's safe.
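To make the contrast concrete, here is a minimal sketch of the Arc counterpart: clones cross thread boundaries freely because the count is updated atomically.

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // Arc's reference count uses atomic operations, so Arc is Send + Sync
    // (for Send + Sync contents) and its clones may cross thread boundaries.
    let data = Arc::new(vec![1, 2, 3]);

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let data = Arc::clone(&data);
            thread::spawn(move || data.iter().sum::<i32>())
        })
        .collect();

    for handle in handles {
        assert_eq!(handle.join().unwrap(), 6);
    }
    println!("All threads saw the same data");
}
```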

The Big Picture

Rust's concurrency story is about compile-time guarantees:

| Problem | Traditional Languages | Rust |
|---------|-----------------------|------|
| Data races | Runtime debugging | Compile error |
| Dangling threads | Leak | Ownership ensures cleanup |
| Deadlocks | Testing | Still possible, but explicit |
| Race conditions | Heisenbugs | Still possible (logic) |

You can still write buggy concurrent code — deadlock is real, infinite loops are real, logic races are real. But data races are impossible. The compiler won't let you compile code where two threads write to the same location without synchronization.

That's huge. That's what "fearless concurrency" means.


Next up: Chapter 10 — Traits. The way to define shared behavior across types.