Fundamentals of Asynchronous Programming in Rust

January 3, 2026

Part 1: Understanding the Problem Async Solves

Before we learn how to use async/await, we need to deeply understand why it exists. What problem does it solve? When would you actually need it?

The Fundamental Problem: Waiting

Programs often need to wait for things:

  • a network response to arrive
  • a file to be read from disk
  • a database query to finish
  • a timer to expire

Here's the crucial insight: waiting is not the same as working. When your program waits for a network response, your CPU isn't doing calculations; it's literally doing nothing, just checking "is the data here yet? no? okay, check again..."

A Real-World Analogy: The Restaurant Kitchen

Imagine you're a chef in a restaurant kitchen. You have multiple orders to prepare:

  • a steak that needs 10 minutes on the grill
  • a salad that takes 2 minutes to assemble
  • a soup that needs 5 minutes to heat

The inefficient way (synchronous/blocking):

  1. Start cooking the steak
  2. Stand there watching the steak for 10 minutes
  3. Serve the steak
  4. Start making the salad
  5. Spend 2 minutes on it
  6. Serve the salad
  7. Start heating the soup
  8. Wait 5 minutes
  9. Serve the soup

Total time: 17 minutes. But here's the problem: while you were standing there watching the steak cook, you could have been making the salad! You spent most of your time waiting, not working.

The efficient way (asynchronous):

  1. Put the steak on the grill (it will cook on its own)
  2. Put the soup on to heat (it will heat on its own)
  3. While those are cooking, make the salad
  4. Serve the salad (done in 2 minutes!)
  5. Check the soup: it's ready! Serve it (done at 5 minutes)
  6. Check the steak: it's ready! Serve it (done at 10 minutes)

Total time: 10 minutes. Same amount of actual work, but much less waiting.

How This Applies to Programming

In programming terms:

  • the chef is your thread
  • each dish is a task: a network request, a file read, a database query
  • cooking time is I/O waiting, when the CPU has nothing to do for that task
  • "standing there watching the steak" is blocking; "starting the steak and making the salad" is async

Why Not Just Use Threads?

You might think: "I already learned about threads in Chapter 16! Can't I just spawn a thread for each task?"

Yes, you can! But threads have costs:

  1. Memory overhead: Each thread needs its own stack (typically 1-8 MB of memory)
  2. CPU overhead: Switching between threads takes time
  3. Complexity: Coordinating threads requires locks, channels, careful thinking about data races

For a web server handling 10,000 simultaneous connections:

  • 10,000 threads at 1-8 MB of stack each means tens of gigabytes of memory before any real work happens
  • the OS burns significant CPU just switching among them
  • 10,000 async tasks, in contrast, can share a handful of threads, with each task typically costing only hundreds of bytes to a few kilobytes

Async is about efficiency: doing more with fewer resources.

When to Use Async vs Threads

This is a critical decision point that confuses many people. Here's how to think about it:

Use threads when:

  • the work is CPU-bound: real computation, not waiting
  • you have a small, fixed number of long-running jobs
  • you need true parallelism across cores with minimal ceremony

Use async when:

  • the work is I/O-bound: network, disk, timers
  • you need many concurrent operations (hundreds or thousands)
  • most of each task's lifetime is spent waiting

The key question: Is my task mostly computing or mostly waiting?

  Task Type   Example                                   Best Approach
  CPU-bound   Processing images, running simulations    Threads
  I/O-bound   Web server handling requests              Async
  Mixed       Download files then process them          Async for downloads, threads for processing

Part 2: What is a Future?

Before we can understand async and await, we need to understand the concept of a Future.

The Concept: A Promise of a Value

A Future represents a value that doesn't exist yet, but will exist eventually.

Think of it like ordering a package online:

  • placing the order gives you a tracking number right away (the Future)
  • the package itself (the value) arrives later
  • in the meantime you can check its status or go do other things

In Rust terms, a Future is something that will eventually produce a value, but might not have that value right now.

The Future Trait: A Peek Under the Hood

When we talk about "futures," we're talking about types that implement the Future trait:

pub trait Future {
    type Output;
    
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
}

pub enum Poll<T> {
    Ready(T),    // Done! Here's the value
    Pending,     // Not ready yet, check back later
}

How It Works

When you .await a future, the runtime repeatedly calls poll:

  • if poll returns Poll::Ready(value), the await expression evaluates to value and execution continues
  • if poll returns Poll::Pending, the task is suspended and the runtime moves on to other work

The runtime doesn't busy-loop — when a future returns Pending, it registers a "waker" that notifies the runtime when to try again.

What async/await Compiles Into

When you write an async fn, Rust transforms it into a state machine struct that implements Future. Each .await becomes a state where the machine might pause (Pending) or continue (Ready).

You almost never implement Future manually; the compiler generates all this when you use async/await. But knowing this exists helps you understand why futures are lazy (the struct exists but poll hasn't been called) and why .await points are where task-switching can happen.
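
To make this concrete anyway, here's about the smallest hand-written future there is. It's a teaching sketch (the name YieldOnce and its details are illustrative, not from trpl): it returns Pending on its first poll, asks the waker to schedule another poll, and completes on the second. This is essentially how a yield_now-style future (Part 12) works:

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

struct YieldOnce {
    yielded: bool,
}

impl Future for YieldOnce {
    type Output = ();

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.yielded {
            Poll::Ready(())
        } else {
            self.yielded = true;
            // Tell the runtime we want to be polled again right away.
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}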

Futures Are Lazy

This is extremely important to understand: in Rust, Futures are lazy. They don't do anything until you explicitly ask them to make progress.

This is different from some other languages! In JavaScript, when you call an async function, it immediately starts running in the background. In Rust, when you call an async function, you just get back a Future that hasn't done anything yet.

Think of it like this: calling an async function in Rust doesn't cook the meal, it hands you the recipe. Nothing happens until a runtime actually follows that recipe.

This might seem like extra work, but it gives you more control. You can create futures, store them, combine them, and decide exactly when and how they run.


Part 3: The async and await Keywords

Now let's see how Rust lets you write asynchronous code.

Defining an Async Function

You mark a function as async by putting async before fn:

async fn fetch_data() -> String {
    // ... do something that might take a while ...
    String::from("Here's your data!")
}

What Does async Actually Do?

When you write async fn, Rust transforms your function. It doesn't return String directly; it returns something that will produce a String eventually.

Conceptually, this:

async fn fetch_data() -> String {
    String::from("data")
}

Becomes something like this (simplified):

fn fetch_data() -> impl Future<Output = String> {
    // Returns a Future that, when completed, gives you a String
}

The return type impl Future<Output = String> means "some type that implements the Future trait and will eventually produce a String."
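
To make the equivalence concrete, here are both spellings side by side; callers can't tell them apart, and both are awaited the same way (a sketch; the function names are just for illustration):

use std::future::Future;

// The sugared form:
async fn fetch_data_sugar() -> String {
    String::from("data")
}

// The desugared form: a plain fn returning a future built from an async block.
fn fetch_data_desugared() -> impl Future<Output = String> {
    async { String::from("data") }
}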

The await Keyword

await is how you say "I need the value from this Future, and I'm willing to wait for it."

async fn do_something() {
    let data = fetch_data().await;
    println!("Got: {}", data);
}

Notice that Rust's await keyword goes after the expression you're awaiting, not before it. That is, it's a postfix keyword. This might differ from what you're used to if you've used async in other languages, but in Rust it makes chains of methods much nicer to work with:

// You can chain methods nicely
let text = fetch_url(url).await.text().await;

What .await Actually Does

The .await does several things:

  1. Starts executing the Future (remember, Futures are lazy!)
  2. If the Future isn't ready, yields control so other work can happen
  3. When the Future completes, gives you the actual value

The Critical Insight: await Is a Yield Point

When you write .await, you're saying: "If this isn't ready, let someone else use the CPU while I wait."

This is what makes async efficient! Instead of blocking the entire thread waiting for data, you're cooperatively sharing the thread with other tasks.

Think back to our chef analogy: every .await is the moment the chef puts a dish on the heat and turns to another order, instead of standing there watching it cook.


Part 4: The Async Runtime: The Missing Piece

Here's something that surprises many Rust newcomers: Rust doesn't include an async runtime in the standard library.

What Is a Runtime?

Remember when I said Futures are lazy? Something needs to actually drive them to completion. That something is called a runtime or executor.

The runtime's job is to:

  1. Keep track of all your Futures
  2. Check which ones can make progress
  3. Run the ones that are ready
  4. Efficiently wait when nothing can progress

Why Doesn't Rust Include One?

This is a deliberate design decision. Different use cases need different runtimes:

  • high-throughput network services (Tokio is the de facto standard here)
  • embedded systems without an OS or heap (e.g., Embassy)
  • small or single-threaded programs that want something lightweight (e.g., smol)

By not picking one, Rust lets you choose the best tool for your specific situation.

The trpl Crate

The Rust Book uses a teaching crate called trpl (short for "The Rust Programming Language") that wraps the popular Tokio runtime and the futures crate to keep things simpler for learning.

To use it, you'd add it to your Cargo.toml:

[dependencies]
trpl = { path = "../trpl" }

Understanding What the Runtime Does

Without a runtime, this code does nothing:

async fn greet() -> String {
    String::from("Hello!")
}

fn main() {
    let future = greet();  // Creates a Future, but doesn't run it!
    // The future just sits here, never executed
    // We never get "Hello!"
}

The Future is created but never driven to completion. It's like having a recipe but never cooking it.

block_on: Bridging Sync and Async

block_on is the bridge between synchronous and asynchronous worlds:

fn main() {
    trpl::block_on(async {
        // Inside here, we can use .await
        let greeting = greet().await;
        println!("{}", greeting);  // Prints "Hello!"
    });
}

Here's what block_on does:

  1. It takes an async block (a Future)
  2. It sets up a runtime internally
  3. It blocks the current thread until that Future completes
  4. It returns whatever the Future produces

Think of it like this: your main function is synchronous (regular Rust). But you want to run async code. block_on says "okay, I'll sit here and drive this async code to completion, blocking until it's done."
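
If you're curious what such a bridge looks like inside, here is a minimal block_on built from the standard library alone: it polls the future in a loop and parks the thread until the waker unparks it. (A simplified sketch, not trpl's actual implementation.)

use std::future::Future;
use std::pin::pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// A waker that unparks the blocked thread.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

fn minimal_block_on<F: Future>(future: F) -> F::Output {
    let mut future = pin!(future);
    let waker: Waker = Arc::new(ThreadWaker(thread::current())).into();
    let mut cx = Context::from_waker(&waker);
    loop {
        match future.as_mut().poll(&mut cx) {
            Poll::Ready(value) => return value,
            // Sleep until wake() unparks us, then poll again.
            Poll::Pending => thread::park(),
        }
    }
}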

Why Can't main Be Async?

You might wonder: why not just make main async?

// This WON'T compile!
async fn main() {
    fetch_data().await;
}

The compiler will tell you: main function is not allowed to be async.

The reason is philosophical: something has to run the async code. Async code needs a runtime to drive it. But if main itself were async, what would drive main? You'd have a chicken-and-egg problem.

So main stays synchronous, and we use block_on to enter the async world.


Part 5: How Async Code Actually Executes: The Sequential Truth

This is a crucial concept that trips up many people: code within a single async block executes sequentially between await points.

The Misconception

People sometimes think that once you're in async land, everything magically runs concurrently. This is wrong!

trpl::block_on(async {
    println!("Step 1");
    do_thing_a().await;
    println!("Step 2");
    do_thing_b().await;
    println!("Step 3");
    do_thing_c().await;
    println!("Step 4");
});

This code runs exactly in order: Step 1, then A completes, then Step 2, then B completes, then Step 3, then C completes, then Step 4. There's no concurrency here at all!

Sequential Execution: One After Another

When you .await one thing after another, they happen in sequence:

async fn sequential_example() {
    println!("Starting first operation...");
    let result1 = slow_operation_one().await;   // Wait for this to finish
    
    println!("Starting second operation...");
    let result2 = slow_operation_two().await;   // Then wait for this
    
    println!("Starting third operation...");
    let result3 = slow_operation_three().await; // Then wait for this
    
    println!("All done!");
}

If each operation takes 1 second, this takes 3 seconds total. Each .await waits for completion before moving to the next line.
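
You can verify the timing yourself with std::time::Instant (a small sketch, using trpl::sleep as the stand-in for a slow operation):

use std::time::{Duration, Instant};

fn main() {
    trpl::block_on(async {
        let start = Instant::now();
        trpl::sleep(Duration::from_secs(1)).await; // operation one
        trpl::sleep(Duration::from_secs(1)).await; // operation two
        trpl::sleep(Duration::from_secs(1)).await; // operation three
        // Prints roughly 3s: each await finished before the next began.
        println!("elapsed: {:?}", start.elapsed());
    });
}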

When Sequential Is What You Want

Sequential async is appropriate when:

  • each step needs the result of the previous one
  • the order of operations matters
  • there's nothing else useful to do in between

Example: Logging into a service, then fetching your profile:

async fn get_user_profile(username: &str, password: &str) -> Profile {
    // Must log in first to get a token
    let token = login(username, password).await;
    
    // Can only fetch profile after we have the token
    let profile = fetch_profile(&token).await;
    
    profile
}

You must wait for login to complete before you can fetch the profile. Sequential makes sense here.

The Problem: Unnecessary Waiting

But what if operations don't depend on each other?

async fn fetch_dashboard_data() {
    // These don't depend on each other!
    let weather = fetch_weather().await;        // Takes 500ms
    let news = fetch_news().await;              // Takes 300ms  
    let stock_prices = fetch_stocks().await;    // Takes 400ms
    
    display_dashboard(weather, news, stock_prices);
}

Total time: 1200ms (500 + 300 + 400)

But wait: why are we fetching news only after weather arrives? These are independent! We're wasting time.

Where Does Concurrency Come From Then?

Concurrency only happens when you have multiple futures being driven at the same time. A single chain of awaits is just sequential code that can pause.

The Key Insight

Within an async block: sequential execution, just like regular code.

Between async blocks/futures being joined or selected: concurrent execution.

Await points are where the runtime can switch to other work, but only if there is other work to switch to!


Part 6: Concurrent Execution: Doing Things "At the Same Time"

When operations don't depend on each other, we can run them concurrently: starting them all and waiting for all to complete.

The Concept: Concurrent vs Parallel

These terms are often confused, so let's be precise:

  • Concurrency means making progress on multiple tasks over the same period by interleaving them; a single thread can be concurrent
  • Parallelism means literally executing multiple tasks at the same instant, which requires multiple CPU cores

An analogy: one chef juggling several dishes is concurrency; hiring a second chef so two dishes cook at literally the same moment is parallelism.

Async is primarily about concurrency. You might have only one thread, but it can juggle many tasks by switching between them at await points.

join: Waiting for Multiple Futures Together

The trpl::join function runs multiple Futures concurrently and waits for all of them to complete:

use std::time::Duration; // (later examples elide common imports like this)

fn main() {
    trpl::block_on(async {
        let fut1 = async {
            trpl::sleep(Duration::from_millis(500)).await;
            "first"
        };
        
        let fut2 = async {
            trpl::sleep(Duration::from_millis(300)).await;
            "second"
        };
        
        // Run both concurrently, wait for both to complete
        let (result1, result2) = trpl::join(fut1, fut2).await;
        
        println!("{}, {}", result1, result2);
    });
}

Here's what happens:

  1. Both futures start "at the same time"
  2. While fut1 is sleeping, fut2 can make progress
  3. While both are sleeping, the runtime can do other work
  4. When BOTH complete, we continue

Total time: ~500ms (the slowest one), not 800ms!

How join Actually Works

join doesn't use multiple threads. Here's what really happens:

  1. Both Futures are created
  2. The runtime polls both of them
  3. When one is waiting (e.g., sleeping), the runtime checks the other
  4. The runtime efficiently waits when both are blocked
  5. As each completes, its result is stored
  6. When all are complete, join returns all results together

It's like our efficient chef: start the steak, start the soup, then check back on both.
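
If you're curious, here's roughly what a join combinator looks like inside. This is a simplified teaching sketch, not trpl's real implementation (it requires Unpin everywhere to sidestep pinning details): each time the task is woken, it polls whichever futures aren't finished yet, and completes only when both outputs are in hand.

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

struct Join<A: Future, B: Future> {
    a: A,
    b: B,
    a_out: Option<A::Output>,
    b_out: Option<B::Output>,
}

impl<A, B> Future for Join<A, B>
where
    A: Future + Unpin,
    B: Future + Unpin,
    A::Output: Unpin,
    B::Output: Unpin,
{
    type Output = (A::Output, B::Output);

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        let this = self.get_mut();
        // Poll each unfinished future; stash its output when it completes.
        if this.a_out.is_none() {
            if let Poll::Ready(v) = Pin::new(&mut this.a).poll(cx) {
                this.a_out = Some(v);
            }
        }
        if this.b_out.is_none() {
            if let Poll::Ready(v) = Pin::new(&mut this.b).poll(cx) {
                this.b_out = Some(v);
            }
        }
        // Ready only when BOTH results have arrived.
        if this.a_out.is_some() && this.b_out.is_some() {
            Poll::Ready((this.a_out.take().unwrap(), this.b_out.take().unwrap()))
        } else {
            Poll::Pending
        }
    }
}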

Understanding the Return Type of join

join returns a tuple with all the results in order:

async fn example() {
    // If these return String and i32 respectively:
    let (a, b) = trpl::join(
        returns_string(),   // a: String
        returns_i32(),      // b: i32
    ).await;
}

The join! Macro for More Futures

When you have more than two futures, use the join! macro:

fn main() {
    trpl::block_on(async {
        let (weather, news, stocks) = trpl::join!(
            fetch_weather(),
            fetch_news(),
            fetch_stocks()
        );
        
        display_dashboard(weather, news, stocks);
    });
}

When to Use join

Use join when:

  • you need all the results before continuing
  • the operations are independent of one another
  • you want total time to be the slowest operation, not the sum

Examples:

  • fetching several independent API endpoints for one page
  • reading multiple files at once
  • sending notifications to many recipients and waiting for all of them

Fairness in join

The trpl::join function is fair: it checks each future equally often, alternating between them, and never lets one race ahead if the other is ready.

This means futures make progress in a predictable, interleaved fashion. But this isn't guaranteed by all runtimes or all joining strategies! Some might let one future "race ahead." The exact behavior depends on the runtime implementation.


Part 7: A Common Mistake: The Message Channel Example

Let's see how sequential vs concurrent execution plays out with a realistic example. Imagine sending messages with delays:

fn main() {
    trpl::block_on(async {
        let (tx, mut rx) = trpl::channel();
        
        // Send messages with delays
        let messages = vec!["hi", "from", "the", "future"];
        for msg in messages {
            tx.send(msg).unwrap();
            trpl::sleep(Duration::from_millis(500)).await;
        }
        
        // Receive messages
        while let Some(message) = rx.recv().await {
            println!("Got: {}", message);
        }
    });
}

Question: When do the messages get printed?

Answer: They DON'T get printed at 500ms intervals! They all get printed at once, 2 seconds after the program starts.

Why? Because this is ONE async block. The code runs sequentially:

  1. Send "hi", sleep 500ms
  2. Send "from", sleep 500ms
  3. Send "the", sleep 500ms
  4. Send "future", sleep 500ms
  5. NOW the while loop starts
  6. Print all four messages immediately (they're already in the channel)

The sleeps all happened, then the receives all happened. No interleaving!

The Fix: Separate Async Blocks

To get true concurrency, you need separate futures:

fn main() {
    trpl::block_on(async {
        let (tx, mut rx) = trpl::channel();
        
        // Sending future
        let send_future = async {
            let messages = vec!["hi", "from", "the", "future"];
            for msg in messages {
                tx.send(msg).unwrap();
                trpl::sleep(Duration::from_millis(500)).await;
            }
        };
        
        // Receiving future
        let receive_future = async {
            while let Some(message) = rx.recv().await {
                println!("Got: {}", message);
            }
        };
        
        // Run both concurrently!
        trpl::join(send_future, receive_future).await;
    });
}

Now the sender and receiver run concurrently. Messages print at 500ms intervals because the receive loop can run while the send loop is sleeping. (One wrinkle remains: this version never exits, because tx is still alive in the outer scope, so the channel never closes and rx.recv() keeps waiting. Part 8 fixes exactly that.)


Part 8: async move: Taking Ownership

Just like closures, async blocks can either borrow or take ownership of variables from their environment.

The Problem: Borrowing Doesn't Always Work

fn main() {
    trpl::block_on(async {
        let (tx, mut rx) = trpl::channel();
        
        let send_future = async {
            tx.send("hello").unwrap();
            // tx is BORROWED here
        };
        
        let receive_future = async {
            while let Some(msg) = rx.recv().await {
                println!("{}", msg);
            }
        };
        
        trpl::join(send_future, receive_future).await;
        // Problem: receive_future never ends because tx is still alive!
    });
}

This program hangs forever. Why?

  1. The channel receiver (rx.recv()) returns None only when the channel is closed
  2. The channel closes when all senders are dropped
  3. tx is borrowed by send_future, but the borrow ends when send_future completes
  4. But tx itself isn't dropped; it's still owned by the outer scope!
  5. So the channel never closes, so rx.recv() keeps waiting forever

The Solution: async move

fn main() {
    trpl::block_on(async {
        let (tx, mut rx) = trpl::channel();
        
        let send_future = async move {  // <-- move!
            tx.send("hello").unwrap();
            // tx is MOVED into this block and dropped when it ends
        };
        
        let receive_future = async {
            while let Some(msg) = rx.recv().await {
                println!("{}", msg);
            }
        };
        
        trpl::join(send_future, receive_future).await;
    });
}

Now tx is moved into the async block. When send_future completes, tx is dropped, the channel closes, rx.recv() returns None, and the program exits cleanly.

When to Use async move

Use async move when:

  • the async block must own its captured values, such as a channel sender that has to be dropped when the block finishes
  • you're spawning a task that may outlive the scope that created it
  • the borrow checker complains that a captured reference doesn't live long enough

This is exactly analogous to move closures from Chapter 13, just applied to async blocks.
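
A minimal side-by-side sketch of that parallel:

fn main() {
    let a = String::from("captured by closure");
    let closure = move || println!("{a}"); // `a` is moved into the closure
    closure();

    let b = String::from("captured by future");
    let future = async move { println!("{b}") }; // `b` is moved into the future
    trpl::block_on(future); // futures are lazy: drive it so the print runs
}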


Part 9: Spawning Tasks: True Independence

So far, we've been running futures within a single task. But sometimes you want to spawn a completely independent task that runs on its own.

spawn_task: Fire and Forget (Almost)

fn main() {
    trpl::block_on(async {
        trpl::spawn_task(async {
            for i in 1..10 {
                println!("Spawned task: {}", i);
                trpl::sleep(Duration::from_millis(500)).await;
            }
        });
        
        for i in 1..5 {
            println!("Main task: {}", i);
            trpl::sleep(Duration::from_millis(500)).await;
        }
    });
}

spawn_task creates a new task that runs independently. It's similar to spawning a thread, but it's managed by the async runtime instead of the OS.

The Catch: Tasks Get Cancelled

There's an important behavior: when block_on finishes (when its async block completes), any spawned tasks are cancelled!

In the example above, the main task counts to 4, then exits. The spawned task only gets to run until that point, even though it was trying to count to 9.

Join Handles: Waiting for Spawned Tasks

If you need to wait for a spawned task, use its join handle:

fn main() {
    trpl::block_on(async {
        let handle = trpl::spawn_task(async {
            for i in 1..10 {
                println!("Spawned task: {}", i);
                trpl::sleep(Duration::from_millis(500)).await;
            }
            "spawned task done"
        });
        
        for i in 1..5 {
            println!("Main task: {}", i);
            trpl::sleep(Duration::from_millis(500)).await;
        }
        
        // Wait for the spawned task to complete
        let result = handle.await.unwrap();
        println!("{}", result);
    });
}

Now the main task waits for the spawned task before exiting, so all 9 iterations complete.

spawn_task vs join: When to Use Which

  Approach                    Use When
  join(a, b)                  You need both results and want to wait for both
  spawn_task                  You want the task to be independent, possibly outliving the current scope
  spawn_task + handle.await   You want independence but still need to wait eventually

Part 10: Racing Futures: First One Wins

Sometimes you don't need all results; you just need one result, whichever comes first.

The Concept: Racing

Imagine you need to get some data, and you have two sources:

  • a primary server that's usually fast but occasionally slow or unavailable
  • a backup server that's reliable but slower

You could try the fast one, wait, and if it fails, try the slow one. But that wastes time! Better approach: try both simultaneously, use whichever responds first.

select: First Future to Complete Wins

fn main() {
    trpl::block_on(async {
        let slow = async {
            trpl::sleep(Duration::from_secs(5)).await;
            "slow finished"
        };
        
        let fast = async {
            trpl::sleep(Duration::from_secs(1)).await;
            "fast finished"
        };
        
        let result = trpl::select(slow, fast).await;
        
        // result tells us which one finished first
        match result {
            trpl::Either::Left(value) => println!("Slow won: {}", value),
            trpl::Either::Right(value) => println!("Fast won: {}", value),
        }
    });
}

trpl::select takes two futures and returns whichever completes first.

The Either Type

The return type is Either: an enum whose variant records which future finished first. Conceptually, it looks like this (simplified):
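
enum Either<A, B> {
    Left(A),  // the first future finished first; here's its output
    Right(B), // the second future finished first; here's its output
}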

This lets you know which future won, not just the result.

Building Timeouts

One of the most useful applications is timeouts:

async fn fetch_with_timeout(url: &str) -> Result<String, &'static str> {
    let fetch = fetch_url(url);
    let timeout = trpl::sleep(Duration::from_secs(10));
    
    match trpl::select(fetch, timeout).await {
        trpl::Either::Left(result) => Ok(result),
        trpl::Either::Right(_) => Err("Request timed out"),
    }
}

If fetch_url completes within 10 seconds, we get Either::Left with the result. If the sleep completes first, we get Either::Right and return an error.

What Happens to the "Losing" Future?

When one future wins, the other is dropped and never completes. This means:

  • any work remaining in the losing future is abandoned mid-flight
  • resources it held are released through its Drop implementations
  • side effects that already happened (a request already sent, a file already written) are not undone

This is usually fine for timeouts (we don't care about finishing a timed-out request), but be aware if your futures have important side effects.
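
A quick sketch of what "dropped and never completes" looks like in practice; the loser's code after its last reached await point simply never runs:

use std::time::Duration;

fn main() {
    trpl::block_on(async {
        let fast = async { "fast" };
        let slow = async {
            trpl::sleep(Duration::from_secs(1)).await;
            // Never reached: `slow` is dropped while parked at the sleep.
            println!("this line never prints");
            "slow"
        };
        let _ = trpl::select(fast, slow).await;
    });
}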

When to Use select

Use select when:

  • you only need the first result, not all of them
  • you're implementing timeouts
  • you're racing redundant sources of the same data
  • you want to react to whichever of several events fires first


Part 11: Working With Any Number of Futures

So far we've used join with a fixed number of futures. But what if you have a dynamic number?

The Problem

// This works - fixed number
let (a, b, c) = trpl::join!(future1, future2, future3);

// But what about this?
let futures: Vec<SomeFuture> = get_futures();
// How do we join all of them?

join_all: Joining a Collection

fn main() {
    trpl::block_on(async {
        let urls = vec![
            "https://example.com/a",
            "https://example.com/b", 
            "https://example.com/c",
        ];
        
        // Create a future for each URL
        let futures: Vec<_> = urls
            .iter()
            .map(|url| fetch_url(url))
            .collect();
        
        // Wait for all of them
        let results: Vec<String> = trpl::join_all(futures).await;
        
        for result in results {
            println!("{}", result);
        }
    });
}

join_all takes any iterator of futures and returns a future that completes when ALL of them complete, giving you a Vec of all the results.

The Complexity: Different Future Types

Here's where things get tricky. What if your futures have different types?

let future1 = async { 42 };           // Future<Output = i32>
let future2 = async { "hello" };      // Future<Output = &str>

// Can't put these in a Vec together!

join_all needs all futures to be the same type. For different types, you need the join! macro (fixed number) or trait objects.

Trait Objects for Heterogeneous Futures

use std::pin::Pin;
use std::future::Future;

fn main() {
    trpl::block_on(async {
        let futures: Vec<Pin<Box<dyn Future<Output = ()>>>> = vec![
            Box::pin(async { println!("first"); }),
            Box::pin(async { println!("second"); }),
            Box::pin(async { 
                trpl::sleep(Duration::from_secs(1)).await;
                println!("third"); 
            }),
        ];

        trpl::join_all(futures).await;
    });
}

The Box::pin wraps each future in a pinned box, and dyn Future<Output = ()> is a trait object that can hold any future with that output type.

This is where Pin becomes practically important: you need it to put futures in collections.


Part 12: Yielding and Fairness

The Starvation Problem

What happens if a future does a lot of work without any await points?

fn main() {
    trpl::block_on(async {
        let compute_heavy = async {
            // No await points!
            for i in 0..1_000_000_000 {
                // Heavy computation
            }
            "done computing"
        };
        
        let other_work = async {
            println!("I never get to run until compute_heavy finishes!");
            "other done"
        };
        
        trpl::join(compute_heavy, other_work).await;
    });
}

Even though we're using join, other_work doesn't get to run until compute_heavy finishes its billion iterations. Why?

Because the runtime can only switch tasks at await points!

Between await points, code runs synchronously. If you don't await, you don't yield control.

The Solution: Explicit Yielding

For CPU-heavy work that needs to play nice with other tasks:

let compute_heavy = async {
    for i in 0..1_000_000 {
        // Do some work
        heavy_computation();
        
        // Periodically yield control
        if i % 1000 == 0 {
            trpl::yield_now().await;
        }
    }
};

yield_now() is an async function that immediately completes but gives the runtime a chance to switch to other tasks.

The Key Insight About Yield Points

Every .await is a potential pause point. The runtime looks at each await as an opportunity to:

  1. Check if other futures can make progress
  2. Switch to a different task if needed
  3. Come back later when this future is ready

If you never await, you never give the runtime this opportunity!


Part 13: Handling Errors in Async Code

Async code can fail just like sync code. Let's understand how errors work with async/await.

async Functions That Return Result

Async functions can return Result just like regular functions:

async fn fetch_user(id: u64) -> Result<User, FetchError> {
    // Might succeed with a User
    // Might fail with a FetchError
}

Using ? in Async Functions

The ? operator works in async functions exactly like in sync functions:

async fn process_user(id: u64) -> Result<String, FetchError> {
    let user = fetch_user(id).await?;  // If error, return early
    let profile = fetch_profile(&user).await?;  // If error, return early
    Ok(format!("User {} has profile {}", user.name, profile.bio))
}

The ? after .await is applied to the Result that the Future produces, not to the Future itself.

Error Handling with join

Here's something important: join always waits for ALL Futures to complete, even if some fail.

async fn fetch_all() {
    let (result1, result2) = trpl::join(
        might_fail_1(),  // Returns Result<A, Error>
        might_fail_2()   // Returns Result<B, Error>
    ).await;
    
    // result1 and result2 are both Results
    // We need to handle them individually
    match (result1, result2) {
        (Ok(a), Ok(b)) => println!("Both succeeded!"),
        (Err(e), Ok(_)) => println!("First failed: {:?}", e),
        (Ok(_), Err(e)) => println!("Second failed: {:?}", e),
        (Err(e1), Err(e2)) => println!("Both failed!"),
    }
}
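
If you'd rather treat the whole batch as failed when either piece fails, a common follow-up is to join first and then propagate with ? (a sketch reusing the hypothetical types and functions from the example above):

async fn fetch_all_strict() -> Result<(A, B), Error> {
    // Both futures still run to completion...
    let (result1, result2) = trpl::join(might_fail_1(), might_fail_2()).await;
    // ...and only afterward do we surface the first error, if any.
    Ok((result1?, result2?))
}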

Part 14: Async Blocks

Besides async functions, you can create async blocks: anonymous chunks of async code.

The Syntax

let my_future = async {
    // async code here
    do_something().await;
    42  // The "return value" of this async block
};

An async block creates a Future that you can store in a variable, pass to functions, or await later.

When Async Blocks Are Useful

Capturing local variables:

async fn process(data: Vec<String>) {
    let processed = async {
        // This block captures `data` from the outer scope
        let mut results = Vec::new();
        for item in &data {
            results.push(transform(item).await);
        }
        results
    };
    
    let results = processed.await;
}

Creating Futures to pass to join or select:

fn main() {
    trpl::block_on(async {
        let future1 = async {
            trpl::sleep(Duration::from_secs(1)).await;
            "first"
        };
        
        let future2 = async {
            trpl::sleep(Duration::from_secs(2)).await;
            "second"
        };
        
        let (r1, r2) = trpl::join(future1, future2).await;
    });
}

Part 15: Streams: Async Iterators

In synchronous Rust, you have iterators that give you values one at a time. In async Rust, you have streams: the async equivalent.

The Concept

An iterator gives you values synchronously:

for item in vec![1, 2, 3] {
    process(item);
}

A stream gives you values asynchronously: each value might take time to arrive:

while let Some(item) = stream.next().await {
    process(item);
}

Real-World Examples of Streams

Streams are natural for:

Why Not Just Return a Vec?

You might wonder: why not just await a Vec of all items?

// Option 1: Get everything at once
let all_messages: Vec<Message> = get_all_messages().await;

// Option 2: Stream them one by one
while let Some(message) = message_stream.next().await {
    process(message);
}

Streams are better when:

  • you want to start processing before everything has arrived
  • the data may be large or unbounded, so buffering it all would be wasteful
  • items trickle in over time and you want to react to each one promptly

Creating Streams from Iterators

The simplest way to get a stream is from an existing iterator:

use trpl::StreamExt;

fn main() {
    trpl::block_on(async {
        let data = vec![1, 2, 3, 4, 5];
        let mut stream = trpl::stream_from_iter(data);
        
        while let Some(num) = stream.next().await {
            println!("Got: {}", num);
        }
    });
}

The StreamExt Trait

To get the next() method (and other helpful methods), you need StreamExt:

use trpl::StreamExt;

This is similar to how Iterator has methods like map, filter, etc.; StreamExt provides async-aware versions:

use trpl::StreamExt;

fn main() {
    trpl::block_on(async {
        let numbers = trpl::stream_from_iter(vec![1, 2, 3, 4, 5, 6]);
        
        let mut doubled_evens = numbers
            .filter(|n| n % 2 == 0)  // Keep even numbers
            .map(|n| n * 2);         // Double them
        
        while let Some(num) = doubled_evens.next().await {
            println!("{}", num);  // 4, 8, 12
        }
    });
}

Streams from Channels

Async channels naturally produce streams:

fn main() {
    trpl::block_on(async {
        let (tx, mut rx) = trpl::channel();
        
        // Sender
        let sender = async move {
            for i in 1..=5 {
                tx.send(i).unwrap();
                trpl::sleep(Duration::from_millis(100)).await;
            }
        };
        
        // Receiver - rx is like a stream!
        let receiver = async {
            while let Some(num) = rx.recv().await {
                println!("Received: {}", num);
            }
        };
        
        trpl::join(sender, receiver).await;
    });
}

The receiver channel (rx) gives you values as they arrive, one at a time: that's a stream!

Merging Streams

What if you have multiple streams and want to process items from any of them as they arrive?

use trpl::StreamExt;

fn main() {
    trpl::block_on(async {
        let stream1 = trpl::stream_from_iter(vec![1, 2, 3]);
        let stream2 = trpl::stream_from_iter(vec![10, 20, 30]);
        
        let mut merged = stream1.merge(stream2);
        
        while let Some(num) = merged.next().await {
            println!("Got: {}", num);
        }
    });
}

Items from both streams are interleaved as they become available.


Part 16: Pin and Unpin: Making Self-Referential Futures Safe

This is the trickiest part of async Rust. Let me explain it step by step.

Why Pin Exists: Self-Referential Structs

When Rust compiles an async block, it creates a struct (a state machine) that holds all the local variables needed to resume execution. Consider:

async fn example() {
    let data = String::from("hello");
    let slice = &data[0..2];  // slice references data
    
    some_async_thing().await;
    
    println!("{}", slice);  // use slice after await
}

The compiler creates something like:

struct ExampleFuture {
    data: String,
    slice: /* reference to data */,  // Points INTO this same struct!
    state: State,
}

This is a self-referential struct: slice points to data which is in the same struct.

The Problem With Moving

Normally in Rust, you can move values freely. But what happens if we move this struct?

Before move:           After move:
┌─────────────┐        ┌─────────────┐
│ data: "hi"  │◄─┐     │ data: "hi"  │
│ slice: ─────┼──┘     │ slice: ─────┼──┐
└─────────────┘        └─────────────┘  │
   Address: 100           Address: 200  │
                                        │
                          ┌─────────────┘
                          ▼
                    Address: 100  ← INVALID!

After the move, slice still points to address 100, but data is now at address 200. slice is a dangling pointer!

Pin to the Rescue

Pin is a wrapper that prevents moving. Once a value is pinned, it cannot be moved to a different memory address.

use std::pin::Pin;

let future = async { /* ... */ };
let pinned: Pin<Box<...>> = Box::pin(future);

// The future inside cannot be moved anymore
// Its internal references are safe

The Unpin Trait

Here's the twist: most types in Rust don't have self-references. For them, moving is perfectly safe. These types implement Unpin.

If a type is Unpin, pinning it has no effect: you can still move it. Pin only restricts movement for !Unpin types.
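
A tiny demonstration that pinning an Unpin type restricts nothing:

use std::pin::Pin;

fn main() {
    let mut x = 5_i32;                 // i32 is Unpin
    let mut pinned = Pin::new(&mut x); // Pin::new only exists for Unpin types
    *pinned = 6;                       // still freely mutable through the pin
    assert_eq!(x, 6);
}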

Why You Usually Don't Need to Think About This

The good news: most of the time, you don't need to worry about pinning!

The async/await machinery handles it for you. When you write:

my_future.await

Rust takes care of pinning internally.

When You DO Encounter Pin

You might see Pin in:

Scenario 1: Storing futures in a collection

// Won't compile without Pin!
let futures: Vec<Pin<Box<dyn Future<Output = ()>>>> = vec![
    Box::pin(async { /* ... */ }),
    Box::pin(async { /* ... */ }),
];

trpl::join_all(futures).await;

Scenario 2: Returning futures from functions

fn make_future() -> Pin<Box<dyn Future<Output = i32>>> {
    Box::pin(async {
        42
    })
}

The Practical Takeaway

  1. Most of the time: .await handles pinning for you. Don't worry about it.

  2. When you need dynamic futures: Use Box::pin() to create Pin<Box<dyn Future<...>>>

  3. If you see Pin errors: The compiler is telling you a future might be moved unsafely. Box::pin() usually fixes it.

  4. Advanced cases: If you're implementing Future manually or doing complex async patterns, you'll need to understand Pin deeply. But that's beyond beginner scope.


Part 17: Futures, Tasks, and Threads: The Full Picture

Let's understand how all these concepts relate to each other.

The Hierarchy

┌─────────────────────────────────────────┐
│              THREADS                    │
│  (OS-managed, true parallelism)         │
│                                         │
│  ┌─────────────────────────────────┐    │
│  │           TASKS                 │    │
│  │  (Runtime-managed, concurrent)  │    │
│  │                                 │    │
│  │  ┌─────────────────────────┐    │    │
│  │  │       FUTURES           │    │    │
│  │  │  (Individual async ops) │    │    │
│  │  └─────────────────────────┘    │    │
│  └─────────────────────────────────┘    │
└─────────────────────────────────────────┘

How They Work Together

A typical async runtime:

  1. Has a thread pool (say, 4 threads for 4 CPU cores)
  2. Manages many tasks (could be thousands)
  3. Each task contains futures
  4. Tasks are distributed across threads
  5. When a future awaits, the runtime can switch to another task on that thread

When to Use Each

Use futures (within a single task) when:

  • the operations are closely related and you want to coordinate them in one place with join or select
  • the work doesn't need to outlive the current scope

Use tasks (spawn_task) when:

  • the work is independent and should be scheduled on its own
  • it may run longer than, or independently of, the code that started it

Use threads when:

  • the work is CPU-bound and you want true parallelism
  • you're calling blocking APIs that would stall an async runtime

Combining Them

Real programs often combine all three:

use std::thread;

fn main() {
    // Spawn a thread for CPU-heavy work
    let compute_handle = thread::spawn(|| {
        expensive_computation()
    });
    
    // Async runtime for I/O work
    trpl::block_on(async {
        // Spawn tasks for independent async work
        let task1 = trpl::spawn_task(async {
            fetch_from_api_a().await
        });
        
        let task2 = trpl::spawn_task(async {
            fetch_from_api_b().await
        });
        
        // Join futures within this task
        let (result1, result2) = trpl::join(
            task1,
            task2
        ).await;
        
        // Wait for the CPU thread (join() blocks the thread, which is
        // fine here because our async work above is already done)
        let compute_result = compute_handle.join().unwrap();
        
        combine_results(result1.unwrap(), result2.unwrap(), compute_result)
    });
}

The Key Insight

Async doesn't replace threads; they complement each other.


Part 18: When NOT to Use Async

Async isn't always the answer. Let's be clear about when to avoid it.

CPU-Bound Work

Async doesn't help with CPU-bound work. If you're doing calculations, you're not waiting; you're working. Async is about efficient waiting.

// DON'T do this
async fn calculate_fibonacci(n: u64) -> u64 {
    // No await points here!
    // This is pure CPU work
    // Async adds overhead with no benefit
    fib(n)
}

For CPU-bound work, use threads or keep it synchronous.

Simple, One-Off Operations

If you're just doing one thing and waiting for it, async adds complexity without benefit:

// Overkill for a simple script
fn main() {
    trpl::block_on(async {
        let contents = async_read_file("file.txt").await;
        println!("{}", contents);
    });
}

// Simpler, just use sync
fn main() {
    let contents = std::fs::read_to_string("file.txt").unwrap();
    println!("{}", contents);
}

When Simplicity Wins

Async code is more complex than sync code. It has:

  • extra concepts to learn (futures, runtimes, pinning, executors)
  • an extra dependency and more build configuration (the runtime)
  • harder-to-read compiler errors once impl Future and Pin show up
  • a split between async and sync functions that tends to spread through a codebase

If you're not handling many concurrent operations, sync code is simpler and that simplicity has value.


Tokio in the Real World (some extra)

The trpl Crate vs Real-World Code

The Rust Book uses trpl to simplify learning. But in production code, you'll typically use Tokio directly.

Here's the relationship:

  In the Book (trpl)               In Real Code (Tokio)
  trpl::block_on(async { ... })    #[tokio::main] attribute
  trpl::spawn_task()               tokio::spawn()
  trpl::sleep()                    tokio::time::sleep()
  trpl::join()                     tokio::join! macro
  trpl::select()                   tokio::select! macro
  trpl::channel()                  tokio::sync::mpsc::channel()

Setting Up Tokio

In your Cargo.toml:

[dependencies]
tokio = { version = "1", features = ["full"] }

The #[tokio::main] Attribute

Instead of block_on, Tokio provides a convenient attribute:

// What you write:
#[tokio::main]
async fn main() {
    let data = fetch_data().await;
    println!("{}", data);
}

// What the compiler transforms it into (roughly):
fn main() {
    tokio::runtime::Runtime::new()
        .unwrap()
        .block_on(async {
            let data = fetch_data().await;
            println!("{}", data);
        })
}

The #[tokio::main] attribute is just syntactic sugar that:

  1. Creates a Tokio runtime
  2. Calls block_on with your async main function
  3. Handles all the boilerplate for you

A Real Tokio Example

use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    let task1 = tokio::spawn(async {
        sleep(Duration::from_secs(1)).await;
        println!("Task 1 done");
        42
    });
    
    let task2 = tokio::spawn(async {
        sleep(Duration::from_millis(500)).await;
        println!("Task 2 done");
        "hello"
    });
    
    // Wait for both
    let (result1, result2) = tokio::join!(task1, task2);
    
    println!("Results: {:?}, {:?}", result1.unwrap(), result2.unwrap());
}

Why the Book Uses trpl Instead

The trpl crate:

  • re-exports just the pieces of Tokio and the futures crate that the examples need
  • hides runtime setup and feature-flag choices so listings stay short
  • keeps the focus on concepts rather than any one runtime's API surface

Once you understand async with trpl, switching to Tokio is straightforward — the concepts are identical, just the function names differ slightly.


Summary: The Complete Mental Model

Let me tie everything together:

Futures are lazy recipes for producing values. They do nothing until polled.

Async blocks create futures. Code inside runs sequentially between await points.

.await polls a future and yields control if not ready. It's the cooperation point.

Runtimes (trpl::block_on) drive futures by polling them repeatedly until done.

join runs multiple futures concurrently, completing when ALL finish.

select runs multiple futures concurrently, completing when ANY ONE finishes (returning Either).

spawn_task creates an independent task that runs separately from the current task.

join_all handles dynamic numbers of futures (needs Pin<Box<...>> for trait objects).

async move takes ownership of captured variables (important for channels and lifetimes).

Streams are async iterators: values arrive over time, processed with while let loops.

Pin prevents futures from moving in memory, making self-referential structs safe.

Tasks, futures, and threads are complementary: futures for individual operations, tasks for independent concurrent work, threads for CPU parallelism.

The key insight: Async is about efficient waiting. When your program spends more time waiting than working, async lets you do more with fewer resources. But concurrency only happens when you explicitly create multiple futures and join/select them; a single chain of awaits is still sequential!

This chapter is foundational for participating in the Rust ecosystem. Once you're comfortable with these concepts, you'll be ready for Tokio, async web frameworks, and the many other async crates that production Rust services are built on.


Async and Await Exercises

Here are exercises to build your async muscle memory. They progress from basic concepts to more complex patterns.


Exercise 1: Your First Async Function

Create an async function called greet that takes a name: &str parameter and returns a String with the greeting "Hello, {name}!".

In main, use trpl::block_on to run an async block that:

  1. Calls greet with your name
  2. Prints the result

This exercise just gets you comfortable with the basic syntax.


Exercise 2: Sequential Awaits

Create three async functions (for example, task_one, task_two, and task_three) that each sleep for 500ms and then return a number (say, 1, 2, and 3).

In your async main block, call all three sequentially (one after another) and print the sum of their results.

Question to answer: How long does the total execution take? Why?

Hint: Use trpl::sleep(Duration::from_millis(500)) for sleeping.


Exercise 3: Concurrent with join

Using the same three functions from Exercise 2, modify your main to run all three concurrently using trpl::join!.

Print the sum of results just like before.

Question to answer: How long does the total execution take now? Why is it different from Exercise 2?


Exercise 4: Understanding Sequential Within Async Blocks

Here's code where the programmer wants sending and receiving to happen concurrently, expecting each message to print about 200ms after the previous one. Instead, all messages are received in a burst after every send has completed:

fn main() {
    trpl::block_on(async {
        let (tx, mut rx) = trpl::channel();
        
        let messages = vec!["First", "Second", "Third", "Fourth"];
        for msg in messages {
            tx.send(msg).unwrap();
            trpl::sleep(Duration::from_millis(200)).await;
        }
        
        while let Some(msg) = rx.recv().await {
            println!("Received: {}", msg);
        }
    });
}

  1. Explain why all messages are received at once (after all sends complete) rather than interleaved
  2. Fix the code so sending and receiving happen concurrently (messages print as they're sent)
  3. Make sure the program terminates properly (hint: you'll need async move)

Exercise 5: Racing with select

Create an async function slow_server() that sleeps for 3 seconds then returns "Response from slow server".

Create an async function fast_server() that sleeps for 1 second then returns "Response from fast server".

In your main:

  1. Race both servers using trpl::select
  2. Print which server responded and what the response was
  3. Use the Either type to handle both cases

Expected output: The fast server should win, and you should see output after ~1 second, not 3.


Exercise 6: Implementing a Timeout

Create an async function unreliable_fetch() that sleeps for a random-ish duration (use 4 seconds to simulate a slow response) and returns "Data retrieved!".

Write a function fetch_with_timeout that:

  1. Takes a duration as the timeout limit
  2. Races unreliable_fetch() against trpl::sleep(timeout)
  3. Returns Ok(String) if the fetch completes in time
  4. Returns Err("Timeout!") if the timeout wins

Test it with a 2-second timeout (should fail) and a 5-second timeout (should succeed).


Exercise 7: spawn_task and Join Handles

Create a program that:

  1. Spawns a task that counts from 1 to 10, printing each number with a 300ms delay
  2. In the main task, counts from 1 to 5, printing each number with a 500ms delay
  3. Waits for the spawned task to complete before exiting

You should see interleaved output from both counters. The spawned task should complete all 10 counts (not get cut off).

Hint: Save the join handle and .await it at the end.


Exercise 8: Multiple Producers

Create a channel and spawn three separate tasks that each send messages:

In the main task, receive and print all messages as they arrive.

Make sure the program terminates properly (all senders must be dropped).

Hint: You'll need to clone the sender for each task, and use async move blocks.


Exercise 9: Working with Streams

Create a stream from the iterator 1..=10.

Use stream combinators to:

  1. Filter to keep only numbers divisible by 3
  2. Map each number to its square

Consume the stream with a while let loop, printing each value.

Expected output: 9, 36, 81 (which are 3², 6², 9²)

Hint: You'll need use trpl::StreamExt;


Exercise 10: Dynamic Futures with join_all

Write a function fetch_page(id: u32) that:

  1. Sleeps for id * 100 milliseconds (to simulate network latency)
  2. Returns a String such as "Page {id} content"

In main:

  1. Create a Vec of page IDs: vec![1, 2, 3, 4, 5]
  2. Map over this vec to create a vec of futures
  3. Use trpl::join_all to await all of them concurrently
  4. Print all the results

Question to answer: If run sequentially, this would take 100+200+300+400+500 = 1500ms. How long does it take with join_all? Why?


Exercise 11: The Starvation Problem

This code has a starvation problem:

fn main() {
    trpl::block_on(async {
        let compute = async {
            let mut sum: u64 = 0;
            for i in 0..1_000_000 {
                sum += i;
            }
            sum
        };
        
        let printer = async {
            for i in 1..=5 {
                println!("Printer: {}", i);
                trpl::sleep(Duration::from_millis(10)).await;
            }
        };
        
        let (result, _) = trpl::join(compute, printer).await;
        println!("Sum: {}", result);
    });
}

  1. Run this code and observe: does the printer output interleave with the computation, or does all printing happen after the sum is computed?
  2. Explain why this happens
  3. Fix the compute future so that printer gets chances to run during the computation

Hint: Use trpl::yield_now() periodically.


Exercise 12: Combining Async and Threads

Create a program that:

  1. Spawns an OS thread (using std::thread::spawn) that does CPU-heavy work: compute the sum of 1 to 10_000_000
  2. Meanwhile, in an async runtime, fetch "data" from three simulated async sources concurrently (each just sleeps for 500ms and returns a string)
  3. Wait for both the thread and the async operations to complete
  4. Print all results

This exercises the pattern of using threads for CPU work and async for I/O work.

Hint: std::thread::spawn returns a JoinHandle whose .join() call blocks the current thread, so call it outside the async block, or after your async work is finished, as in the Part 17 example.


Exercise 13: Building an Async Timeout Wrapper (Challenge)

Create a generic timeout wrapper function:

async fn with_timeout<T>(
    future: impl Future<Output = T>,
    timeout_ms: u64
) -> Result<T, &'static str>

This function should:

  1. Race future against trpl::sleep(Duration::from_millis(timeout_ms)) using trpl::select
  2. Return Ok(value) if the future finishes first
  3. Return an Err if the sleep finishes first

Test it with:

  1. A fast operation (100ms) with a 500ms timeout → should succeed
  2. A slow operation (1000ms) with a 200ms timeout → should fail

Hint: Use trpl::select internally.


Exercise 14: Message Processor with Graceful Shutdown (Challenge)

Build a message processing system:

  1. Create a channel for sending "jobs" (just strings)
  2. Spawn a worker task that:
    • Receives jobs from the channel
    • "Processes" each job by sleeping 100ms and printing "Processed: {job}"
    • Exits cleanly when the channel closes
  3. In the main task:
    • Send 5 jobs: "job1" through "job5"
    • After sending all jobs, close the channel (drop the sender)
    • Wait for the worker to finish
    • Print "All jobs completed!"

The key learning: how channel closure signals the worker to exit.


Good luck! These exercises build on each other, so if you get stuck on a later one, make sure you fully understand the earlier concepts. The most important things to internalize are:

  1. Code in a single async block runs sequentially
  2. Concurrency requires multiple futures being driven together (join, select, spawn_task)
  3. async move transfers ownership into the async block
  4. Channels close when all senders are dropped