Fundamentals of Asynchronous Programming in Rust
January 3, 2026
Part 1: Understanding the Problem Async Solves
Before we learn how to use async/await, we need to deeply understand why it exists. What problem does it solve? When would you actually need it?
The Fundamental Problem: Waiting
Programs often need to wait for things:
- Waiting for data to arrive over the network
- Waiting for a file to be read from disk
- Waiting for a user to type something
- Waiting for a database query to return results
- Waiting for a timer to expire
Here's the crucial insight: waiting is not the same as working. When your program waits for a network response, your CPU isn't doing useful work; at best it sits idle, at worst it burns cycles asking "is the data here yet? no? okay, check again..."
A Real-World Analogy: The Restaurant Kitchen
Imagine you're a chef in a restaurant kitchen. You have multiple orders to prepare:
- Order A: Steak (needs 10 minutes to cook)
- Order B: Salad (needs 2 minutes to prepare)
- Order C: Soup (needs 5 minutes to heat)
The inefficient way (synchronous/blocking):
- Start cooking the steak
- Stand there watching the steak for 10 minutes
- Serve the steak
- Start making the salad
- Spend 2 minutes on it
- Serve the salad
- Start heating the soup
- Wait 5 minutes
- Serve the soup
Total time: 17 minutes. But here's the problem: while you were standing there watching the steak cook, you could have been making the salad! You spent most of your time waiting, not working.
The efficient way (asynchronous):
- Put the steak on the grill (it will cook on its own)
- Put the soup on to heat (it will heat on its own)
- While those are cooking, make the salad
- Serve the salad (done in 2 minutes!)
- Check the soup: it's ready! Serve it (done at 5 minutes)
- Check the steak: it's ready! Serve it (done at 10 minutes)
Total time: 10 minutes. Same amount of actual work, but much less waiting.
How This Applies to Programming
In programming terms:
- Synchronous (blocking) = Do one thing, wait for it to finish completely, then do the next thing
- Asynchronous (non-blocking) = Start a task, and while waiting for it, go do other useful work
Why Not Just Use Threads?
You might think: "I already learned about threads in Chapter 16! Can't I just spawn a thread for each task?"
Yes, you can! But threads have costs:
- Memory overhead: Each thread needs its own stack (typically 1-8 MB of memory)
- CPU overhead: Switching between threads takes time
- Complexity: Coordinating threads requires locks, channels, careful thinking about data races
For a web server handling 10,000 simultaneous connections:
- With threads: 10,000 threads × ~2MB each = ~20GB of memory just for stacks!
- With async: One thread can handle all 10,000 connections, using maybe ~100MB total
Async is about efficiency: doing more with fewer resources.
When to Use Async vs Threads
This is a critical decision point that confuses many people. Here's how to think about it:
Use threads when:
- You have CPU-intensive work (calculations, processing, number crunching)
- You want true parallelism (work happening simultaneously on multiple CPU cores)
- You have a small number of tasks
Use async when:
- You have I/O-intensive work (network requests, file operations, database queries)
- You have many tasks that spend most of their time waiting
- You want to handle thousands of concurrent operations efficiently
The key question: Is my task mostly computing or mostly waiting?
| Task Type | Example | Best Approach |
|---|---|---|
| CPU-bound | Processing images, running simulations | Threads |
| I/O-bound | Web server handling requests | Async |
| Mixed | Download files then process them | Async for downloads, threads for processing |
Part 2: What is a Future?
Before we can understand async and await, we need to understand the concept of a Future.
The Concept: A Promise of a Value
A Future represents a value that doesn't exist yet, but will exist eventually.
Think of it like ordering a package online:
- When you place the order, you get a tracking number (the Future)
- The package doesn't exist in your hands yet
- Eventually, the package will arrive (the Future completes)
- At that point, you can use the package (get the value)
In Rust terms, a Future is something that will eventually produce a value, but might not have that value right now.
The Future Trait: A Peek Under the Hood
When we talk about "futures," we're talking about types that implement the Future trait:
pub trait Future {
type Output;
fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
}
pub enum Poll<T> {
Ready(T), // Done! Here's the value
Pending, // Not ready yet, check back later
}
How It Works
When you .await a future, the runtime repeatedly calls poll:
- `Poll::Ready(value)` → The future is complete, here's your result
- `Poll::Pending` → Not done yet; the runtime suspends this task and works on others
The runtime doesn't busy-loop — when a future returns Pending, it registers a "waker" that notifies the runtime when to try again.
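To make the trait concrete, here's a tiny future implemented by hand; a minimal sketch using only the standard library (the ReadyNow type is invented for illustration):

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// A future that is ready on the very first poll. Real futures usually
// return Pending at least once and arrange for a wake-up via the waker.
struct ReadyNow(Option<i32>);

impl Future for ReadyNow {
    type Output = i32;

    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<i32> {
        // take() the value so polling again after completion fails loudly
        Poll::Ready(self.0.take().expect("polled after completion"))
    }
}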
What async/await Compiles Into
When you write an async fn, Rust transforms it into a state machine struct that implements Future. Each .await becomes a state where the machine might pause (Pending) or continue (Ready).
You almost never implement Future manually; the compiler generates all of this when you use async/await. But knowing it exists helps you understand why futures are lazy (the struct exists, but poll hasn't been called yet) and why .await points are where task-switching can happen.
Futures Are Lazy
This is extremely important to understand: in Rust, Futures are lazy. They don't do anything until you explicitly ask them to make progress.
This is different from some other languages! In JavaScript, when you call an async function, it immediately starts running in the background. In Rust, when you call an async function, you just get back a Future that hasn't done anything yet.
Think of it like this:
- JavaScript async: Ordering food delivery: they start cooking immediately
- Rust async: Getting a recipe card: nothing happens until you actually start cooking
This might seem like extra work, but it gives you more control. You can create futures, store them, combine them, and decide exactly when and how they run.
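You can see the laziness for yourself; a quick sketch using the same trpl::block_on the rest of this chapter relies on:

fn main() {
    trpl::block_on(async {
        // Creating the future runs none of its body:
        let fut = async { println!("side effect!") };
        println!("future created, nothing has run yet");
        // Only awaiting it executes the body:
        fut.await; // "side effect!" prints here
    });
}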
Part 3: The async and await Keywords
Now let's see how Rust lets you write asynchronous code.
Defining an Async Function
You mark a function as async by putting async before fn:
async fn fetch_data() -> String {
// ... do something that might take a while ...
String::from("Here's your data!")
}
What Does async Actually Do?
When you write async fn, Rust transforms your function. It doesn't return String directly; it returns something that will produce a String eventually.
Conceptually, this:
async fn fetch_data() -> String {
String::from("data")
}
Becomes something like this (simplified):
fn fetch_data() -> impl Future<Output = String> {
// Returns a Future that, when completed, gives you a String
}
The return type impl Future<Output = String> means "some type that implements the Future trait and will eventually produce a String."
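You can even write that desugared form yourself; an async block in the body produces exactly such a type. A sketch:

use std::future::Future;

// Equivalent to `async fn fetch_data() -> String`:
fn fetch_data() -> impl Future<Output = String> {
    async { String::from("data") }
}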
The await Keyword
await is how you say "I need the value from this Future, and I'm willing to wait for it."
async fn do_something() {
let data = fetch_data().await;
println!("Got: {}", data);
}
Notice that Rust's await keyword goes after the expression you're awaiting, not before it. That is, it's a postfix keyword. This might differ from what you're used to if you've used async in other languages, but in Rust it makes chains of methods much nicer to work with:
// You can chain methods nicely
let text = fetch_url(url).await.text().await;
What .await Actually Does
The .await does several things:
- Starts executing the Future (remember, Futures are lazy!)
- If the Future isn't ready, yields control so other work can happen
- When the Future completes, gives you the actual value
The Critical Insight: await Is a Yield Point
When you write .await, you're saying: "If this isn't ready, let someone else use the CPU while I wait."
This is what makes async efficient! Instead of blocking the entire thread waiting for data, you're cooperatively sharing the thread with other tasks.
Think back to our chef analogy:
- `.await` is like saying "this needs time to cook, I'll go check on other dishes"
- When you `.await` a network request, you're saying "this might take time, let other requests be handled while I wait"
Part 4: The Async Runtime: The Missing Piece
Here's something that surprises many Rust newcomers: Rust doesn't include an async runtime in the standard library.
What Is a Runtime?
Remember when I said Futures are lazy? Something needs to actually drive them to completion. That something is called a runtime or executor.
The runtime's job is to:
- Keep track of all your Futures
- Check which ones can make progress
- Run the ones that are ready
- Efficiently wait when nothing can progress
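To make that job concrete, here's a toy executor that busy-polls a single future to completion. This is only a sketch: real runtimes park the thread and rely on wakers instead of spinning, and this version assumes a recent Rust (Waker::noop was stabilized in 1.85):

use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, Waker};

// Drive one future to completion by polling it in a loop.
fn toy_block_on<F: Future>(future: F) -> F::Output {
    let mut future = pin!(future); // poll() requires a pinned future
    let mut cx = Context::from_waker(Waker::noop());
    loop {
        match future.as_mut().poll(&mut cx) {
            Poll::Ready(value) => return value,
            // A real runtime would sleep until the waker fires;
            // we just spin and try again.
            Poll::Pending => std::thread::yield_now(),
        }
    }
}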
Why Doesn't Rust Include One?
This is a deliberate design decision. Different use cases need different runtimes:
- A web server might need a runtime optimized for thousands of connections
- An embedded system might need a tiny, minimal runtime
- A GUI application might need a runtime that integrates with the event loop
By not picking one, Rust lets you choose the best tool for your specific situation.
The trpl Crate
The Rust Book uses a teaching crate called trpl (short for "The Rust Programming Language") that wraps the popular Tokio runtime and the futures crate to keep things simpler for learning.
To use it, you'd add it to your Cargo.toml:
[dependencies]
trpl = { path = "../trpl" }
Understanding What the Runtime Does
Without a runtime, this code does nothing:
async fn greet() -> String {
String::from("Hello!")
}
fn main() {
let future = greet(); // Creates a Future, but doesn't run it!
// The future just sits here, never executed
// We never get "Hello!"
}
The Future is created but never driven to completion. It's like having a recipe but never cooking it.
block_on: Bridging Sync and Async
block_on is the bridge between synchronous and asynchronous worlds:
fn main() {
trpl::block_on(async {
// Inside here, we can use .await
let greeting = greet().await;
println!("{}", greeting); // Prints "Hello!"
});
}
Here's what block_on does:
- It takes an async block (a Future)
- It sets up a runtime internally
- It blocks the current thread until that Future completes
- It returns whatever the Future produces
Think of it like this: your main function is synchronous (regular Rust). But you want to run async code. block_on says "okay, I'll sit here and drive this async code to completion, blocking until it's done."
Why Can't main Be Async?
You might wonder: why not just make main async?
// This WON'T compile!
async fn main() {
fetch_data().await;
}
The compiler will tell you: main function is not allowed to be async.
The reason is philosophical: something has to run the async code. Async code needs a runtime to drive it. But if main itself were async, what would drive main? You'd have a chicken-and-egg problem.
So main stays synchronous, and we use block_on to enter the async world.
Part 5: How Async Code Actually Executes: The Sequential Truth
This is a crucial concept that trips up many people: code within a single async block executes sequentially between await points.
The Misconception
People sometimes think that once you're in async land, everything magically runs concurrently. This is wrong!
trpl::block_on(async {
println!("Step 1");
do_thing_a().await;
println!("Step 2");
do_thing_b().await;
println!("Step 3");
do_thing_c().await;
println!("Step 4");
});
This code runs exactly in order: Step 1, then A completes, then Step 2, then B completes, then Step 3, then C completes, then Step 4. There's no concurrency here at all!
Sequential Execution: One After Another
When you .await one thing after another, they happen in sequence:
async fn sequential_example() {
println!("Starting first operation...");
let result1 = slow_operation_one().await; // Wait for this to finish
println!("Starting second operation...");
let result2 = slow_operation_two().await; // Then wait for this
println!("Starting third operation...");
let result3 = slow_operation_three().await; // Then wait for this
println!("All done!");
}
If each operation takes 1 second, this takes 3 seconds total. Each .await waits for completion before moving to the next line.
When Sequential Is What You Want
Sequential async is appropriate when:
- The second operation depends on the result of the first
- You need things to happen in a specific order
- The operations must not overlap for correctness
Example: Logging into a service, then fetching your profile:
async fn get_user_profile(username: &str, password: &str) -> Profile {
// Must log in first to get a token
let token = login(username, password).await;
// Can only fetch profile after we have the token
let profile = fetch_profile(&token).await;
profile
}
You must wait for login to complete before you can fetch the profile. Sequential makes sense here.
The Problem: Unnecessary Waiting
But what if operations don't depend on each other?
async fn fetch_dashboard_data() {
// These don't depend on each other!
let weather = fetch_weather().await; // Takes 500ms
let news = fetch_news().await; // Takes 300ms
let stock_prices = fetch_stocks().await; // Takes 400ms
display_dashboard(weather, news, stock_prices);
}
Total time: 1200ms (500 + 300 + 400)
But wait: why are we fetching news only after weather arrives? These are independent! We're wasting time.
Where Does Concurrency Come From Then?
Concurrency only happens when you have multiple futures being driven at the same time. A single chain of awaits is just sequential code that can pause.
The Key Insight
Within an async block: sequential execution, just like regular code.
Between async blocks/futures being joined or selected: concurrent execution.
Await points are where the runtime can switch to other work, but only if there is other work to switch to!
Part 6: Concurrent Execution: Doing Things "At the Same Time"
When operations don't depend on each other, we can run them concurrently: starting them all and waiting for all to complete.
The Concept: Concurrent vs Parallel
These terms are often confused, so let's be precise:
- Concurrent: Multiple tasks making progress over the same time period, possibly by interleaving
- Parallel: Multiple tasks literally executing at the same instant on different CPU cores
An analogy:
- Concurrent: One chef working on three dishes, switching between them as needed
- Parallel: Three chefs each working on one dish simultaneously
Async is primarily about concurrency. You might have only one thread, but it can juggle many tasks by switching between them at await points.
join: Waiting for Multiple Futures Together
The trpl::join function runs multiple Futures concurrently and waits for all of them to complete:
use std::time::Duration;

fn main() {
trpl::block_on(async {
let fut1 = async {
trpl::sleep(Duration::from_millis(500)).await;
"first"
};
let fut2 = async {
trpl::sleep(Duration::from_millis(300)).await;
"second"
};
// Run both concurrently, wait for both to complete
let (result1, result2) = trpl::join(fut1, fut2).await;
println!("{}, {}", result1, result2);
});
}
Now what happens:
- Both futures start "at the same time"
- While `fut1` is sleeping, `fut2` can make progress
- While both are sleeping, the runtime can do other work
- When BOTH complete, we continue
Total time: ~500ms (the slowest one), not 800ms!
How join Actually Works
join doesn't use multiple threads. Here's what really happens:
- Both Futures are created
- The runtime polls both of them
- When one is waiting (e.g., sleeping), the runtime checks the other
- The runtime efficiently waits when both are blocked
- As each completes, its result is stored
- When all are complete, `join` returns all results together
It's like our efficient chef: start the steak, start the soup, then check back on both.
Understanding the Return Type of join
join returns a tuple with all the results in order:
async fn example() {
// If these return String and i32 respectively:
let (a, b) = trpl::join(
returns_string(), // a: String
returns_i32(), // b: i32
).await;
}
The join! Macro for More Futures
When you have more than two futures, use the join! macro:
fn main() {
trpl::block_on(async {
let (weather, news, stocks) = trpl::join!(
fetch_weather(),
fetch_news(),
fetch_stocks()
);
display_dashboard(weather, news, stocks);
});
}
When to Use join
Use join when:
- You have multiple independent operations
- You need ALL of them to complete before continuing
- The operations don't depend on each other's results
Examples:
- Fetching data from multiple APIs to combine into one response
- Loading multiple resources for a game level
- Sending notifications through multiple channels (email, SMS, push)
Fairness in join
The trpl::join function is fair: it checks each future equally often, alternating between them, and never lets one race ahead if the other is ready.
This means futures make progress in a predictable, interleaved fashion. But this isn't guaranteed by all runtimes or all joining strategies! Some might let one future "race ahead." The exact behavior depends on the runtime implementation.
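You can watch trpl::join's fair alternation directly; a sketch that uses trpl::yield_now (covered properly in Part 12) to create await points:

fn main() {
    trpl::block_on(async {
        let a = async {
            for i in 1..=3 {
                println!("a: {i}");
                trpl::yield_now().await; // give the other future a turn
            }
        };
        let b = async {
            for i in 1..=3 {
                println!("b: {i}");
                trpl::yield_now().await;
            }
        };
        // With a fair join, the output alternates: a: 1, b: 1, a: 2, b: 2, ...
        trpl::join(a, b).await;
    });
}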
Part 7: A Common Mistake: The Message Channel Example
Let's see how sequential vs concurrent execution plays out with a realistic example. Imagine sending messages with delays:
use std::time::Duration;

fn main() {
trpl::block_on(async {
let (tx, mut rx) = trpl::channel();
// Send messages with delays
let messages = vec!["hi", "from", "the", "future"];
for msg in messages {
tx.send(msg).unwrap();
trpl::sleep(Duration::from_millis(500)).await;
}
// Receive messages
while let Some(message) = rx.recv().await {
println!("Got: {}", message);
}
});
}
Question: When do the messages get printed?
Answer: They DON'T get printed at 500ms intervals! They all get printed at once, 2 seconds after the program starts.
Why? Because this is ONE async block. The code runs sequentially:
- Send "hi", sleep 500ms
- Send "from", sleep 500ms
- Send "the", sleep 500ms
- Send "future", sleep 500ms
- NOW the while loop starts
- Print all four messages immediately (they're already in the channel)
The sleeps all happened, then the receives all happened. No interleaving!
The Fix: Separate Async Blocks
To get true concurrency, you need separate futures:
use std::time::Duration;

fn main() {
trpl::block_on(async {
let (tx, mut rx) = trpl::channel();
// Sending future
let send_future = async {
let messages = vec!["hi", "from", "the", "future"];
for msg in messages {
tx.send(msg).unwrap();
trpl::sleep(Duration::from_millis(500)).await;
}
};
// Receiving future
let receive_future = async {
while let Some(message) = rx.recv().await {
println!("Got: {}", message);
}
};
// Run both concurrently!
trpl::join(send_future, receive_future).await;
});
}
Now the sender and receiver run concurrently. Messages print at 500ms intervals because the receive loop can run while the send loop is sleeping.
Part 8: async move: Taking Ownership
Just like closures, async blocks can either borrow or take ownership of variables from their environment.
The Problem: Borrowing Doesn't Always Work
fn main() {
trpl::block_on(async {
let (tx, mut rx) = trpl::channel();
let send_future = async {
tx.send("hello").unwrap();
// tx is BORROWED here
};
let receive_future = async {
while let Some(msg) = rx.recv().await {
println!("{}", msg);
}
};
trpl::join(send_future, receive_future).await;
// Problem: receive_future never ends because tx is still alive!
});
}
This program hangs forever. Why?
- The channel receiver (`rx.recv()`) returns `None` only when the channel is closed
- The channel closes when all senders are dropped
- `tx` is borrowed by `send_future`, but the borrow ends when `send_future` completes
- But `tx` itself isn't dropped; it's still owned by the outer scope!
- So the channel never closes, and `rx.recv()` keeps waiting forever
The Solution: async move
fn main() {
trpl::block_on(async {
let (tx, mut rx) = trpl::channel();
let send_future = async move { // <-- move!
tx.send("hello").unwrap();
// tx is MOVED into this block and dropped when it ends
};
let receive_future = async {
while let Some(msg) = rx.recv().await {
println!("{}", msg);
}
};
trpl::join(send_future, receive_future).await;
});
}
Now tx is moved into the async block. When send_future completes, tx is dropped, the channel closes, rx.recv() returns None, and the program exits cleanly.
When to Use async move
Use async move when:
- The async block needs to own the data (not just borrow it)
- The async block might outlive the scope where the data was created
- You need the data to be dropped when the async block ends (like our channel example)
This is exactly analogous to move closures from Chapter 13, just applied to async blocks.
Part 9: Spawning Tasks: True Independence
So far, we've been running futures within a single task. But sometimes you want to spawn a completely independent task that runs on its own.
spawn_task: Fire and Forget (Almost)
use std::time::Duration;

fn main() {
trpl::block_on(async {
trpl::spawn_task(async {
for i in 1..10 {
println!("Spawned task: {}", i);
trpl::sleep(Duration::from_millis(500)).await;
}
});
for i in 1..5 {
println!("Main task: {}", i);
trpl::sleep(Duration::from_millis(500)).await;
}
});
}
spawn_task creates a new task that runs independently. It's similar to spawning a thread, but it's managed by the async runtime instead of the OS.
The Catch: Tasks Get Cancelled
There's an important behavior: when block_on finishes (when its async block completes), any spawned tasks are cancelled!
In the example above, the main task counts to 4, then exits. The spawned task only gets to run until that point, even though it was trying to count to 9.
Join Handles: Waiting for Spawned Tasks
If you need to wait for a spawned task, use its join handle:
use std::time::Duration;

fn main() {
trpl::block_on(async {
let handle = trpl::spawn_task(async {
for i in 1..10 {
println!("Spawned task: {}", i);
trpl::sleep(Duration::from_millis(500)).await;
}
"spawned task done"
});
for i in 1..5 {
println!("Main task: {}", i);
trpl::sleep(Duration::from_millis(500)).await;
}
// Wait for the spawned task to complete
let result = handle.await.unwrap();
println!("{}", result);
});
}
Now the main task waits for the spawned task before exiting, so all 9 iterations complete.
spawn_task vs join: When to Use Which
| Approach | Use When |
|---|---|
| `join(a, b)` | You need both results and want to wait for both |
| `spawn_task` | You want the task to be independent, possibly outliving the current scope |
| `spawn_task` + `handle.await` | You want independence but still need to wait eventually |
Part 10: Racing Futures: First One Wins
Sometimes you don't need all results; you just need one result, whichever comes first.
The Concept: Racing
Imagine you need to get some data, and you have two sources:
- A fast but sometimes unavailable server
- A slow but reliable backup server
You could try the fast one, wait, and if it fails, try the slow one. But that wastes time! Better approach: try both simultaneously, use whichever responds first.
select: First Future to Complete Wins
use std::time::Duration;

fn main() {
trpl::block_on(async {
let slow = async {
trpl::sleep(Duration::from_secs(5)).await;
"slow finished"
};
let fast = async {
trpl::sleep(Duration::from_secs(1)).await;
"fast finished"
};
let result = trpl::select(slow, fast).await;
// result tells us which one finished first
match result {
trpl::Either::Left(value) => println!("Slow won: {}", value),
trpl::Either::Right(value) => println!("Fast won: {}", value),
}
});
}
trpl::select takes two futures and returns whichever completes first.
The Either Type
The return type is Either: an enum that tells you which future won:
- `Either::Left(value)`: the first future completed first
- `Either::Right(value)`: the second future completed first

So you get not just the winning value, but also which source it came from.
Building Timeouts
One of the most useful applications is timeouts:
async fn fetch_with_timeout(url: &str) -> Result<String, &'static str> {
let fetch = fetch_url(url);
let timeout = trpl::sleep(Duration::from_secs(10));
match trpl::select(fetch, timeout).await {
trpl::Either::Left(result) => Ok(result),
trpl::Either::Right(_) => Err("Request timed out"),
}
}
If fetch_url completes within 10 seconds, we get Either::Left with the result. If the sleep completes first, we get Either::Right and return an error.
What Happens to the "Losing" Future?
When one future wins, the other is dropped. This means:
- It stops executing
- Any resources it holds are cleaned up
- It never completes
This is usually fine for timeouts (we don't care about finishing a timed-out request), but be aware if your futures have important side effects.
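You can observe the cleanup with a Drop impl; a sketch (the Guard type is invented for illustration):

use std::time::Duration;

struct Guard;
impl Drop for Guard {
    fn drop(&mut self) {
        println!("loser was dropped; cleanup ran");
    }
}

fn main() {
    trpl::block_on(async {
        let slow = async {
            let _guard = Guard; // dropped when this future is dropped
            trpl::sleep(Duration::from_secs(5)).await;
            println!("this line never runs");
        };
        let fast = async {
            trpl::sleep(Duration::from_millis(100)).await;
        };
        // fast wins; slow is dropped while still pending, so Guard's Drop runs
        trpl::select(slow, fast).await;
    });
}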
When to Use select
Use select when:
- You want the result from whichever source responds first
- You want to implement timeouts
- You want to cancel an operation if something else happens
- You're implementing "try this, but give up if X happens"
Part 11: Working With Any Number of Futures
So far we've used join with a fixed number of futures. But what if you have a dynamic number?
The Problem
// This works - fixed number
let (a, b, c) = trpl::join!(future1, future2, future3);
// But what about this?
let futures: Vec<SomeFuture> = get_futures();
// How do we join all of them?
join_all: Joining a Collection
fn main() {
trpl::block_on(async {
let urls = vec![
"https://example.com/a",
"https://example.com/b",
"https://example.com/c",
];
// Create a future for each URL
let futures: Vec<_> = urls
.iter()
.map(|url| fetch_url(url))
.collect();
// Wait for all of them
let results: Vec<String> = trpl::join_all(futures).await;
for result in results {
println!("{}", result);
}
});
}
join_all takes any iterator of futures and returns a future that completes when ALL of them complete, giving you a Vec of all the results.
The Complexity: Different Future Types
Here's where things get tricky. What if your futures have different types?
let future1 = async { 42 }; // Future<Output = i32>
let future2 = async { "hello" }; // Future<Output = &str>
// Can't put these in a Vec together!
join_all needs all futures to be the same type. For different types, you need the join! macro (fixed number) or trait objects.
Trait Objects for Heterogeneous Futures
use std::pin::Pin;
use std::future::Future;
use std::time::Duration;
fn main() {
trpl::block_on(async {
let futures: Vec<Pin<Box<dyn Future<Output = ()>>>> = vec![
Box::pin(async { println!("first"); }),
Box::pin(async { println!("second"); }),
Box::pin(async {
trpl::sleep(Duration::from_secs(1)).await;
println!("third");
}),
];
trpl::join_all(futures).await;
});
}
The Box::pin wraps each future in a pinned box, and dyn Future<Output = ()> is a trait object that can hold any future with that output type.
This is where Pin becomes practically important: you need it to put futures in collections.
Part 12: Yielding and Fairness
The Starvation Problem
What happens if a future does a lot of work without any await points?
fn main() {
trpl::block_on(async {
let compute_heavy = async {
// No await points!
for i in 0..1_000_000_000 {
// Heavy computation
}
"done computing"
};
let other_work = async {
println!("I never get to run until compute_heavy finishes!");
"other done"
};
trpl::join(compute_heavy, other_work).await;
});
}
Even though we're using join, other_work doesn't get to run until compute_heavy finishes its billion iterations. Why?
Because the runtime can only switch tasks at await points!
Between await points, code runs synchronously. If you don't await, you don't yield control.
The Solution: Explicit Yielding
For CPU-heavy work that needs to play nice with other tasks:
let compute_heavy = async {
for i in 0..1_000_000 {
// Do some work
heavy_computation();
// Periodically yield control
if i % 1000 == 0 {
trpl::yield_now().await;
}
}
};
yield_now() is an async function whose only job is to pause: awaiting it suspends the current future for one poll, giving the runtime a chance to switch to other tasks before resuming.
The Key Insight About Yield Points
Every .await is a potential pause point. The runtime looks at each await as an opportunity to:
- Check if other futures can make progress
- Switch to a different task if needed
- Come back later when this future is ready
If you never await, you never give the runtime this opportunity!
Part 13: Handling Errors in Async Code
Async code can fail just like sync code. Let's understand how errors work with async/await.
async Functions That Return Result
Async functions can return Result just like regular functions:
async fn fetch_user(id: u64) -> Result<User, FetchError> {
// Might succeed with a User
// Might fail with a FetchError
}
Using ? in Async Functions
The ? operator works in async functions exactly like in sync functions:
async fn process_user(id: u64) -> Result<String, FetchError> {
let user = fetch_user(id).await?; // If error, return early
let profile = fetch_profile(&user).await?; // If error, return early
Ok(format!("User {} has profile {}", user.name, profile.bio))
}
The ? after .await is applied to the Result that the Future produces, not to the Future itself.
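In other words, fetch_user(id).await? is roughly shorthand for this two-step form (a sketch reusing the hypothetical fetch_user, User, and FetchError from above):

async fn process_user(id: u64) -> Result<String, FetchError> {
    // Step 1: .await resolves the future into a Result<User, FetchError>
    let outcome = fetch_user(id).await;
    // Step 2: ? unwraps the Ok value or returns early with the Err
    let user = match outcome {
        Ok(user) => user,
        Err(e) => return Err(e),
    };
    Ok(user.name)
}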
Error Handling with join
Here's something important: join always waits for ALL Futures to complete, even if some fail.
async fn fetch_all() {
let (result1, result2) = trpl::join(
might_fail_1(), // Returns Result<A, Error>
might_fail_2() // Returns Result<B, Error>
).await;
// result1 and result2 are both Results
// We need to handle them individually
match (result1, result2) {
(Ok(a), Ok(b)) => println!("Both succeeded!"),
(Err(e), Ok(_)) => println!("First failed: {:?}", e),
(Ok(_), Err(e)) => println!("Second failed: {:?}", e),
(Err(e1), Err(e2)) => println!("Both failed!"),
}
}
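If you'd rather propagate the first failure than match on every combination, you can apply ? to each result after the join completes; a sketch, reusing the hypothetical might_fail functions and placeholder types A, B, and Error:

async fn fetch_all_or_fail() -> Result<(A, B), Error> {
    // join still waits for BOTH futures, even if one of them fails early...
    let (result1, result2) = trpl::join(might_fail_1(), might_fail_2()).await;
    // ...but afterwards we can bail out with ?
    Ok((result1?, result2?))
}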
Part 14: Async Blocks
Besides async functions, you can create async blocks: anonymous chunks of async code.
The Syntax
let my_future = async {
// async code here
do_something().await;
42 // The "return value" of this async block
};
An async block creates a Future that you can store in a variable, pass to functions, or await later.
When Async Blocks Are Useful
Capturing local variables:
async fn process(data: Vec<String>) {
let processed = async {
// This block captures `data` from the outer scope
let mut results = Vec::new();
for item in &data {
results.push(transform(item).await);
}
results
};
let results = processed.await;
}
Creating Futures to pass to join or select:
use std::time::Duration;

fn main() {
trpl::block_on(async {
let future1 = async {
trpl::sleep(Duration::from_secs(1)).await;
"first"
};
let future2 = async {
trpl::sleep(Duration::from_secs(2)).await;
"second"
};
let (r1, r2) = trpl::join(future1, future2).await;
});
}
Part 15: Streams: Async Iterators
In synchronous Rust, you have iterators that give you values one at a time. In async Rust, you have streams: the async equivalent.
The Concept
An iterator gives you values synchronously:
for item in vec![1, 2, 3] {
process(item);
}
A stream gives you values asynchronously: each value might take time to arrive:
while let Some(item) = stream.next().await {
process(item);
}
Real-World Examples of Streams
Streams are natural for:
- Reading lines from a network connection (each line arrives separately)
- Receiving messages from a chat server
- Processing events from a message queue
- Reading chunks of a file asynchronously
Why Not Just Return a Vec?
You might wonder: why not just await a Vec of all items?
// Option 1: Get everything at once
let all_messages: Vec<Message> = get_all_messages().await;
// Option 2: Stream them one by one
while let Some(message) = message_stream.next().await {
process(message);
}
Streams are better when:
- There are too many items to fit in memory
- Items arrive over time (like a live feed)
- You want to start processing before all items arrive
- The sequence might be infinite
Creating Streams from Iterators
The simplest way to get a stream is from an existing iterator:
use trpl::StreamExt;
fn main() {
trpl::block_on(async {
let data = vec![1, 2, 3, 4, 5];
let mut stream = trpl::stream_from_iter(data);
while let Some(num) = stream.next().await {
println!("Got: {}", num);
}
});
}
The StreamExt Trait
To get the next() method (and other helpful methods), you need StreamExt:
use trpl::StreamExt;
This is similar to how Iterator has methods like map, filter, etc.; StreamExt provides async-aware versions:
use trpl::StreamExt;
fn main() {
trpl::block_on(async {
let numbers = trpl::stream_from_iter(vec![1, 2, 3, 4, 5, 6]);
let mut doubled_evens = numbers
.filter(|n| n % 2 == 0) // Keep even numbers
.map(|n| n * 2); // Double them
while let Some(num) = doubled_evens.next().await {
println!("{}", num); // 4, 8, 12
}
});
}
Streams from Channels
Async channels naturally produce streams:
use std::time::Duration;

fn main() {
trpl::block_on(async {
let (tx, rx) = trpl::channel();
// Sender
let sender = async move {
for i in 1..=5 {
tx.send(i).unwrap();
trpl::sleep(Duration::from_millis(100)).await;
}
};
// Receiver - rx is like a stream!
let receiver = async {
let mut rx = rx;
while let Some(num) = rx.recv().await {
println!("Received: {}", num);
}
};
trpl::join(sender, receiver).await;
});
}
The receiver channel (rx) gives you values as they arrive, one at a time: that's a stream!
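If you want to use stream combinators (map, filter, and friends) on a receiver, you wrap it in an adapter that implements Stream. The book's trpl crate exposes ReceiverStream for this; a sketch assuming that re-export:

use trpl::{ReceiverStream, StreamExt};

fn main() {
    trpl::block_on(async {
        let (tx, rx) = trpl::channel();
        for i in 1..=3 {
            tx.send(i).unwrap();
        }
        drop(tx); // close the channel so the stream ends

        // Wrap the receiver so StreamExt combinators apply:
        let mut stream = ReceiverStream::new(rx).map(|n| n * 10);
        while let Some(n) = stream.next().await {
            println!("{n}"); // 10, 20, 30
        }
    });
}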
Merging Streams
What if you have multiple streams and want to process items from any of them as they arrive?
use trpl::StreamExt;
fn main() {
trpl::block_on(async {
let stream1 = trpl::stream_from_iter(vec![1, 2, 3]);
let stream2 = trpl::stream_from_iter(vec![10, 20, 30]);
let mut merged = stream1.merge(stream2);
while let Some(num) = merged.next().await {
println!("Got: {}", num);
}
});
}
Items from both streams are interleaved as they become available.
Part 16: Pin and Unpin: Making Self-Referential Futures Safe
This is the trickiest part of async Rust. Let me explain it step by step.
Why Pin Exists: Self-Referential Structs
When Rust compiles an async block, it creates a struct (a state machine) that holds all the local variables needed to resume execution. Consider:
async fn example() {
let data = String::from("hello");
let slice = &data[0..2]; // slice references data
some_async_thing().await;
println!("{}", slice); // use slice after await
}
The compiler creates something like:
struct ExampleFuture {
data: String,
slice: /* reference to data */, // Points INTO this same struct!
state: State,
}
This is a self-referential struct: slice points to data which is in the same struct.
The Problem With Moving
Normally in Rust, you can move values freely. But what happens if we move this struct?
Before move: After move:
┌─────────────┐ ┌─────────────┐
│ data: "hi" │◄─┐ │ data: "hi" │
│ slice: ─────┼──┘ │ slice: ─────┼──┐
└─────────────┘ └─────────────┘ │
Address: 100 Address: 200 │
│
┌─────────────┘
▼
Address: 100 ← INVALID!
After the move, slice still points to address 100, but data is now at address 200. slice is a dangling pointer!
Pin to the Rescue
Pin is a wrapper that prevents moving. Once a value is pinned, it cannot be moved to a different memory address.
use std::pin::Pin;
let future = async { /* ... */ };
let pinned: Pin<Box<...>> = Box::pin(future);
// The future inside cannot be moved anymore
// Its internal references are safe
The Unpin Trait
Here's the twist: most types in Rust don't have self-references. For them, moving is perfectly safe. These types implement Unpin.
- `String`, `Vec`, `i32`, etc.: all `Unpin`
- Most async futures: `!Unpin` (NOT Unpin)
If a type is Unpin, pinning it has no effect: you can still move it. Pin only restricts movement for !Unpin types.
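A small illustration of that escape hatch: Pin::new is only available when the target type is Unpin, and for such types the pin changes nothing. A sketch using only std:

use std::pin::Pin;

fn main() {
    let mut n = 42_i32; // i32 is Unpin: it can't contain self-references
    let mut pinned = Pin::new(&mut n); // compiles only because i32: Unpin
    *pinned = 43; // we can still mutate through the Pin
    println!("{n}"); // 43; n was never actually restricted
}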
Why You Usually Don't Need to Think About This
The good news: most of the time, you don't need to worry about pinning!
The async/await machinery handles it for you. When you write:
my_future.await
Rust takes care of pinning internally.
When You DO Encounter Pin
You might see Pin in:
- Advanced async patterns
- When storing Futures in structs
- Some library APIs that require `Pin<&mut Future>`
Scenario 1: Storing futures in a collection
// Won't compile without Pin!
let futures: Vec<Pin<Box<dyn Future<Output = ()>>>> = vec![
Box::pin(async { /* ... */ }),
Box::pin(async { /* ... */ }),
];
trpl::join_all(futures).await;
Scenario 2: Returning futures from functions
fn make_future() -> Pin<Box<dyn Future<Output = i32>>> {
Box::pin(async {
42
})
}
The Practical Takeaway
- Most of the time: `.await` handles pinning for you. Don't worry about it.
- When you need dynamic futures: Use `Box::pin()` to create `Pin<Box<dyn Future<...>>>`.
- If you see Pin errors: The compiler is telling you a future might be moved unsafely. `Box::pin()` usually fixes it.
- Advanced cases: If you're implementing `Future` manually or doing complex async patterns, you'll need to understand Pin deeply. But that's beyond beginner scope.
Part 17: Futures, Tasks, and Threads: The Full Picture
Let's understand how all these concepts relate to each other.
The Hierarchy
┌─────────────────────────────────────────┐
│ THREADS │
│ (OS-managed, true parallelism) │
│ │
│ ┌─────────────────────────────────┐ │
│ │ TASKS │ │
│ │ (Runtime-managed, concurrent) │ │
│ │ │ │
│ │ ┌─────────────────────────┐ │ │
│ │ │ FUTURES │ │ │
│ │ │ (Individual async ops) │ │ │
│ │ └─────────────────────────┘ │ │
│ └─────────────────────────────────┘ │
└─────────────────────────────────────────┘
- Futures: The smallest unit. A single async operation.
- Tasks: A runtime concept. A task drives one or more futures.
- Threads: OS-level parallelism. A runtime might use multiple threads to run tasks in parallel.
How They Work Together
A typical async runtime:
- Has a thread pool (say, 4 threads for 4 CPU cores)
- Manages many tasks (could be thousands)
- Each task contains futures
- Tasks are distributed across threads
- When a future awaits, the runtime can switch to another task on that thread
When to Use Each
Use futures (within a single task) when:
- Operations are closely related
- You need to coordinate them (join, select)
- Simpler code structure
Use tasks (spawn_task) when:
- Operations are independent
- One shouldn't block the other
- Different lifetimes
- You want true concurrency without join
Use threads when:
- CPU-bound work
- Need to isolate blocking operations
- True parallelism is required
Combining Them
Real programs often combine all three:
use std::thread;
fn main() {
// Spawn a thread for CPU-heavy work
let compute_handle = thread::spawn(|| {
expensive_computation()
});
// Async runtime for I/O work
trpl::block_on(async {
// Spawn tasks for independent async work
let task1 = trpl::spawn_task(async {
fetch_from_api_a().await
});
let task2 = trpl::spawn_task(async {
fetch_from_api_b().await
});
// Join futures within this task
let (result1, result2) = trpl::join(
task1,
task2
).await;
// Wait for the CPU thread
let compute_result = compute_handle.join().unwrap();
combine_results(result1.unwrap(), result2.unwrap(), compute_result)
});
}
The Key Insight
Async doesn't replace threads; they complement each other.
- Async excels at: Many I/O operations, efficient resource usage
- Threads excel at: CPU parallelism, blocking operations
- Use both when your program has both needs
Part 18: When NOT to Use Async
Async isn't always the answer. Let's be clear about when to avoid it.
CPU-Bound Work
Async doesn't help with CPU-bound work. If you're doing calculations, you're not waiting; you're working. Async is about efficient waiting.
// DON'T do this
async fn calculate_fibonacci(n: u64) -> u64 {
// No await points here!
// This is pure CPU work
// Async adds overhead with no benefit
fib(n)
}
For CPU-bound work, use threads or keep it synchronous.
Simple, One-Off Operations
If you're just doing one thing and waiting for it, async adds complexity without benefit:
// Overkill for a simple script
fn main() {
trpl::block_on(async {
let contents = async_read_file("file.txt").await;
println!("{}", contents);
});
}
// Simpler, just use sync
fn main() {
let contents = std::fs::read_to_string("file.txt").unwrap();
println!("{}", contents);
}
When Simplicity Wins
Async code is more complex than sync code. It has:
- More concepts to understand
- More ways to make mistakes
- Harder debugging
If you're not handling many concurrent operations, sync code is simpler and that simplicity has value.
Extra: Tokio in the Real World
The trpl Crate vs Real-World Code
The Rust Book uses trpl to simplify learning. But in production code, you'll typically use Tokio directly.
Here's the relationship:
| In the Book (trpl) | In Real Code (Tokio) |
|---|---|
| `trpl::block_on(async { ... })` | `#[tokio::main]` attribute |
| `trpl::spawn_task()` | `tokio::spawn()` |
| `trpl::sleep()` | `tokio::time::sleep()` |
| `trpl::join()` | `tokio::join!` macro |
| `trpl::select()` | `tokio::select!` macro |
| `trpl::channel()` | `tokio::sync::mpsc::channel()` |
Setting Up Tokio
In your Cargo.toml:
[dependencies]
tokio = { version = "1", features = ["full"] }
The #[tokio::main] Attribute
Instead of block_on, Tokio provides a convenient attribute:
// What you write:
#[tokio::main]
async fn main() {
let data = fetch_data().await;
println!("{}", data);
}
// What the compiler transforms it into (roughly):
fn main() {
tokio::runtime::Runtime::new()
.unwrap()
.block_on(async {
let data = fetch_data().await;
println!("{}", data);
})
}
The #[tokio::main] attribute is just syntactic sugar that:
- Creates a Tokio runtime
- Calls `block_on` with your async main function
- Handles all the boilerplate for you
A Real Tokio Example
use tokio::time::{sleep, Duration};
#[tokio::main]
async fn main() {
let task1 = tokio::spawn(async {
sleep(Duration::from_secs(1)).await;
println!("Task 1 done");
42
});
let task2 = tokio::spawn(async {
sleep(Duration::from_millis(500)).await;
println!("Task 2 done");
"hello"
});
// Wait for both
let (result1, result2) = tokio::join!(task1, task2);
println!("Results: {:?}, {:?}", result1.unwrap(), result2.unwrap());
}
Why the Book Uses trpl Instead
The trpl crate:
- Provides simpler, more consistent APIs for learning
- Hides some of Tokio's complexity (like runtime configuration)
- Re-exports things from multiple crates (`tokio`, `futures`) in one place
- Keeps you focused on async concepts rather than library-specific details
Once you understand async with trpl, switching to Tokio is straightforward — the concepts are identical, just the function names differ slightly.
Summary: The Complete Mental Model
Let me tie everything together:
Futures are lazy recipes for producing values. They do nothing until polled.
Async blocks create futures. Code inside runs sequentially between await points.
.await polls a future and yields control if not ready. It's the cooperation point.
Runtimes (trpl::block_on) drive futures by polling them repeatedly until done.
join runs multiple futures concurrently, completing when ALL finish.
select runs multiple futures concurrently, completing when ANY ONE finishes (returning Either).
spawn_task creates an independent task that runs separately from the current task.
join_all handles dynamic numbers of futures (needs Pin<Box<...>> for trait objects).
async move takes ownership of captured variables (important for channels and lifetimes).
Streams are async iterators: values arrive over time, processed with while let loops.
Pin prevents futures from moving in memory, making self-referential structs safe.
Tasks, futures, and threads are complementary: futures for individual operations, tasks for independent concurrent work, threads for CPU parallelism.
The key insight: Async is about efficient waiting. When your program spends more time waiting than working, async lets you do more with less resources. But concurrency only happens when you explicitly create multiple futures and join/select them; a single chain of awaits is still sequential!
This material is foundational for participating in the Rust ecosystem: much of modern Rust, from web servers to network services, is built on these async primitives. Once you're comfortable with them, you'll be ready to read and write the async code you'll meet in real projects.
Async and Await Exercises
Here are exercises to build your async muscle memory. They progress from basic concepts to more complex patterns.
Exercise 1: Your First Async Function
Create an async function called greet that takes a name: &str parameter and returns a String with the greeting "Hello, {name}!".
In main, use trpl::block_on to run an async block that:
- Calls `greet` with your name
- Prints the result
This exercise just gets you comfortable with the basic syntax.
Exercise 2: Sequential Awaits
Create three async functions:
- `step_one()` → prints "Step 1 starting", sleeps for 500ms, prints "Step 1 done", returns `1`
- `step_two()` → prints "Step 2 starting", sleeps for 300ms, prints "Step 2 done", returns `2`
- `step_three()` → prints "Step 3 starting", sleeps for 200ms, prints "Step 3 done", returns `3`
In your async main block, call all three sequentially (one after another) and print the sum of their results.
Question to answer: How long does the total execution take? Why?
Hint: Use trpl::sleep(Duration::from_millis(500)) for sleeping.
Exercise 3: Concurrent with join
Using the same three functions from Exercise 2, modify your main to run all three concurrently using trpl::join!.
Print the sum of results just like before.
Question to answer: How long does the total execution take now? Why is it different from Exercise 2?
Exercise 4: Understanding Sequential Within Async Blocks
First, a warm-up. Look at this code and predict: do the messages print at 200ms intervals, or all at once at the end?
use std::time::Duration;

fn main() {
trpl::block_on(async {
let messages = vec!["First", "Second", "Third", "Fourth"];
for msg in messages {
trpl::sleep(Duration::from_millis(200)).await;
println!("{}", msg);
}
println!("---");
println!("Now reading messages back:");
// Programmer expected interleaved output but got all at once
});
}
Answer: it prints at 200ms intervals. Each iteration sleeps and then prints, so there's nothing to batch up.
Now for the real buggy scenario. Here's code where the programmer wants sending and receiving to happen concurrently, but it doesn't:
use std::time::Duration;

fn main() {
trpl::block_on(async {
let (tx, mut rx) = trpl::channel();
let messages = vec!["First", "Second", "Third", "Fourth"];
for msg in messages {
tx.send(msg).unwrap();
trpl::sleep(Duration::from_millis(200)).await;
}
while let Some(msg) = rx.recv().await {
println!("Received: {}", msg);
}
});
}
- Explain why all messages are received at once (after all sends complete) rather than interleaved
- Fix the code so sending and receiving happen concurrently (messages print as they're sent)
- Make sure the program terminates properly (hint: you'll need `async move`)
Exercise 5: Racing with select
Create an async function slow_server() that sleeps for 3 seconds then returns "Response from slow server".
Create an async function fast_server() that sleeps for 1 second then returns "Response from fast server".
In your main:
- Race both servers using `trpl::select`
- Print which server responded and what the response was
- Use the `Either` type to handle both cases
Expected output: The fast server should win, and you should see output after ~1 second, not 3.
Exercise 6: Implementing a Timeout
Create an async function unreliable_fetch() that sleeps for a random-ish duration (use 4 seconds to simulate a slow response) and returns "Data retrieved!".
Write a function fetch_with_timeout that:
- Takes a duration as the timeout limit
- Races `unreliable_fetch()` against `trpl::sleep(timeout)`
- Returns `Ok(String)` if the fetch completes in time
- Returns `Err("Timeout!")` if the timeout wins
Test it with a 2-second timeout (should fail) and a 5-second timeout (should succeed).
Exercise 7: spawn_task and Join Handles
Create a program that:
- Spawns a task that counts from 1 to 10, printing each number with a 300ms delay
- In the main task, counts from 1 to 5, printing each number with a 500ms delay
- Waits for the spawned task to complete before exiting
You should see interleaved output from both counters. The spawned task should complete all 10 counts (not get cut off).
Hint: Save the join handle and .await it at the end.
Exercise 8: Multiple Producers
Create a channel and spawn three separate tasks that each send messages:
- Task 1: sends "A1", "A2", "A3" with 100ms delays
- Task 2: sends "B1", "B2", "B3" with 150ms delays
- Task 3: sends "C1", "C2", "C3" with 200ms delays
In the main task, receive and print all messages as they arrive.
Make sure the program terminates properly (all senders must be dropped).
Hint: You'll need to clone the sender for each task, and use async move blocks.
Exercise 9: Working with Streams
Create a stream from the iterator 1..=10.
Use stream combinators to:
- Filter to keep only numbers divisible by 3
- Map each number to its square
Consume the stream with a while let loop, printing each value.
Expected output: 9, 36, 81 (which are 3², 6², 9²)
Hint: You'll need use trpl::StreamExt;
Exercise 10: Dynamic Futures with join_all
Write a function fetch_page(id: u32) that:
- Sleeps for `(id * 100)` milliseconds (so page 1 takes 100ms, page 2 takes 200ms, etc.)
- Returns a string like `"Content of page {id}"`
In main:
- Create a `Vec` of page IDs: `vec![1, 2, 3, 4, 5]`
- Map over this vec to create a vec of futures
- Use `trpl::join_all` to await all of them concurrently
- Print all the results
Question to answer: If run sequentially, this would take 100+200+300+400+500 = 1500ms. How long does it take with join_all? Why?
Exercise 11: The Starvation Problem
This code has a starvation problem:
use std::time::Duration;

fn main() {
trpl::block_on(async {
let compute = async {
let mut sum: u64 = 0;
for i in 0..1_000_000 {
sum += i;
}
sum
};
let printer = async {
for i in 1..=5 {
println!("Printer: {}", i);
trpl::sleep(Duration::from_millis(10)).await;
}
};
let (result, _) = trpl::join(compute, printer).await;
println!("Sum: {}", result);
});
}
- Run this code and observe: does the printer output interleave with the computation, or does all printing happen after the sum is computed?
- Explain why this happens
- Fix the `compute` future so that `printer` gets chances to run during the computation
Hint: Use trpl::yield_now() periodically.
Exercise 12: Combining Async and Threads
Create a program that:
- Spawns an OS thread (using `std::thread::spawn`) that does CPU-heavy work: compute the sum of 1 to 10_000_000
- Wait for both the thread and the async operations to complete
- Print all results
This exercises the pattern of using threads for CPU work and async for I/O work.
Hint:
- The thread returns a `JoinHandle` you can `.join()` on
- Do your async work inside `trpl::block_on`
- You'll need to coordinate getting the thread result after the async block
Exercise 13: Building an Async Timeout Wrapper (Challenge)
Create a generic timeout wrapper function:
async fn with_timeout<T>(
future: impl Future<Output = T>,
timeout_ms: u64
) -> Result<T, &'static str>
This function should:
- Run the given future
- If it completes within `timeout_ms` milliseconds, return `Ok(result)`
- If the timeout expires first, return `Err("Operation timed out")`
Test it with:
- A fast operation (100ms) with a 500ms timeout → should succeed
- A slow operation (1000ms) with a 200ms timeout → should fail
Hint: Use trpl::select internally.
Exercise 14: Message Processor with Graceful Shutdown (Challenge)
Build a message processing system:
- Create a channel for sending "jobs" (just strings)
- Spawn a worker task that:
- Receives jobs from the channel
- "Processes" each job by sleeping 100ms and printing "Processed: {job}"
- Exits cleanly when the channel closes
- In the main task:
- Send 5 jobs: "job1" through "job5"
- After sending all jobs, close the channel (drop the sender)
- Wait for the worker to finish
- Print "All jobs completed!"
The key learning: how channel closure signals the worker to exit.
Good luck! These exercises build on each other, so if you get stuck on a later one, make sure you fully understand the earlier concepts. The most important things to internalize are:
- Code in a single async block runs sequentially
- Concurrency requires multiple futures being driven together (join, select, spawn_task)
- `async move` transfers ownership into the async block
- Channels close when all senders are dropped