Commit 5d8b694

docs: More task tutorial
1 parent 39f114d commit 5d8b694

1 file changed: doc/tutorial-tasks.md (+176, -49 lines)

@@ -2,12 +2,16 @@

# Introduction

-Rust supports concurrency and parallelism through lightweight tasks.
-Rust tasks are significantly cheaper to create than traditional
-threads, with a typical 32-bit system able to run hundreds of
-thousands simultaneously. Tasks in Rust are what are often referred to
-as _green threads_, cooperatively scheduled by the Rust runtime onto a
-small number of operating system threads.
+The Rust language is designed from the ground up to support pervasive
+and safe concurrency through lightweight, memory-isolated tasks and
+message passing.
+
+Rust tasks are not the same as traditional threads - they are what are
+often referred to as _green threads_, cooperatively scheduled by the
+Rust runtime onto a small number of operating system threads. Because
+tasks are significantly cheaper to create than traditional threads,
+Rust can create hundreds of thousands of concurrent tasks on a typical
+32-bit system.

Tasks provide failure isolation and recovery. When an exception occurs
in rust code (either by calling `fail` explicitly or by otherwise performing
@@ -16,11 +20,11 @@ to `catch` an exception as in other languages. Instead tasks may monitor
each other to detect when failure has occurred.

Rust tasks have dynamically sized stacks. When a task is first created
-it starts off with a small amount of stack (in the hundreds to
-low thousands of bytes, depending on plattform), and more stack is
-added as needed. A Rust task will never run off the end of the stack as
-is possible in many other languages, but they do have a stack budget,
-and if a Rust task exceeds its stack budget then it will fail safely.
+it starts off with a small amount of stack (currently in the low
+thousands of bytes, depending on platform) and more stack is acquired as
+needed. Rust tasks will never run off the end of the stack as is
+possible in many other languages, but they do have a stack budget, and
+if a task exceeds its stack budget then it will fail safely.

Tasks make use of Rust's type system to provide strong memory safety
guarantees, disallowing shared mutable state. Communication between
@@ -32,12 +36,12 @@ explore some typical patterns in concurrent Rust code, and finally
discuss some of the more exotic synchronization types in the standard
library.

-# A note about the libraries
+## A note about the libraries

While Rust's type system provides the building blocks needed for safe
and efficient tasks, all of the task functionality itself is implemented
in the core and standard libraries, which are still under development
-and do not always present a nice programming interface.
+and do not always present a consistent interface.

In particular, there are currently two independent modules that provide
a message passing interface to Rust code: `core::comm` and `core::pipes`.
@@ -66,43 +70,96 @@ concurrency at the moment.
[`std::arc`]: std/arc.html
[`std::par`]: std/par.html

-# Spawning a task
+# Basics

-Spawning a task is done using the various spawn functions in the
-module `task`. Let's begin with the simplest one, `task::spawn()`:
+The programming interface for creating and managing tasks is contained
+in the `task` module of the `core` library, making it available to all
+Rust code by default. At its simplest, creating a task is a matter of
+calling the `spawn` function, passing a closure to run in the new
+task.

~~~~
+# use io::println;
use task::spawn;
-use io::println;

-let some_value = 22;
+// Print something profound in a different task using a named function
+fn print_message() { println("I am running in a different task!"); }
+spawn(print_message);
+
+// Print something more profound in a different task using a lambda expression
+spawn( || println("I am also running in a different task!") );

+// The canonical way to spawn is using `do` notation
do spawn {
-    println(~"This executes in the child task.");
-    println(fmt!("%d", some_value));
+    println("I too am running in a different task!");
}
~~~~

-The argument to `task::spawn()` is a [unique
-closure](#unique-closures) of type `fn~()`, meaning that it takes no
-arguments and generates no return value. The effect of `task::spawn()`
-is to fire up a child task that will execute the closure in parallel
-with the creator.
+In Rust, there is nothing special about creating tasks - the language
+itself doesn't know what a 'task' is. Instead, Rust provides in the
+type system all the tools necessary to implement safe concurrency,
+_owned types_ in particular, and leaves the dirty work up to the
+core library.
+
+The `spawn` function has a very simple type signature: `fn spawn(f:
+~fn())`. Because it accepts only owned closures, and owned closures
+contain only owned data, `spawn` can safely move the entire closure
+and all its associated state into an entirely different task for
+execution. Like any closure, the function passed to `spawn` may capture
+an environment that it carries across tasks.
+
+~~~
+# use io::println;
+# use task::spawn;
+# fn generate_task_number() -> int { 0 }
+// Generate some state locally
+let child_task_number = generate_task_number();
+
+do spawn {
+    // Capture it in the remote task
+    println(fmt!("I am child number %d", child_task_number));
+}
+~~~
+
+By default tasks will be multiplexed across the available cores, running
+in parallel; thus, on a multicore machine, running the following code
+should interleave the output in a vaguely random order.

-# Communication
+~~~
+# use io::print;
+# use task::spawn;

-Now that we have spawned a child task, it would be nice if we could
-communicate with it. This is done using *pipes*. Pipes are simply a
-pair of endpoints, with one for sending messages and another for
-receiving messages. The easiest way to create a pipe is to use
-`pipes::stream`. Imagine we wish to perform two expensive
-computations in parallel. We might write something like:
+for int::range(0, 20) |child_task_number| {
+    do spawn {
+        print(fmt!("I am child number %d\n", child_task_number));
+    }
+}
+~~~
+
+## Communication
+
+Now that we have spawned a new task, it would be nice if we could
+communicate with it. Recall that Rust does not have shared mutable
+state, so one task may not manipulate variables owned by another task.
+Instead we use *pipes*.
+
+Pipes are simply a pair of endpoints, with one for sending messages
+and another for receiving messages. Pipes are low-level communication
+building-blocks and so come in a variety of forms, appropriate for
+different use cases, but there are just a few varieties that are most
+commonly used, which we will cover presently.
+
+The simplest way to create a pipe is to use the `pipes::stream`
+function to create a `(Chan, Port)` pair. In Rust parlance, a 'channel'
+is a sending endpoint of a pipe, and a 'port' is the receiving
+endpoint. Consider the following example of performing two calculations
+concurrently.

~~~~
use task::spawn;
use pipes::{stream, Port, Chan};

-let (chan, port) = stream();
+let (chan, port): (Chan<int>, Port<int>) = stream();

do spawn {
    let result = some_expensive_computation();
@@ -116,17 +173,19 @@ let result = port.recv();
# fn some_other_expensive_computation() {}
~~~~

-Let's walk through this code line-by-line. The first line creates a
-stream for sending and receiving integers:
+Let's examine this example in detail. The `let` statement first creates a
+stream for sending and receiving integers (recall that `let` can be
+used for destructuring patterns, in this case separating a tuple into
+its component parts).

-~~~~ {.ignore}
-# use pipes::stream;
-let (chan, port) = stream();
+~~~~
+# use pipes::{stream, Chan, Port};
+let (chan, port): (Chan<int>, Port<int>) = stream();
~~~~

-This port is where we will receive the message from the child task
-once it is complete. The channel will be used by the child to send a
-message to the port. The next statement actually spawns the child:
+The channel will be used by the child task to send data to the parent task,
+which will wait to receive the data on the port. The next statement
+spawns the child task.

~~~~
# use task::{spawn};
@@ -140,14 +199,15 @@ do spawn {
}
~~~~

-This child will perform the expensive computation send the result
-over the channel. (Under the hood, `chan` was captured by the
-closure that forms the body of the child task. This capture is
-allowed because channels are sendable.)
+Notice that `chan` was transferred to the child task implicitly by
+capturing it in the task closure. Both `Chan` and `Port` are sendable
+types and may be captured into tasks or otherwise transferred between
+them. In the example, the child task performs an expensive computation,
+then sends the result over the captured channel.

-Finally, the parent continues by performing
-some other expensive computation and then waiting for the child's result
-to arrive on the port:
+Finally, the parent continues by performing some other expensive
+computation and then waiting for the child's result to arrive on the
+port:

~~~~
# use pipes::{stream, Port, Chan};
@@ -158,7 +218,73 @@ some_other_expensive_computation();
let result = port.recv();
~~~~

-# Creating a task with a bi-directional communication path
+The `Port` and `Chan` pair created by `stream` enables efficient
+communication between a single sender and a single receiver, but
+multiple senders cannot use a single `Chan`, nor can multiple
+receivers use a single `Port`. What if our example needed to
+perform multiple computations across a number of tasks? In that
+case we might use a `SharedChan`, a type that allows a single
+`Chan` to be used by multiple senders.
+
+~~~
+# use task::spawn;
+use pipes::{stream, SharedChan};
+
+let (chan, port) = stream();
+let chan = SharedChan(move chan);
+
+for uint::range(0, 3) |init_val| {
+    // Create a new channel handle to distribute to the child task
+    let child_chan = chan.clone();
+    do spawn {
+        child_chan.send(some_expensive_computation(init_val));
+    }
+}
+
+let result = port.recv() + port.recv() + port.recv();
+# fn some_expensive_computation(_i: uint) -> int { 42 }
+~~~
+
+Here we transfer ownership of the channel into a new `SharedChan`
+value. Like `Chan`, `SharedChan` is a non-copyable, owned type
+(sometimes also referred to as an 'affine' or 'linear' type). Unlike
+`Chan` though, `SharedChan` may be duplicated with the `clone()`
+method. A cloned `SharedChan` produces a new handle to the same
+channel, allowing multiple tasks to send data to a single port.
+Between `spawn`, `stream` and `SharedChan` we have enough tools
+to implement many useful concurrency patterns.
+
+Note that the above `SharedChan` example is somewhat contrived since
+you could also simply use three `stream` pairs, but it serves to
+illustrate the point. For reference, written with multiple streams it
+might look like the example below.
+
+~~~
+# use task::spawn;
+# use pipes::{stream, Port, Chan};
+
+let ports = do vec::from_fn(3) |init_val| {
+    let (chan, port) = stream();
+
+    do spawn {
+        chan.send(some_expensive_computation(init_val));
+    }
+
+    port
+};
+
+// Wait on each port, accumulating the results
+let result = ports.foldl(0, |accum, port| *accum + port.recv() );
+# fn some_expensive_computation(_i: uint) -> int { 42 }
+~~~
+
+# Unfinished notes
+
+## Actor patterns
+
+## Linearity, option dancing, owned closures
+
+## Creating a task with a bi-directional communication path

A very common thing to do is to spawn a child task where the parent
and child both need to exchange messages with each other. The
@@ -227,3 +353,4 @@ assert from_child.recv() == ~"0";

The parent task first calls `DuplexStream` to create a pair of bidirectional endpoints. It then uses `task::spawn` to create the child task, which captures one end of the communication channel. As a result, both parent
and child can send and receive data to and from the other.
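
The bidirectional example itself falls outside this hunk. Purely as a rough sketch of the pattern the paragraph above describes - where the module path, endpoint names, and message types are assumptions, not taken from this commit - it might look something like:

~~~ {.ignore}
# use std::comm::DuplexStream;  // assumed location of DuplexStream
# use task::spawn;
// Each endpoint of a DuplexStream can both send and receive.
let (from_child, to_child) = DuplexStream();

do spawn {
    // Child: receive a number from the parent, reply with its string form
    let value: int = to_child.recv();
    to_child.send(fmt!("%d", value));
}

from_child.send(22);
assert from_child.recv() == ~"22";
~~~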