Commit 1508b6e

Add some doc examples to lib{green,native}

"How do I start in libX" is a common question that I've seen, so I figured putting the examples in as many places as possible is probably a good idea.
1 parent 66b9c35

File tree

3 files changed: +178 -7 lines

src/doc/guide-runtime.md
src/libgreen/lib.rs
src/libnative/lib.rs

src/doc/guide-runtime.md (+1 -3)
@@ -236,9 +236,7 @@ extern mod green;
 
 #[start]
 fn start(argc: int, argv: **u8) -> int {
-    green::start(argc, argv, proc() {
-        main();
-    })
+    green::start(argc, argv, main)
 }
 
 fn main() {}
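
Taken together with the hunk header above, the patched guide snippet boots the program on the green runtime by passing `main` to `green::start` directly instead of wrapping it in a `proc`. As a point of reference (not part of the diff itself), the resulting example reads roughly like this:

    extern mod green;

    #[start]
    fn start(argc: int, argv: **u8) -> int {
        // Hand control to libgreen: it sets up a pool of schedulers and then
        // runs `main` as a green task inside that pool.
        green::start(argc, argv, main)
    }

    fn main() {}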

src/libgreen/lib.rs (+149 -3)
@@ -12,10 +12,156 @@
 //!
 //! This library provides M:N threading for rust programs. Internally this has
 //! the implementation of a green scheduler along with context switching and a
-//! stack-allocation strategy.
+//! stack-allocation strategy. This can be optionally linked in to rust
+//! programs in order to provide M:N functionality inside of 1:1 programs.
 //!
-//! This can be optionally linked in to rust programs in order to provide M:N
-//! functionality inside of 1:1 programs.
+//! # Architecture
+//!
+//! An M:N scheduling library implies that there are N OS threads upon which M
+//! "green threads" are multiplexed. In other words, a set of green threads are
+//! all run inside a pool of OS threads.
+//!
+//! With this design, you can achieve _concurrency_ by spawning many green
+//! threads, and you can achieve _parallelism_ by running the green threads
+//! simultaneously on multiple OS threads. Each OS thread is a candidate for
+//! being scheduled on a different core (the source of parallelism), and then
+//! all of the green threads cooperatively schedule amongst one another (the
+//! source of concurrency).
+//!
+//! ## Schedulers
+//!
+//! In order to coordinate among green threads, each OS thread is primarily
+//! running something which we call a Scheduler. Whenever a reference to a
+//! Scheduler is made, it is synonymous with referencing one OS thread. Each
+//! scheduler is bound to exactly one OS thread, and the thread that it is
+//! bound to never changes.
+//!
+//! Each scheduler is connected to a pool of other schedulers (a `SchedPool`)
+//! which is the thread pool term from above. A pool of schedulers all share the
+//! work that they create. Furthermore, whenever a green thread is created (also
+//! synonymously referred to as a green task), it is associated with a
+//! `SchedPool` forevermore. A green thread cannot leave its scheduler pool.
+//!
+//! Schedulers can have at most one green thread running on them at a time. When
+//! a scheduler is asleep on its event loop, there are no green tasks running on
+//! the OS thread or the scheduler. The term "context switch" is used for when
+//! the running green thread is swapped out, but this simply changes the one
+//! green thread which is running on the scheduler.
+//!
+//! ## Green Threads
+//!
+//! A green thread can largely be summarized by a stack and a register context.
+//! Whenever a green thread is spawned, it allocates a stack, and then prepares
+//! a register context for execution. The green task may be executed across
+//! multiple OS threads, but it will always use the same stack and it will carry
+//! its register context across OS threads.
+//!
+//! Each green thread is cooperatively scheduled with other green threads.
+//! Primarily, this means that there is no pre-emption of a green thread. The
+//! major consequence of this design is that a green thread stuck in an infinite
+//! loop will prevent all other green threads from running on that particular
+//! scheduler.
+//!
+//! Scheduling events for green threads occur on communication and I/O
+//! boundaries. For example, if a green task blocks waiting for a message on a
+//! channel, some other green thread can now run on the scheduler. This also has
+//! the consequence that until a green thread performs any form of scheduling
+//! event, it will be running on the same OS thread (unconditionally).
+//!
+//! ## Work Stealing
+//!
+//! With a pool of schedulers, a new green task has a number of options when
+//! deciding where to run initially. The current implementation uses a concept
+//! called work stealing in order to spread out work among schedulers.
+//!
+//! In a work-stealing model, each scheduler maintains a local queue of tasks to
+//! run, and this queue is stolen from by other schedulers. Implementation-wise,
+//! work stealing has some hairy parts, but from a user perspective, work
+//! stealing simply implies that with M green threads and N schedulers, where
+//! M > N, it is very likely that all schedulers will be busy executing work.
+//!
+//! # Considerations when using libgreen
+//!
+//! An M:N runtime has both pros and cons, and there is no one answer as to
+//! whether M:N or 1:1 is appropriate to use. As always, there are many
+//! advantages and disadvantages between the two. Regardless of the workload,
+//! however, there are some aspects of using green threads which you should be
+//! aware of:
+//!
+//! * The largest concern when using libgreen is interoperating with native
+//!   code. Care should be taken when calling native code that will block the OS
+//!   thread, as it will prevent further green tasks from being scheduled on
+//!   that OS thread.
+//!
+//! * Native code using thread-local storage should be approached with care.
+//!   Green threads may migrate among OS threads at any time, so native
+//!   libraries using thread-local state may not always work.
+//!
+//! * Native synchronization primitives (e.g. pthread mutexes) will also not
+//!   work for green threads. This is because native primitives often operate
+//!   on an _OS thread_ granularity, whereas green threads are a more granular
+//!   unit of work.
+//!
+//! * A green threading runtime is not fork-safe. If the process forks(), it
+//!   cannot expect to make reasonable progress by continuing to use green
+//!   threads.
+//!
+//! Note that these concerns do not mean that operating with native code is a
+//! lost cause. These are simply concerns which should be considered when
+//! invoking native code.
+//!
+//! # Starting with libgreen
+//!
+//! ```rust
+//! extern mod green;
+//!
+//! #[start]
+//! fn start(argc: int, argv: **u8) -> int { green::start(argc, argv, main) }
+//!
+//! fn main() {
+//!     // this code is running in a pool of schedulers
+//! }
+//! ```
+//!
+//! # Using a scheduler pool
+//!
+//! ```rust
+//! use std::task::TaskOpts;
+//! use green::{SchedPool, PoolConfig};
+//! use green::sched::{PinnedTask, TaskFromFriend};
+//!
+//! let config = PoolConfig::new();
+//! let mut pool = SchedPool::new(config);
+//!
+//! // Spawn tasks into the pool of schedulers
+//! pool.spawn(TaskOpts::new(), proc() {
+//!     // this code is running inside the pool of schedulers
+//!
+//!     spawn(proc() {
+//!         // this code is also running inside the same scheduler pool
+//!     });
+//! });
+//!
+//! // Dynamically add a new scheduler to the scheduler pool. This adds another
+//! // OS thread that green threads can be multiplexed on to.
+//! let mut handle = pool.spawn_sched();
+//!
+//! // Pin a task to the spawned scheduler
+//! let task = pool.task(TaskOpts::new(), proc() { /* ... */ });
+//! handle.send(PinnedTask(task));
+//!
+//! // Schedule a task on this new scheduler
+//! let task = pool.task(TaskOpts::new(), proc() { /* ... */ });
+//! handle.send(TaskFromFriend(task));
+//!
+//! // Handles keep schedulers alive, so be sure to drop all handles before
+//! // destroying the sched pool
+//! drop(handle);
+//!
+//! // Required to shut down this scheduler pool.
+//! // The task will fail if `shutdown` is not called.
+//! pool.shutdown();
+//! ```
 
 #[crate_id = "green#0.10-pre"];
 #[license = "MIT/ASL2"];
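
The new module docs explain that concurrency comes from spawning many green tasks and parallelism from multiplexing them over a pool of schedulers. As a rough illustration (not part of the commit, and assuming the prelude `spawn` and `range` functions of this era), the bootstrap example above can be combined with ordinary task spawns like so:

    extern mod green;

    #[start]
    fn start(argc: int, argv: **u8) -> int { green::start(argc, argv, main) }

    fn main() {
        // `main` itself runs as a green task. Each `spawn` below creates
        // another green task in the same scheduler pool; with more tasks than
        // OS threads, the work-stealing schedulers keep every thread busy.
        for _ in range(0, 8) {
            spawn(proc() {
                // Cooperatively scheduled: this task keeps its OS thread only
                // until it reaches a scheduling point (a channel operation,
                // I/O, or task exit).
            });
        }
    }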

src/libnative/lib.rs (+28 -1)
@@ -8,11 +8,38 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
 
-//! The native runtime crate
+//! The native I/O and threading crate
 //!
 //! This crate contains an implementation of 1:1 scheduling for a "native"
 //! runtime. In addition, all I/O provided by this crate is the thread blocking
 //! version of I/O.
+//!
+//! # Starting with libnative
+//!
+//! ```rust
+//! extern mod native;
+//!
+//! #[start]
+//! fn start(argc: int, argv: **u8) -> int { native::start(argc, argv, main) }
+//!
+//! fn main() {
+//!     // this code is running on the main OS thread
+//! }
+//! ```
+//!
+//! # Force spawning a native task
+//!
+//! ```rust
+//! extern mod native;
+//!
+//! fn main() {
+//!     // We're not sure whether this main function is run in 1:1 or M:N mode.
+//!
+//!     native::task::spawn(proc() {
+//!         // this code is guaranteed to be run on a native thread
+//!     });
+//! }
+//! ```
 
 #[crate_id = "native#0.10-pre"];
 #[license = "MIT/ASL2"];
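
The two examples in this file can also be combined: because libnative schedules 1:1, every task is its own OS thread, so a blocking call only ever stalls that one thread. A small sketch (not part of the commit) stitching the bootstrap and `native::task::spawn` examples together:

    extern mod native;

    #[start]
    fn start(argc: int, argv: **u8) -> int { native::start(argc, argv, main) }

    fn main() {
        // Under the native runtime every task is a plain OS thread, so a
        // blocking call here only blocks this thread.
        native::task::spawn(proc() {
            // Guaranteed to run on its own native (OS) thread, regardless of
            // which runtime ends up driving the rest of the program.
        });
    }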
