@@ -16,12 +16,10 @@ fn return_two_test() {
}
~~~

- To run these tests, compile with `rustc --test` and run the resulting
- binary:
+ To run these tests, use `rustc --test`:

~~~ {.notrust}
- $ rustc --test foo.rs
- $ ./foo
+ $ rustc --test foo.rs; ./foo
running 1 test
test return_two_test ... ok

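For reference (an editorial sketch, not part of this diff): the test file `foo.rs` invoked above would contain something like the following, reconstructing the `return_two_test` shown in the hunk header; the `return_two` helper is assumed from the surrounding tutorial text.

~~~
// The function under test.
fn return_two() -> int {
    2
}

// A unit test: it passes as long as it returns without failing.
#[test]
fn return_two_test() {
    let x = return_two();
    assert!(x == 2);
}
~~~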
@@ -49,8 +47,8 @@ value. To run the tests in a crate, it must be compiled with the
`--test` flag: `rustc myprogram.rs --test -o myprogram-tests`. Running
the resulting executable will run all the tests in the crate. A test
is considered successful if its function returns; if the task running
- the test fails, through a call to `fail!`, a failed `assert`, or some
- other (`assert_eq`, ...) means, then the test fails.
+ the test fails, through a call to `fail!`, a failed `check` or
+ `assert`, or some other (`assert_eq`, ...) means, then the test fails.

When compiling a crate with the `--test` flag `--cfg test` is also
implied, so that tests can be conditionally compiled.
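As an editorial illustration of the failure modes and the implied `--cfg test` described above (a sketch, not part of this commit; the names are invented):

~~~
// Each test below fails by a different means.
#[test]
fn fails_via_fail() {
    fail!("explicit failure");
}

#[test]
fn fails_via_assert() {
    assert!(1 == 2);
}

// This module is only compiled when --cfg test is in effect,
// i.e. when the crate is built with rustc --test.
#[cfg(test)]
mod test_helpers {
    pub fn fixture() -> int { 42 }
}
~~~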
@@ -102,63 +100,7 @@ failure output difficult. In these cases you can set the
`RUST_TEST_TASKS` environment variable to 1 to make the tests run
sequentially.

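For instance (an illustrative shell invocation, assuming a POSIX shell and a test binary named `mytests` as in the examples in this guide):

~~~ {.notrust}
$ RUST_TEST_TASKS=1 ./mytests
~~~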
- ## Examples
-
- ### Typical test run
-
- ~~~ {.notrust}
- $ mytests
-
- running 30 tests
- running driver::tests::mytest1 ... ok
- running driver::tests::mytest2 ... ignored
- ... snip ...
- running driver::tests::mytest30 ... ok
-
- result: ok. 28 passed; 0 failed; 2 ignored
- ~~~
-
- ### Test run with failures
-
- ~~~ {.notrust}
- $ mytests
-
- running 30 tests
- running driver::tests::mytest1 ... ok
- running driver::tests::mytest2 ... ignored
- ... snip ...
- running driver::tests::mytest30 ... FAILED
-
- result: FAILED. 27 passed; 1 failed; 2 ignored
- ~~~
-
- ### Running ignored tests
-
- ~~~ {.notrust}
- $ mytests --ignored
-
- running 2 tests
- running driver::tests::mytest2 ... failed
- running driver::tests::mytest10 ... ok
-
- result: FAILED. 1 passed; 1 failed; 0 ignored
- ~~~
-
- ### Running a subset of tests
-
- ~~~ {.notrust}
- $ mytests mytest1
-
- running 11 tests
- running driver::tests::mytest1 ... ok
- running driver::tests::mytest10 ... ignored
- ... snip ...
- running driver::tests::mytest19 ... ok
-
- result: ok. 11 passed; 0 failed; 1 ignored
- ~~~
-
- # Microbenchmarking
+ ## Benchmarking

The test runner also understands a simple form of benchmark execution.
Benchmark functions are marked with the `#[bench]` attribute, rather
@@ -169,12 +111,11 @@ component of your testsuite, pass `--bench` to the compiled test
runner.

The type signature of a benchmark function differs from a unit test:
- it takes a mutable reference to type
- `extra::test::BenchHarness`. Inside the benchmark function, any
- time-variable or "setup" code should execute first, followed by a call
- to `iter` on the benchmark harness, passing a closure that contains
- the portion of the benchmark you wish to actually measure the
- per-iteration speed of.
+ it takes a mutable reference to type `test::BenchHarness`. Inside the
+ benchmark function, any time-variable or "setup" code should execute
+ first, followed by a call to `iter` on the benchmark harness, passing
+ a closure that contains the portion of the benchmark you wish to
+ actually measure the per-iteration speed of.

For benchmarks relating to processing/generating data, one can set the
`bytes` field to the number of bytes consumed/produced in each
@@ -187,16 +128,15 @@ For example:
~~~
extern mod extra;
use std::vec;
- use extra::test::BenchHarness;

#[bench]
- fn bench_sum_1024_ints(b: &mut BenchHarness) {
+ fn bench_sum_1024_ints(b: &mut extra::test::BenchHarness) {
    let v = vec::from_fn(1024, |n| n);
    b.iter(|| {v.iter().fold(0, |old, new| old + *new);} );
}

#[bench]
- fn initialise_a_vector(b: &mut BenchHarness) {
+ fn initialise_a_vector(b: &mut extra::test::BenchHarness) {
    b.iter(|| {vec::from_elem(1024, 0u64);} );
    b.bytes = 1024 * 8;
}
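To build and run this example as a benchmark suite, one would compile with optimizations and `--test`, then pass `--bench` to the resulting binary (an illustrative invocation; the file name `mytests.rs` is assumed):

~~~ {.notrust}
$ rustc mytests.rs -O --test
$ ./mytests --bench
~~~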
@@ -223,87 +163,73 @@ Advice on writing benchmarks:
To run benchmarks, pass the `--bench` flag to the compiled
test-runner. Benchmarks are compiled-in but not executed by default.

+ ## Examples
+
+ ### Typical test run
+
~~~ {.notrust}
- $ rustc mytests.rs -O --test
- $ mytests --bench
+ > mytests

- running 2 tests
- test bench_sum_1024_ints ... bench: 709 ns/iter (+/- 82)
- test initialise_a_vector ... bench: 424 ns/iter (+/- 99) = 19320 MB/s
+ running 30 tests
+ running driver::tests::mytest1 ... ok
+ running driver::tests::mytest2 ... ignored
+ ... snip ...
+ running driver::tests::mytest30 ... ok

- test result: ok. 0 passed; 0 failed; 0 ignored; 2 measured
- ~~~
+ result: ok. 28 passed; 0 failed; 2 ignored
+ ~~~

- ## Benchmarks and the optimizer
+ ### Test run with failures

- Benchmarks compiled with optimizations activated can be dramatically
- changed by the optimizer so that the benchmark is no longer
- benchmarking what one expects. For example, the compiler might
- recognize that some calculation has no external effects and remove
- it entirely.
+ ~~~ {.notrust}
+ > mytests

- ~~~
- extern mod extra;
- use extra::test::BenchHarness;
+ running 30 tests
+ running driver::tests::mytest1 ... ok
+ running driver::tests::mytest2 ... ignored
+ ... snip ...
+ running driver::tests::mytest30 ... FAILED

- #[bench]
- fn bench_xor_1000_ints(bh: &mut BenchHarness) {
-     bh.iter(|| {
-         range(0, 1000).fold(0, |old, new| old ^ new);
-     });
- }
+ result: FAILED. 27 passed; 1 failed; 2 ignored
~~~

- gives the following results
+ ### Running ignored tests

~~~ {.notrust}
- running 1 test
- test bench_xor_1000_ints ... bench: 0 ns/iter (+/- 0)
-
- test result: ok. 0 passed; 0 failed; 0 ignored; 1 measured
- ~~~
+ > mytests --ignored

- The benchmarking runner offers two ways to avoid this. Either, the
- closure that the `iter` method receives can return an arbitrary value
- which forces the optimizer to consider the result used and ensures it
- cannot remove the computation entirely. This could be done for the
- example above by adjusting the `bh.iter` call to
+ running 2 tests
+ running driver::tests::mytest2 ... failed
+ running driver::tests::mytest10 ... ok

- ~~~
- bh.iter(|| range(0, 1000).fold(0, |old, new| old ^ new))
+ result: FAILED. 1 passed; 1 failed; 0 ignored
~~~

- Or, the other option is to call the generic `extra::test::black_box`
- function, which is an opaque "black box" to the optimizer and so
- forces it to consider any argument as used.
+ ### Running a subset of tests

- ~~~
- use extra::test::black_box
+ ~~~ {.notrust}
+ > mytests mytest1

- bh.iter(|| {
-     black_box(range(0, 1000).fold(0, |old, new| old ^ new));
- });
- ~~~
+ running 11 tests
+ running driver::tests::mytest1 ... ok
+ running driver::tests::mytest10 ... ignored
+ ... snip ...
+ running driver::tests::mytest19 ... ok

- Neither of these read or modify the value, and are very cheap for
- small values. Larger values can be passed indirectly to reduce
- overhead (e.g. `black_box(&huge_struct)`).
+ result: ok. 11 passed; 0 failed; 1 ignored
+ ~~~

- Performing either of the above changes gives the following
- benchmarking results
+ ### Running benchmarks

~~~ {.notrust}
- running 1 test
- test bench_xor_1000_ints ... bench: 375 ns/iter (+/- 148)
+ > mytests --bench

- test result: ok. 0 passed; 0 failed; 0 ignored; 1 measured
- ~~~
+ running 2 tests
+ test bench_sum_1024_ints ... bench: 709 ns/iter (+/- 82)
+ test initialise_a_vector ... bench: 424 ns/iter (+/- 99) = 19320 MB/s

- However, the optimizer can still modify a testcase in an undesirable
- manner even when using either of the above. Benchmarks can be checked
- by hand by looking at the output of the compiler using the `--emit=ir`
- (for LLVM IR), `--emit=asm` (for assembly) or compiling normally and
- using any method for examining object code.
+ test result: ok. 0 passed; 0 failed; 0 ignored; 2 measured
+ ~~~

## Saving and ratcheting metrics
