78 | 78 | not "better" than the last 0'th element you extracted. This is
79 | 79 | especially useful in simulation contexts, where the tree holds all
80 | 80 | incoming events, and the "win" condition means the smallest scheduled
81 |    | -time. When an event schedule other events for execution, they are
   | 81 | +time. When an event schedules other events for execution, they are
82 | 82 | scheduled into the future, so they can easily go into the heap. So, a
83 | 83 | heap is a good structure for implementing schedulers (this is what I
84 | 84 | used for my MIDI sequencer :-).
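
As a rough sketch of that scheduler idea (the handle() callback and the (time, event) pairs below are illustrative choices, not part of the module), the main loop can be as small as:

    import heapq

    def run_simulation(initial_events, handle):
        # The heap holds (time, event) pairs, so the 0'th item is always the
        # next event due; handling an event may schedule new ones, and those
        # always lie in the future, so they go straight back into the heap.
        heap = list(initial_events)
        heapq.heapify(heap)
        while heap:
            now, event = heapq.heappop(heap)   # smallest scheduled time "wins"
            for delay, new_event in handle(now, event):
                heapq.heappush(heap, (now + delay, new_event))

If two events share a time, the tuples compare on the events themselves; a common trick is to add an increasing counter as a tie-breaker when events are not comparable.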
 91 |  91 |
 92 |  92 | Heaps are also very useful in big disk sorts. You most probably all
 93 |  93 | know that a big sort implies producing "runs" (which are pre-sorted
 94 |     | -sequences, which size is usually related to the amount of CPU memory),
    |  94 | +sequences, whose size is usually related to the amount of CPU memory),
 95 |  95 | followed by a merging passes for these runs, which merging is often
 96 |  96 | very cleverly organised[1]. It is very important that the initial
 97 |  97 | sort produces the longest runs possible. Tournaments are a good way
 98 |     | -to that. If, using all the memory available to hold a tournament, you
 99 |     | -replace and percolate items that happen to fit the current run, you'll
100 |     | -produce runs which are twice the size of the memory for random input,
101 |     | -and much better for input fuzzily ordered.
    |  98 | +to achieve that. If, using all the memory available to hold a
    |  99 | +tournament, you replace and percolate items that happen to fit the
    | 100 | +current run, you'll produce runs which are twice the size of the
    | 101 | +memory for random input, and much better for input fuzzily ordered.
102 | 102 |
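The tournament trick described above is essentially what is known as replacement selection. A minimal sketch, assuming the input is an ordinary Python iterable and keeping each run in a list for demonstration (a real disk sort would write every output item straight to disk instead); heap_size and the generator interface are illustrative, not from the original text:

    import heapq

    def replacement_selection(stream, heap_size):
        it = iter(stream)
        heap = [item for _, item in zip(range(heap_size), it)]
        heapq.heapify(heap)
        held_back = []                            # items waiting for the next run
        while heap:
            run = []
            while heap:
                smallest = heap[0]
                run.append(smallest)
                try:
                    item = next(it)
                except StopIteration:
                    heapq.heappop(heap)           # input exhausted: just drain the heap
                    continue
                if item >= smallest:              # still fits the current run
                    heapq.heapreplace(heap, item) # replace and percolate in one step
                else:
                    heapq.heappop(heap)
                    held_back.append(item)        # "loses": held for the next run
            yield run
            heap, held_back = held_back, []
            heapq.heapify(heap)

For random input this produces runs averaging about twice heap_size, as claimed above; the clever merging of the runs is a separate step and is not shown here.
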
103 | 103 | Moreover, if you output the 0'th item on disk and get an input which
104 | 104 | may not fit in the current tournament (because the value "wins" over