@nnethercote reports that the liveness computations do a lot of allocations. Indeed, the liveness results are stored in a `Vec<IdxSetBuf>`:

https://github.com/rust-lang/rust/blob/2808460e0f8082ac2724c9809d27990ca9cfefd8/src/librustc_mir/util/liveness.rs#L59

https://github.com/rust-lang/rust/blob/2808460e0f8082ac2724c9809d27990ca9cfefd8/src/librustc_mir/util/liveness.rs#L49

This means that we allocate a separate bitset for the ins (and outs!) of each basic block, which is rather inefficient.

The other `dataflow` implementations use a different setup: they have just one big index set that holds the bits for **every basic block** in a single allocation. They also avoid allocating both an `ins` and an `outs` set. Since liveness is a reverse analysis, we would basically need only the single `outs` vector (what is live on exit). The `ins` vector is only used while generating the liveness results here (as well as in some assertions and debug printouts later on):

https://github.com/rust-lang/rust/blob/2808460e0f8082ac2724c9809d27990ca9cfefd8/src/librustc_mir/util/liveness.rs#L139-L144

In fact, we don't really need it there either: when processing a block X, we would take `outs[X]`, subtract the defs (the kills) and add the uses (the gens), and then propagate the resulting bit set to each predecessor of `X`.
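To make the idea concrete, here is a minimal sketch of what that could look like. The names (`LivenessOuts`, `DefsUses`, `propagate_block`, plain `usize` indices) are simplified stand-ins I made up for illustration, not the actual rustc types or the exact shape the fix should take; it just shows (a) one contiguous allocation holding a row of "live on exit" bits per block, and (b) recomputing the entry set in a scratch buffer instead of storing a per-block `ins` set:

```rust
/// One contiguous allocation: one row of bits per basic block,
/// where row `b` holds the locals live on exit from block `b`.
struct LivenessOuts {
    words_per_row: usize,
    bits: Vec<u64>,
}

impl LivenessOuts {
    fn new(num_blocks: usize, num_locals: usize) -> Self {
        let words_per_row = (num_locals + 63) / 64;
        LivenessOuts { words_per_row, bits: vec![0; num_blocks * words_per_row] }
    }

    fn row(&self, block: usize) -> &[u64] {
        let start = block * self.words_per_row;
        &self.bits[start..start + self.words_per_row]
    }

    /// OR `src` into the outs of `block`; returns true if anything changed,
    /// so the caller knows whether `block` needs to be revisited.
    fn union_into(&mut self, block: usize, src: &[u64]) -> bool {
        let start = block * self.words_per_row;
        let mut changed = false;
        for (dst, &s) in self.bits[start..start + self.words_per_row].iter_mut().zip(src) {
            let new = *dst | s;
            changed |= new != *dst;
            *dst = new;
        }
        changed
    }
}

/// Per-block transfer function: bits defined (killed) and used (generated) in the block.
struct DefsUses {
    defs: Vec<u64>,
    uses: Vec<u64>,
}

/// Process block `x`: compute its entry set from `outs[x]` on the fly and push it
/// to each predecessor. Returns the predecessors whose outs changed.
fn propagate_block(
    outs: &mut LivenessOuts,
    defs_uses: &[DefsUses],
    preds: &[Vec<usize>],
    x: usize,
) -> Vec<usize> {
    // ins[x] = (outs[x] - defs[x]) | uses[x], held in a scratch buffer rather
    // than in a separately allocated per-block `ins` bitset.
    let mut entry: Vec<u64> = outs.row(x).to_vec();
    for (e, (&d, &u)) in entry.iter_mut().zip(defs_uses[x].defs.iter().zip(&defs_uses[x].uses)) {
        *e = (*e & !d) | u;
    }

    let mut dirty = Vec::new();
    for &p in &preds[x] {
        if outs.union_into(p, &entry) {
            dirty.push(p);
        }
    }
    dirty
}
```

Compared to today's `Vec<IdxSetBuf>` per block, this is a single allocation for the whole body, and the `ins` sets never materialize except transiently while a block is being processed.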