[Core] Block Allocator to support KV cache CPU offloading #11532
Conversation
… scheduler Signed-off-by: ApostaC <[email protected]> Co-authored-by: KuntaiDu <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these:
Signed-off-by: ApostaC <[email protected]>
I will hand it over to @comaniac for final reviews.
I've gone through that PR. It seems that PR implements similar CPU offloading functionality, but I'm not sure what the performance will be. By the way, does that PR's implementation (offloading the KV cache during model runner execution) duplicate Kuntai's previous disaggregated prefill PR (#10502)?
@ApostaC I tried this PR with the flashinfer backend, but got wrong decoding results after running several requests (maybe 100 to 200). I have no idea how to trace the error.
For PR #11385, it essentially "sends" the KV cache to the CPU pool after prefill and "receives" the KV cache from the CPU pool before prefill, so the abstractions exposed by disaggregated prefill can help that PR handle all the control-plane work.
Hey @DearPlanet, can you share some basic scripts to help reproduce the problem? That would be very helpful for debugging.
@ApostaC, here is a simple reproduction process; the commands below were executed on 2x RTX 3090. Start the service:
Run the benchmark script at
Print the output content of the responses, and you can see the abnormal decoding results. I tried the default/xformers/flashinfer backends. The correct output log file: The error output log file:
I think there is a bug in _uncached_blocks. If a block is stored in _uncached_blocks, it can be released after inference completes but before it is saved to the CPU. Other requests will then reuse the block, so its contents get overwritten.
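To make the suspected ordering issue concrete, here is a minimal, hypothetical sketch (none of these names are taken from the PR) of how a block queued for offloading can be freed and reused before the GPU-to-CPU copy happens, and one way to pin it until the copy completes:

```python
# Hypothetical sketch of the suspected lifetime issue around _uncached_blocks.
# All names and structures here are illustrative, not the PR's actual code.
from typing import List, Set


class OffloadQueueSketch:
    def __init__(self) -> None:
        self._uncached_blocks: List[int] = []   # blocks awaiting a GPU->CPU copy
        self._free_block_ids: Set[int] = set()  # block ids available for reuse

    def mark_for_offload(self, block_id: int) -> None:
        self._uncached_blocks.append(block_id)

    def free_unsafe(self, block_id: int) -> None:
        # Suspected bug: the block is freed even if it is still waiting to be
        # offloaded, so another request can reuse and overwrite it first.
        self._free_block_ids.add(block_id)

    def free_pinned(self, block_id: int) -> None:
        # Possible fix: keep the block pinned while it is still in the offload
        # queue and only release it once the copy has completed.
        if block_id not in self._uncached_blocks:
            self._free_block_ids.add(block_id)

    def on_offload_done(self, block_id: int) -> None:
        self._uncached_blocks.remove(block_id)
        self._free_block_ids.add(block_id)
```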
This pull request has merge conflicts that must be resolved before it can be merged.
@ApostaC Thanks for this great feature, we will test it in our test environment.
BTW, I left a handful of minor comments related to simplifying the code.
"handles CPU offloading internally."\ | ||
# mark this block as uncached | ||
|
||
block = self._allocators[device].allocate_mutable_block( |
Can we call the method already implemented in the super class?
block = super().allocate_mutable_block(
prev_block, device, extra_hash=extra_hash
)
), "cpu and gpu block allocators can't have intersection of block ids" | ||
|
||
super().__init__(cpu_block_allocator, gpu_block_allocator) | ||
self._allocators: Dict[Device, |
This seems to be defined in the super class; can you use it from the super class instead?
" handles CPU offloading internally." | ||
|
||
# allocate a GPU block | ||
block = self._allocators[device].allocate_immutable_block( |
This can be:
super().allocate_immutable_block(.....)
After speaking to @ApostaC I'm closing this as it only optimises the V0 engine. There is work in progress for a similar PR in V1.
Which PR are you referring to? Is it possibly #13377?
I don't think the PR has been created yet, but I believe it's coming from the LMCache team.
Note: This PR is part of the big CPU offloading PR #10874 -- this PR contains the CPU-offloading block allocator implementation as well as the changes in the scheduler.
TL;DR: In our benchmark, CPU offloading performs better than prefix caching; we also found that the evictor can be optimized to save 10-30% of the runtime.
End-to-end benchmarking results:
A long-document QA workload (see benchmarks/benchmark_long_document_qa.py) running on an A100-40G-SXM GPU, where the GPU can cache 8 documents and the CPU can cache 30 documents. (The following are the original data for the figure above.)
Implementation
This PR has far fewer features than #8694, but it is truly minimal and makes very few core changes. So I suggest we use this PR to enable CPU KV cache offloading first, and then focus on disk.
The key idea of this implementation is to keep track of the allocated blocks that did not hit the cache and to continuously copy them to the CPU after each scheduler step.
Here is the flow diagram

This idea is borrowed from ConServe (paper link: https://arxiv.org/abs/2410.01228) and is based on the assumption that the CPU-GPU bandwidth is much higher than the GPU's KV cache generation throughput. Thanks to Yifan for this idea.
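As a rough sanity check of that bandwidth assumption, here is a back-of-envelope calculation; every number below is an illustrative assumption rather than a measurement from this PR:

```python
# Back-of-envelope check of the "copy bandwidth >> KV generation rate" assumption.
# All numbers are assumed for illustration, not measured in this PR.
layers, hidden, dtype_bytes = 32, 4096, 2               # e.g. a 7B-class model in fp16
kv_bytes_per_token = 2 * layers * hidden * dtype_bytes  # K and V for every layer
prefill_tokens_per_s = 10_000                           # assumed KV generation rate
pcie_gb_per_s = 32                                      # roughly PCIe 4.0 x16

kv_gen_gb_per_s = kv_bytes_per_token * prefill_tokens_per_s / 1e9
print(f"KV generated: {kv_gen_gb_per_s:.1f} GB/s vs CPU-GPU link: {pcie_gb_per_s} GB/s")
# ~5.2 GB/s of KV versus ~32 GB/s of link bandwidth, so continuously copying
# uncached blocks to the CPU is plausible under these assumptions.
```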