
Conversation

@ApostaC (Collaborator) commented on Dec 26, 2024

Note: This PR is part of the larger CPU offloading PR #10874 -- it contains the CPU-offloading block allocator implementation as well as the changes to the scheduler.

TL;DR: CPU offloading outperforms prefix caching in our benchmark, and we also found that the evictor can be optimized to save 10-30% of the runtime.

End-to-end benchmarking results:

A long document QA workload (see benchmarks/benchmark_long_document_qa.py) running on an A100-40G-SXM GPU. The GPU can cache 8 documents and the CPU can cache 30 documents.

[Figure: end-to-end benchmarking results; raw data in the table below]

(The table below contains the original data for the figure above.)

| Num documents | vLLM | vLLM w/ prefix caching | vLLM w/ prefix caching + CPU offloading |
|---:|---:|---:|---:|
| 8 | 13.66 | 0.49 | 0.5 |
| 16 | 27.28 | 7.22 | 2.3 |
| 32 | 54.54 | 49.96 | 17.26 |
| 64 | 109.27 | 126.08 | 110.96 |

Implementation

This PR has far fewer features than #8694, but it is truly minimal and requires very few core changes. So I suggest we use this PR to enable CPU KV cache offloading first, and then focus on disk offloading.

The key idea of this implementation is to track the allocated blocks that did not hit the cache and copy them to CPU after each scheduler step.

Here is the flow diagram:

[Figure: flow diagram of the CPU offloading implementation]

This idea is borrowed from ConServe (paper: https://arxiv.org/abs/2410.01228) and is based on the assumption that the CPU-GPU bandwidth is much higher than the GPU KV cache generation throughput. Thanks to Yifan for this idea.
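
A minimal sketch of this flow (hypothetical class and method names, not the actual code in this PR; the only name taken from the PR discussion is `_uncached_blocks`):

```python
class CpuOffloadingFlowSketch:
    """Sketch of the offloading flow described above; illustrative only."""

    def __init__(self, gpu_allocator, cpu_allocator, swapper):
        self._gpu = gpu_allocator
        self._cpu = cpu_allocator
        self._swapper = swapper          # wraps the GPU->CPU copy kernel
        self._uncached_blocks = set()    # GPU block ids not yet backed by CPU

    def allocate_block(self, prev_block, cache_hit: bool, extra_hash=None):
        block = self._gpu.allocate_mutable_block(prev_block,
                                                 extra_hash=extra_hash)
        if not cache_hit:
            # This block's KV will be freshly computed, so remember to
            # offload it to the CPU later.
            self._uncached_blocks.add(block.block_id)
        return block

    def after_scheduler_step(self):
        # Copy every uncached GPU block into the CPU pool. The bet (borrowed
        # from ConServe) is that CPU-GPU bandwidth comfortably exceeds the
        # rate at which the model produces new KV cache.
        for block_id in list(self._uncached_blocks):
            cpu_block = self._cpu.allocate_mutable_block(prev_block=None)
            self._swapper.copy_gpu_to_cpu(block_id, cpu_block.block_id)
            self._uncached_blocks.remove(block_id)
```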


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small and essential subset of tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add the `ready` label to the PR
  • Enable auto-merge.

🚀

@KuntaiDu added the `ready` label on Dec 27, 2024
@youkaichao (Member) commented

I will hand it over to @comaniac for final reviews.

@comaniac (Collaborator) commented

@ApostaC could you take a look at #11385 and see if it's related?

@ApostaC (Collaborator, Author) commented on Dec 31, 2024

> @ApostaC could you take a look at #11385 and see if it's related?

I've gone through that PR. It seems to implement similar CPU offloading functionality, but I'm not sure what its performance will be.

By the way, does that PR's implementation (which offloads the KV cache during model runner execution) duplicate Kuntai's earlier disaggregated prefill PR (#10502)?
@KuntaiDu Please chime in if you have more understanding about this, thanks.

@DearPlanet (Contributor) commented

@ApostaC I tried this PR with the flashinfer backend, but got wrong decoding results after running several requests (maybe 100 to 200). I have no idea how to trace the error.
It works well on the xformers/default attention backends.

@KuntaiDu (Collaborator) commented on Jan 5, 2025

>> @ApostaC could you take a look at #11385 and see if it's related?
>
> I've gone through that PR. It seems to implement similar CPU offloading functionality, but I'm not sure what its performance will be.
>
> By the way, does that PR's implementation (which offloads the KV cache during model runner execution) duplicate Kuntai's earlier disaggregated prefill PR (#10502)? @KuntaiDu Please chime in if you have more understanding about this, thanks.

For PR #11385, it essentially "sends" the KV cache to the CPU pool after prefill and "receives" it from the CPU pool before prefill, so the abstractions exposed by disaggregated prefill can help that PR handle all the control-plane work.
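
As a rough sketch of that pattern (illustrative only, not vLLM's actual connector API), the CPU pool plays the role of the remote side in a disaggregated-prefill-style transfer:

```python
from typing import Dict, Optional

import torch


class CpuKVPool:
    """Toy CPU-side KV pool keyed by a prefix hash (hypothetical interface)."""

    def __init__(self) -> None:
        self._store: Dict[int, torch.Tensor] = {}

    def send(self, prefix_hash: int, kv: torch.Tensor) -> None:
        # After prefill: offload the freshly computed KV cache to CPU memory.
        self._store[prefix_hash] = kv.detach().to("cpu")

    def receive(self, prefix_hash: int,
                device: str = "cuda") -> Optional[torch.Tensor]:
        # Before prefill: reuse the cached KV for this prefix if it exists.
        kv = self._store.get(prefix_hash)
        return kv.to(device) if kv is not None else None
```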

@ApostaC (Collaborator, Author) commented on Jan 6, 2025

> @ApostaC I tried this PR with the flashinfer backend, but got wrong decoding results after running several requests (maybe 100 to 200). I have no idea how to trace the error. It works well on the xformers/default attention backends.

Hey @DearPlanet, can you share some basic scripts to help reproduce the problem? That would be very helpful for debugging.

@DearPlanet (Contributor) commented on Jan 7, 2025

@ApostaC, here is a simple reproduction process; the commands below were executed on RTX 3090 x2:

Start service:

VLLM_ATTENTION_BACKEND=FLASHINFER CUDA_VISIBLE_DEVICES=0,1 vllm serve /host/models/Qwen2.5-32B-Instruct-AWQ/ --served-model-name qwen2.5-32b --enable-prefix-caching --block-allocator CpuOffloadingBlockAllocator --preemption_mode recomputation --swap-space 25  --tensor-parallel-size 2 --host 0.0.0.0 --port 8080 --gpu-memory-utilization 0.65 --max-model-len 3000

Run the benchmark script at vllm/benchmark/:

python3 benchmark_serving.py --base-url http://0.0.0.0:8080 --dataset-path ./sonnet.txt --model qwen2.5-32b --tokenizer /mnt/root/models/Qwen2.5-32B-Instruct-AWQ/ --request-rate 3 --backend openai-chat --endpoint /v1/chat/completions --dataset-name sonnet

Print the output content of the responses and you will see the abnormal decoding results.

I tried the default/xformers/flashinfer backends:

- RTX 3090 x2: default ✅, xformers ✅, flashinfer ❌
- L20 x2 (with fp8 KV cache): default ❓, xformers ❌, flashinfer ❌

The correct output log file:
test_out_sonnet_default.log

The error output log file:
test_out_sonnet_flashinfer.log

@boposki commented on Jan 17, 2025

I think there is a bug in `_uncached_blocks`. If a block is stored in `_uncached_blocks`, it can be released before it is saved to the CPU once inference completes. Other requests may then reuse the block, so the data at that block ID gets overwritten.

@boposki commented on Jan 17, 2025

> I think there is a bug in `_uncached_blocks`. If a block is stored in `_uncached_blocks`, it can be released before it is saved to the CPU once inference completes. Other requests may then reuse the block, so the data at that block ID gets overwritten.

Sorry, I found the problem is already handled by this line: `self._uncached_blocks.remove(block_id)`.
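
For illustration (simplified standalone sketch, not the real allocator), the failure mode and the fix mentioned above look roughly like this:

```python
# If a freed block's id stays in _uncached_blocks, the deferred GPU->CPU copy
# could later offload data written by a *different* request that reused the
# same block id.

_uncached_blocks: set = set()


def free_block(block_id: int) -> None:
    # The fix referenced above: drop the pending offload when the block is
    # released, so a reused block id is never copied with stale contents.
    if block_id in _uncached_blocks:
        _uncached_blocks.remove(block_id)
    # ... return block_id to the GPU free list ...
```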

The mergify bot commented on Feb 28, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @ApostaC.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

The mergify bot added the `needs-rebase` label on Feb 28, 2025
@maobaolong (Contributor) left a review:


@ApostaC Thanks for this great feature; we will test it in our test environment.

BTW, I left a few minor comments about simplifying the code.

Review comment on this code:

    "handles CPU offloading internally."
    # mark this block as uncached
    block = self._allocators[device].allocate_mutable_block(

We could call the method implemented in the superclass:

    block = super().allocate_mutable_block(
        prev_block, device, extra_hash=extra_hash
    )

Review comment on this code:

    ), "cpu and gpu block allocators can't have intersection of block ids"

    super().__init__(cpu_block_allocator, gpu_block_allocator)
    self._allocators: Dict[Device,

This seems to be defined in the superclass; can you use it from the superclass instead?

Review comment on this code:

    " handles CPU offloading internally."

    # allocate a GPU block
    block = self._allocators[device].allocate_immutable_block(

This can be:

    super().allocate_immutable_block(.....)
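
A tiny sketch of the pattern these review comments suggest (hypothetical class names, not the PR's actual classes): rely on the attribute that the base `__init__` already sets up and delegate allocation through `super()`:

```python
from typing import Dict


class BaseDeviceAwareAllocator:
    def __init__(self, cpu_allocator, gpu_allocator) -> None:
        # The base class already builds the device -> allocator mapping.
        self._allocators: Dict[str, object] = {
            "cpu": cpu_allocator,
            "gpu": gpu_allocator,
        }

    def allocate_mutable_block(self, prev_block, device: str, extra_hash=None):
        return self._allocators[device].allocate_mutable_block(
            prev_block, extra_hash=extra_hash)


class CpuOffloadingAllocatorSketch(BaseDeviceAwareAllocator):
    def __init__(self, cpu_allocator, gpu_allocator) -> None:
        super().__init__(cpu_allocator, gpu_allocator)
        # No need to rebuild self._allocators here; super().__init__ did it.

    def allocate_mutable_block(self, prev_block, device: str, extra_hash=None):
        # Delegate to the base implementation instead of indexing
        # self._allocators[device] directly, as suggested above.
        block = super().allocate_mutable_block(prev_block, device,
                                               extra_hash=extra_hash)
        # ... CPU-offloading bookkeeping would go here ...
        return block
```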

@hmellor (Member) commented on Apr 2, 2025

After speaking to @ApostaC, I'm closing this, as it only optimises the V0 engine. There is work in progress on a similar PR for V1.

@hmellor closed this on Apr 2, 2025
@kyet commented on Apr 3, 2025

> ..snip.. There is work in progress on a similar PR for V1.

Which PR are you referring to? Is it possibly #13377?

@hmellor (Member) commented on Apr 3, 2025

I don't think the PR has been created yet, but I believe it's coming from the LMCache team.
