[Core] Performance optimization for swap_blocks by cuda kernels #11531
Conversation
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these:
Signed-off-by: ApostaC <[email protected]>
vllm/worker/worker.py
Outdated
# Pre-allocate a pinned CPU buffer so the swap mapping can be
# copied to the GPU asynchronously.
self.blocks_to_swap_out_buffer = torch.zeros((max_num_blocks, 2),
                                             dtype=torch.int64,
                                             device="cpu",
                                             pin_memory=True)
Some systems do not have pinned memory (notably, WSL); we need to take care of that. Otherwise this PR LGTM.
WSL does not support UVA, either. You can use `is_pin_memory_available` to determine if this optimization can be used.
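For reference, a minimal sketch of the suggested guard, assuming the helper lives at `vllm.utils.is_pin_memory_available`; the `use_fast_swap` attribute name is hypothetical, not code from this PR:

```python
import torch

from vllm.utils import is_pin_memory_available


class Worker:

    def __init__(self, max_num_blocks: int):
        # Pinned host memory (and UVA) is unavailable on some platforms,
        # notably WSL, so probe for it before enabling the fast path.
        # `use_fast_swap` is a hypothetical flag name for illustration.
        self.use_fast_swap = is_pin_memory_available()
        self.blocks_to_swap_out_buffer = torch.zeros(
            (max_num_blocks, 2),
            dtype=torch.int64,
            device="cpu",
            pin_memory=self.use_fast_swap)
```

The same flag can then gate whether the optimized swap path (which relies on pinned memory / UVA) is taken at all, falling back to the existing copy path elsewhere.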
Sounds good, just pushed a new commit fixing this.
Signed-off-by: ApostaC <[email protected]>
Impressive results! I am interested in the benchmark methods of this kernel. I have tried to replicate the results using […]. Also, what does "pages" refer to in the context of vLLM? It seems to me that you are counting the number of "blocks" in […].
Hi @ApostaC, I have the same question. Would it be possible to share the benchmark in detail?
This pull request has merge conflicts that must be resolved before it can be merged.
After speaking to @ApostaC, I'm closing this as it only optimises the V0 engine. There is work in progress for a similar PR in V1.
This PR is part of the larger CPU offloading PR #10874; this PR contains the new CUDA kernel implementation for `swap_blocks`.
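For readers asking how the kernel helps: a rough sketch of the idea, contrasting the per-block copy loop with a single batched launch over a pinned mapping tensor. The op binding `torch.ops._C_cache_ops.swap_blocks` and both function names here are assumptions for illustration, not the exact code in this PR:

```python
import torch


# Baseline: one cudaMemcpyAsync per swapped block. With many small
# blocks, launch overhead dominates the actual transfer time.
def swap_blocks_naive(src: torch.Tensor, dst: torch.Tensor,
                      block_mapping: list[tuple[int, int]]) -> None:
    for src_block, dst_block in block_mapping:
        dst[dst_block].copy_(src[src_block], non_blocking=True)


# Batched idea: write the mapping into one pre-allocated pinned
# (max_num_blocks, 2) int64 tensor and hand the whole batch to a
# single CUDA kernel, amortizing launch overhead across all blocks.
def swap_blocks_batched(src: torch.Tensor, dst: torch.Tensor,
                        mapping_buffer: torch.Tensor,
                        block_mapping: list[tuple[int, int]]) -> None:
    n = len(block_mapping)
    mapping_buffer[:n] = torch.tensor(block_mapping, dtype=torch.int64)
    # Hypothetical binding to the custom op; the real entry point in
    # vLLM's cache ops may differ in name and signature.
    torch.ops._C_cache_ops.swap_blocks(src, dst, mapping_buffer[:n])
```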
Performance benchmark
The numbers were collected on A100-40GB-SXM GPUs.
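Since commenters above ask about the benchmark method, here is a minimal timing harness of the kind one could use to measure the per-block baseline with CUDA events; all shapes and names are illustrative, not the author's setup:

```python
import torch

# Illustrative sizes only; one "block" is a contiguous slab of KV-cache
# data swapped between GPU and CPU.
num_blocks, block_numel = 1024, 16 * 8 * 128
gpu_cache = torch.empty(num_blocks, block_numel,
                        dtype=torch.float16, device="cuda")
cpu_cache = torch.empty_like(gpu_cache, device="cpu").pin_memory()

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

torch.cuda.synchronize()
start.record()
for i in range(num_blocks):
    # Swap-out baseline: one device-to-host copy per block.
    cpu_cache[i].copy_(gpu_cache[i], non_blocking=True)
end.record()
torch.cuda.synchronize()
print(f"per-block swap-out: {start.elapsed_time(end):.2f} ms")
```

Comparing this loop against a single batched call over the same data gives the kind of speedup reported above.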
Notes: CUDA graph compatibility
Currently, it pre-allocates a pinned CPU memory tensor for the `blocks_to_swap_in` and `blocks_to_swap_out` mappings. It could support CUDA graphs in the future, since the address of the pre-allocated buffer won't change. I did not include that in this PR; as a next step, I can create a new PR for CUDA graph support.
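On the CUDA-graph point, a minimal sketch of why the fixed buffer address matters: capture bakes the pinned buffer's pointer into the graph, so replays stay valid as long as later swaps update that same tensor in place. `torch.cuda.CUDAGraph` and `torch.cuda.graph` are standard PyTorch; the H2D copy stands in for the swap kernel, and real code would warm the op up on a side stream before capture:

```python
import torch

num_blocks = 256
# The pinned buffer's host address stays fixed, so a graph captured
# against it remains valid across replays.
mapping_cpu = torch.zeros((num_blocks, 2), dtype=torch.int64,
                          device="cpu", pin_memory=True)
mapping_gpu = torch.zeros_like(mapping_cpu, device="cuda")

g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    # Stand-in for the swap kernel: an async copy that reads the fixed
    # pinned buffer. A real kernel launch would be captured the same way.
    mapping_gpu.copy_(mapping_cpu, non_blocking=True)

# Per iteration: refresh the buffer *in place*, then replay the
# recorded work without re-launching anything.
mapping_cpu[:, 0] = torch.arange(num_blocks)
g.replay()
torch.cuda.synchronize()
```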