[Bugfix] Temporary fix for quantization + CPU offloading #18487
Conversation
Signed-off-by: Chen Zhang <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.
Just a reminder: PRs would not trigger a full CI run by default; instead, only a reduced set of checks runs. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either:
Add 🚀
Add ready to trigger full CI
Approving to unblock CI
This PR is now causing the V1 test to fail.
Signed-off-by: Chen Zhang <[email protected]>
We are pausing PRs for 24 hrs; will this be enough time to figure out the root cause?
This pull request has merge conflicts that must be resolved before it can be merged.
Just fixed the v1 test. Let's try again.

FAILED quantization/test_cpu_offload.py::test_cpu_offload_gptq - RuntimeError: Server exited unexpectedly.
FAILED quantization/test_cpu_offload.py::test_cpu_offload_awq - RuntimeError: Server exited unexpectedly.
FAILED quantization/test_cpu_offload.py::test_cpu_offload_compressed_tensors - AssertionError: Results for model='nm-testing/llama7b-one-shot-2_4-w4a16-marlin24-t' are not the same.

In e60f550 (the main branch commit of #17945), the same tests also fail. Here:

[2025-05-16T16:32:49Z] FAILED quantization/test_cpu_offload.py::test_cpu_offload_gptq - AssertionError: Results for model='Qwen/Qwen2-1.5B-Instruct-GPTQ-Int4' are not the same.
[2025-05-16T16:32:49Z] ref_args=['--quantization', 'gptq'] ref_envs=None
[2025-05-16T16:32:49Z] compare_args=['--quantization', 'gptq', '--cpu-offload-gb', '1'] compare_envs=None
[2025-05-16T16:32:49Z] ref_result={'test': 'seeded_sampling', 'text': " Mike and I'm a", 'finish_reason': 'length', 'usage': CompletionUsage(completion_tokens=5, prompt_tokens=5, total_tokens=10, completion_tokens_details=None, prompt_tokens_details=None)}
[2025-05-16T16:32:49Z] compare_result={'test': 'seeded_sampling', 'text': ' Sarah and this will be', 'finish_reason': 'length', 'usage': CompletionUsage(completion_tokens=5, prompt_tokens=5, total_tokens=10, completion_tokens_details=None, prompt_tokens_details=None)}
[2025-05-16T16:32:49Z] FAILED quantization/test_cpu_offload.py::test_cpu_offload_awq - AssertionError: Results for model='Qwen/Qwen2-1.5B-Instruct-AWQ' are not the same.
[2025-05-16T16:32:49Z] ref_args=['--quantization', 'awq'] ref_envs=None
[2025-05-16T16:32:49Z] compare_args=['--quantization', 'awq', '--cpu-offload-gb', '1'] compare_envs=None
[2025-05-16T16:32:49Z] ref_result={'test': 'single_completion', 'text': ' John and I am a', 'finish_reason': 'length', 'usage': CompletionUsage(completion_tokens=5, prompt_tokens=5, total_tokens=10, completion_tokens_details=None, prompt_tokens_details=None)}
[2025-05-16T16:32:49Z] compare_result={'test': 'single_completion', 'text': ' Kaitlyn and I', 'finish_reason': 'length', 'usage': CompletionUsage(completion_tokens=5, prompt_tokens=5, total_tokens=10, completion_tokens_details=None, prompt_tokens_details=None)}
[2025-05-16T16:32:49Z] FAILED quantization/test_cpu_offload.py::test_cpu_offload_compressed_tensors - AssertionError: Results for model='nm-testing/llama7b-one-shot-2_4-w4a16-marlin24-t' are not the same.
[2025-05-16T16:32:49Z] ref_args=[] ref_envs=None
[2025-05-16T16:32:49Z] compare_args=['--cpu-offload-gb', '1'] compare_envs=None
[2025-05-16T16:32:49Z] ref_result={'test': 'single_completion', 'text': ' ... ... . Today I', 'finish_reason': 'length', 'usage': CompletionUsage(completion_tokens=5, prompt_tokens=6, total_tokens=11, completion_tokens_details=None, prompt_tokens_details=None)}
[2025-05-16T16:32:49Z] compare_result={'test': 'single_completion', 'text': ' ... ... .\n I', 'finish_reason': 'length', 'usage': CompletionUsage(completion_tokens=5, prompt_tokens=6, total_tokens=11, completion_tokens_details=None, prompt_tokens_details=None)}
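For readers unfamiliar with this test: it runs the same model twice, once with the reference args and once with --cpu-offload-gb added, and asserts the seeded completions match. The snippet below is a simplified, hypothetical reconstruction of that comparison using vLLM's offline LLM API, not the actual code in quantization/test_cpu_offload.py; the prompt, seed, and helper name are illustrative only.

```python
# Sketch of the comparison the failing test performs (not the real test code):
# the same seeded request should give identical text with and without CPU offloading.
import gc
import torch
from vllm import LLM, SamplingParams

def run_once(cpu_offload_gb: float) -> str:
    llm = LLM(
        model="Qwen/Qwen2-1.5B-Instruct-GPTQ-Int4",  # one of the models from the log above
        quantization="gptq",
        cpu_offload_gb=cpu_offload_gb,  # 0 = no offloading (reference run)
    )
    params = SamplingParams(max_tokens=5, seed=42)  # seed and prompt are illustrative
    text = llm.generate(["Hello, my name is"], params)[0].outputs[0].text
    # Free GPU memory before the next run, since both runs share this process.
    del llm
    gc.collect()
    torch.cuda.empty_cache()
    return text

ref = run_once(cpu_offload_gb=0)      # analogous to ref_args above
compare = run_once(cpu_offload_gb=1)  # analogous to compare_args=['--cpu-offload-gb', '1']
assert ref == compare, f"Results are not the same: {ref!r} vs {compare!r}"
```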
Close this PR as we choose to revert #17945.
#17945 breaks the quantization + CPU offloading test. The reason is that #17945 delays the initialization of GPUModelRunner's InputBatch from GPUModelRunner.__init__ to GPUModelRunner.initialize_kv_cache, which triggers some unknown bug in quantization + CPU offloading. This PR provides a temporary fix by moving the initialization of some tensors back to GPUModelRunner.__init__.

I believe the CI failure is not caused by a bug in #17945, because #18298, which only moves the input batch, fails on the same test.

Please revert this PR after finding the root cause.
Related: #18425, #18459
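To illustrate the shape of the temporary fix, here is a schematic sketch; it is not the actual GPUModelRunner code, and the tensor names, shapes, and method signatures below are simplified stand-ins. The idea is only that the persistent tensors are allocated eagerly in __init__ again instead of lazily in initialize_kv_cache.

```python
# Schematic sketch of the temporary fix (illustrative names, not vLLM source).
import torch

class GPUModelRunner:
    def __init__(self, max_num_tokens: int, device: torch.device):
        self.device = device
        # Temporary fix: allocate the persistent input tensors up front,
        # as they were before #17945 ...
        self.input_ids = torch.zeros(max_num_tokens, dtype=torch.int32, device=device)
        self.positions = torch.zeros(max_num_tokens, dtype=torch.int64, device=device)

    def initialize_kv_cache(self, kv_cache_config) -> None:
        # ... instead of here, where #17945 had moved the InputBatch / tensor
        # initialization and where the unknown interaction with
        # quantization + CPU offloading showed up.
        ...
```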