[Multimodal][Speculative Decoding] Eagle Eagle3 mm support, enablement on qwen2.5vl #22872
Conversation
Code Review
This pull request adds support for Eagle and Eagle3 speculative decoding for the Qwen2.5-VL multimodal model. The changes include new model files for the Eagle and Eagle3 variants, model registry updates, and test modifications. I found one issue: a buggy condition in the Eagle model's weight-loading logic that could lead to incorrect behavior. My feedback addresses this.
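The review does not quote the buggy condition, but a common pitfall in Eagle weight-loading code is an over-broad substring match on parameter names. A minimal illustration (the parameter names here are hypothetical, not taken from this PR):

    # Illustrative only; keys are hypothetical stand-ins for checkpoint names.
    weights = {"model.lm_head.weight": 1.0, "lm_head.weight": 2.0}

    # Buggy: a substring check matches both keys.
    buggy = [name for name in weights if "lm_head" in name]

    # Safer: anchor the match to the key prefix.
    fixed = [name for name in weights if name.startswith("lm_head.")]

    print(buggy)  # ['model.lm_head.weight', 'lm_head.weight']
    print(fixed)  # ['lm_head.weight']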
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a small subset of tests runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
tests/v1/e2e/test_spec_decode.py (outdated)

    @@ -183,6 +191,9 @@ def test_eagle_correctness(
        method, model_name, spec_model_name, tp_size = model_setup

    +   if "Qwen2.5-VL" in model_name and attn_backend == "TREE_ATTN":
    +       pytest.skip("TREE ATTN not support Qwen2.5-VL Model yet")
    +   print(f"model_setup={model_setup}")
cc @22quinn @morgendave can you help review?
Force-pushed from af49ffc to 9d06a8d
Force-pushed from e0fd906 to 8874e16
cc spec decode experts @zixi-qi @charlotte12l
This pull request has merge conflicts that must be resolved before it can be merged.
Force-pushed from 828fc1e to f8af6b8
Signed-off-by: Junhong <[email protected]>
Signed-off-by: Junhong <[email protected]>
Signed-off-by: Junhong <[email protected]>
Signed-off-by: Junhong <[email protected]>
Signed-off-by: Junhong <[email protected]>
Signed-off-by: Junhong Liu <[email protected]>
Signed-off-by: Junhong <[email protected]>
Signed-off-by: Junhong <[email protected]>
Signed-off-by: Junhong <[email protected]>
Thanks for your patience!
PTAL at the failing test.
This pull request has merge conflicts that must be resolved before it can be merged.
Skip test_eagle_correctness, because this FlashAttention build does not support head dims that are not a multiple of 32.
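A minimal sketch of the constraint behind that skip, assuming head_dim is derived from the HF config (this helper is hypothetical, not the test's actual code):

    import pytest

    def maybe_skip_for_flash_attn(config):
        # Many FlashAttention builds require the attention head dim to be a
        # multiple of 32; skip rather than crash when it is not.
        head_dim = config.hidden_size // config.num_attention_heads
        if head_dim % 32 != 0:
            pytest.skip(f"FlashAttention build does not support head_dim={head_dim}")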
The online service, deployed with the Qwen2.5-VL-7B model, hit a corner case that crashed the service: the input_ids processed by the propose function in eagle.py contained an image_token_id, resulting in a shape mismatch with mm_embeds.

This corner case should be mitigated by #16229, which was merged recently.
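A sketch of the reported failure mode (variable names mirror eagle.py, but the tensor values and the Qwen2.5-VL image-token id are assumptions for illustration):

    import torch

    image_token_id = 151655  # assumed Qwen2.5-VL image placeholder id
    input_ids = torch.tensor([11, 22, image_token_id, 33])  # drafter-produced ids
    mm_embeds = torch.zeros(0, 4096)  # no image embeddings were scheduled

    num_image_positions = int((input_ids == image_token_id).sum())
    if num_image_positions != mm_embeds.shape[0]:
        # This is the mismatch that crashed the service before #16229.
        raise RuntimeError(
            f"{num_image_positions} image-token positions vs "
            f"{mm_embeds.shape[0]} mm_embeds rows"
        )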
Does anyone know why this PR removes torch compile support for llama_eagle3?

The qwen2.5-vl eagle3 model encounters an error when using @support_torch_compile.
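For context, a plain-PyTorch parallel of what the decorator enables (vLLM wires compilation up via @support_torch_compile; this standalone sketch is illustrative, not the drafter's actual code):

    import torch
    from torch import nn

    class DrafterSketch(nn.Module):
        def __init__(self):
            super().__init__()
            self.proj = nn.Linear(8, 8)

        def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
            return self.proj(hidden_states)

    # Compilation can fail on graph breaks or unsupported ops -- the kind of
    # error reported above for the Qwen2.5-VL Eagle3 drafter.
    model = torch.compile(DrafterSketch())
    print(model(torch.randn(2, 8)).shape)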
Purpose
Follow-up to #20788: Eagle/Eagle3 multimodal support, enabled on Qwen2.5-VL.
The draft model is https://huggingface.co/Rayzl/qwen2.5-vl-7b-eagle3-sgl
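A hedged example of exercising the feature offline (not from this PR's test plan; the speculative_config shape follows recent vLLM releases and may differ by version):

    from vllm import LLM, SamplingParams

    llm = LLM(
        model="Qwen/Qwen2.5-VL-7B-Instruct",
        speculative_config={
            "method": "eagle3",
            "model": "Rayzl/qwen2.5-vl-7b-eagle3-sgl",
            "num_speculative_tokens": 3,
        },
    )
    outputs = llm.generate(
        ["Describe speculative decoding in one sentence."],
        SamplingParams(max_tokens=32),
    )
    print(outputs[0].outputs[0].text)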
Test Plan
Test Result
(Optional) Documentation Update
Essential Elements of an Effective PR Description Checklist
Update supported_models.md and examples for a new model.