[Spec-Decode] Support piecewise cudagraphs for Eagle head #25109
Conversation
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
This pull request has merge conflicts that must be resolved before it can be merged.
…ewise Signed-off-by: Lucas Wilkinson <[email protected]>
I checked on an updated development branch as well as with the current branch, and it looks like the CUDA graphs aren't actually running for MTP.
I ran like this:
nsys launch --cuda-event-trace=false -t nvtx,cuda --trace-fork-before-exec=true --cuda-graph-trace=node vllm serve meta-llama/Llama-3.1-8B-Instruct --speculative-config '{"method": "eagle3", "model": "yuhuili/EAGLE3-LLaMA3.1-Instruct-8B", "num_speculative_tokens": 4}' --max-model-len 2048 --max-num-seqs 128 --no-enable-prefix-caching --port 8049
In the nsys profile, the base model runs with piecewise graphs, but the EAGLE head does not. I also checked MTP on DSR1 and observed the same issue there.
I did some light debugging and observed that the dummy run and the forward context both seem to be receiving the correct cudagraph mode, but for some reason it isn't being used.
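A minimal sketch of one way to check this (assuming the forward context exposes a cudagraph_runtime_mode attribute; the name may differ between branches):

```python
# Hypothetical debug helper: log which cudagraph mode the current forward
# context carries. The attribute name `cudagraph_runtime_mode` is an
# assumption and may differ between vLLM branches.
from vllm.forward_context import get_forward_context


def log_cudagraph_mode(tag: str) -> None:
    """Print the cudagraph mode visible inside the current forward pass."""
    ctx = get_forward_context()
    mode = getattr(ctx, "cudagraph_runtime_mode", None)
    print(f"[{tag}] cudagraph_runtime_mode={mode}")
```

Calling this from both the base model's and the Eagle head's forward (or from their dummy runs) makes it easy to compare the mode each one actually sees.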
This pull request has merge conflicts that must be resolved before it can be merged.
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Turns out this was unique to llama/deepseek: d78f30e / f79b9a9
Signed-off-by: Lucas Wilkinson <[email protected]>
Thanks for fixing!
Found the PR that removed torch compile for llama_eagle3 in the first place; it's unclear why it was done. I'm in favor of merging and then monitoring/expanding the tests to cover rare cases as needed.
Looks like enabling torch compile for llama_eagle3 might not work well with multimodal. I guess we don't have a test for this?
Agreed
…ewise Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Multimodal support patch looks good
…ct#25109) Signed-off-by: Lucas Wilkinson <[email protected]> Signed-off-by: Lucas Wilkinson <[email protected]> Co-authored-by: Benjamin Chislett <[email protected]> Signed-off-by: xuebwang-amd <[email protected]>
…ct#25109) Signed-off-by: Lucas Wilkinson <[email protected]> Signed-off-by: Lucas Wilkinson <[email protected]> Co-authored-by: Benjamin Chislett <[email protected]> Signed-off-by: Dhruvil Bhatt <[email protected]>
…ct#25109) Signed-off-by: Lucas Wilkinson <[email protected]> Signed-off-by: Lucas Wilkinson <[email protected]> Co-authored-by: Benjamin Chislett <[email protected]> Signed-off-by: bbartels <[email protected]>
Purpose
Support PIECEWISE cudagraphs with the Eagle head; an interim fix until #23679 can be refactored and landed. This should give us most of that performance with a lot less complexity while the GPU model runner is refactored.
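For context, a minimal offline-usage sketch mirroring the serve command from the review thread; the keyword names follow the corresponding CLI flags and are assumptions that may vary across vLLM versions:

```python
# Sketch: offline equivalent of the Eagle3 serve command used in testing.
# With this change, the drafter head is expected to run under piecewise
# cudagraphs as well. Keyword names mirror EngineArgs/CLI flags and may
# differ across vLLM versions.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",
    speculative_config={
        "method": "eagle3",
        "model": "yuhuili/EAGLE3-LLaMA3.1-Instruct-8B",
        "num_speculative_tokens": 4,
    },
    max_model_len=2048,
    enable_prefix_caching=False,
)

outputs = llm.generate(["The capital of France is"],
                       SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```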
Test Plan
Test Result
Essential Elements of an Effective PR Description Checklist
(Optional) Documentation update, such as supported_models.md and examples for a new model.