[Bugfix] Do not pad multi-modal encoder sequence dummy data #15574
Conversation
Signed-off-by: Travis Johnson <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs will not trigger a full CI run by default; only a small subset of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add 🚀
My focus was on Mllama here. I'm not sure whether the padding might be necessary for other models... Let me know if there are other models that I should verify work with this change.
The only other model we need to check is Whisper.
# For encoder-decoder models, use encoder prompt token ids instead of
# decoder prompt to construct dummy seq_data for encoder profiling.
encoder_prompt_token_ids = mm_inputs["encoder_prompt_token_ids"]

total_len = len(encoder_prompt_token_ids)
num_tokens_to_pad = max(total_len, seq_len) - total_len
encoder_prompt_token_ids.extend([0] * num_tokens_to_pad)
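For context, here is a minimal, self-contained sketch (not vLLM code; the helper name and the numbers are made up) of why this extra padding trips Mllama's profiling: the cross-attention states are sized from the multi-modal inputs, so a dummy encoder prompt that is zero-extended past that length no longer matches what the model expects.

def build_dummy_encoder_prompt(num_image_tokens: int, seq_len: int,
                               pad: bool) -> list[int]:
    # Dummy encoder prompt derived from the max-size image input.
    encoder_prompt_token_ids = [0] * num_image_tokens
    if pad:
        # The padding this PR removes: extend to at least seq_len.
        total_len = len(encoder_prompt_token_ids)
        encoder_prompt_token_ids.extend([0] * (max(total_len, seq_len) - total_len))
    return encoder_prompt_token_ids

num_image_tokens = 6404  # hypothetical encoder length expected by the model
seq_len = 8192           # decoder sequence length used during profiling

# Without padding the encoder prompt matches the expected length; with
# padding it is longer, which is the mismatch behind the AssertionError.
assert len(build_dummy_encoder_prompt(num_image_tokens, seq_len, pad=False)) == num_image_tokens
assert len(build_dummy_encoder_prompt(num_image_tokens, seq_len, pad=True)) == seq_len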
Hmmm, I added this padding because Whisper needs the encoder sequence padded. We need to update Whisper's profiler to keep padding for it.
I'm starting to look at Whisper now, trying out openai/whisper-small. I added some debug logs to this function and it looks like encoder_prompt_token_ids always comes in with a length of 1500 during profiling, so this padding doesn't usually trigger. The default for max multimodal batched tokens is 5120, so we would need max_num_seqs<4 for seq_len to be >1500, but I get CUDA OOB exceptions if I try with max_num_seqs<12 🤔
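For reference, a rough back-of-the-envelope sketch (not vLLM code) of where that 1500 comes from, assuming Whisper's standard 30-second audio window, 160-sample mel hop, and the 2x conv downsampling in its encoder:

SAMPLE_RATE = 16_000   # Hz
CHUNK_SECONDS = 30     # Whisper always processes fixed 30 s windows
HOP_LENGTH = 160       # mel spectrogram hop size in samples
CONV_STRIDE = 2        # the encoder's second conv layer downsamples by 2

mel_frames = SAMPLE_RATE * CHUNK_SECONDS // HOP_LENGTH  # 3000
encoder_positions = mel_frames // CONV_STRIDE           # 1500

print(encoder_positions)  # 1500 -- the padding only kicks in if seq_len > 1500

If that arithmetic holds, the generic padding here is effectively a no-op for Whisper unless seq_len exceeds 1500.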
This pull request has merge conflicts that must be resolved before it can be merged.
Closing as superseded by #16129
Removes padding of dummy encoder data. Instead, expect get_and_validate_mm_inputs to provide the max-size dummy sequence. For Mllama, this additional padding would create an encoder input that was longer than expected, leading to an AssertionError during profiling unless seq_len was small enough (see #13929).

FIX #13929
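For illustration, a minimal sketch (not the exact diff; mm_inputs here is a toy dict) of what the profiling path looks like with the padding removed, assuming, per the description above, that the upstream dummy-data path already returns the max-size encoder sequence:

def dummy_encoder_prompt_for_profiling(mm_inputs: dict) -> list[int]:
    # Use the encoder prompt token ids as-is: no zero-extension up to seq_len.
    # get_and_validate_mm_inputs (per the PR description) is expected to have
    # already produced the max-size dummy sequence.
    return mm_inputs["encoder_prompt_token_ids"]

mm_inputs = {"encoder_prompt_token_ids": [0] * 1500}  # e.g. a Whisper-sized dummy
assert len(dummy_encoder_prompt_for_profiling(mm_inputs)) == 1500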