
Conversation

@Isotr0py (Member) commented Apr 6, 2025

FIX #13929

  • This PR makes the multimodal dummy encoder sequence padding optional rather than always applied, because we still need padding to keep Whisper working (see the sketch after this list).
  • Add the missing warning messages for encoder dummy data.
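
To illustrate the idea, here is a minimal sketch of opt-in padding for dummy encoder data. The flag and helper names (`pad_dummy_encoder_prompt`, `build_dummy_encoder_prompt`) are hypothetical stand-ins, not the actual vLLM interfaces.

```python
from typing import List


def build_dummy_encoder_prompt(
    tokens: List[int],
    seq_len: int,
    pad_token_id: int = 0,
    pad_dummy_encoder_prompt: bool = False,
) -> List[int]:
    """Return dummy encoder tokens, padding to ``seq_len`` only on request.

    Hypothetical sketch: models such as Whisper, which expect a fixed-length
    encoder input, would opt in to padding, while models such as Mllama keep
    the unpadded sequence.
    """
    if pad_dummy_encoder_prompt and len(tokens) < seq_len:
        return tokens + [pad_token_id] * (seq_len - len(tokens))
    return tokens
```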

github-actions bot commented Apr 6, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of CI tests to catch errors quickly. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify bot added the multi-modality (Related to multi-modality, #4194) label Apr 6, 2025
@DarkLight1337 (Member) commented:
Can we add a test to avoid similar regressions? Preferably using Whisper, so it can run in CI without OOM.

@DarkLight1337 (Member) commented:
cc @tjohnson31415 as well

@Isotr0py (Member, Author) commented Apr 6, 2025

> Can we add a test to avoid similar regressions? Preferably using Whisper, so it can run in CI without OOM.

Hmmm, the original issue is Mllama-specific, because the Mllama prefill/decode implementation is a bit hacky. Let me see if we can add a processor test that simulates this issue rather than loading the full model...
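
As a rough illustration of what such a lightweight check could assert, here is a sketch that mirrors the condition the new warning guards. The names `total_len` and `seq_len` follow the snippet discussed below; `check_dummy_data_len` is a hypothetical helper, not part of vLLM.

```python
def check_dummy_data_len(total_len: int, seq_len: int) -> None:
    # If the dummy multimodal data (encoder tokens plus any decoder tokens)
    # is longer than the configured sequence length, downstream profiling
    # code can fail, as in the linked Mllama KV-cache profiling issue.
    assert total_len <= seq_len, (
        f"dummy data is {total_len} tokens but the sequence length "
        f"is only {seq_len}")
```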

Signed-off-by: Isotr0py <[email protected]>
Comment on lines +12 to +16
```python
@pytest.mark.parametrize("model_id",
                         ["meta-llama/Llama-3.2-11B-Vision-Instruct"])
@pytest.mark.parametrize("max_model_len", [4096, 8192, 25600, 131072])
@pytest.mark.parametrize("max_num_seqs", [1, 2, 8])
def test_profiling(
```
@Isotr0py (Member, Author) commented:


Confirmed that this test fails on the main branch.

@DarkLight1337 (Member) left a comment:


Thanks for fixing!

@DarkLight1337 enabled auto-merge (squash) April 7, 2025 02:08
@github-actions bot added the ready (ONLY add when PR is ready to merge/full CI is needed) label Apr 7, 2025
@DarkLight1337 merged commit fc0f877 into vllm-project:main Apr 7, 2025
61 checks passed
@Isotr0py deleted the missing-warnings branch April 7, 2025 05:21
```python
# Encoder-decoder multimodal models only support v0
if total_len > seq_len:
    # `max_num_batched_tokens` is defined by `SchedulerConfig`
    logger.warning(
```
A Collaborator commented:


I think this warning should only get printed once. As it is, it is just too noisy.

A Collaborator commented:


Like #16193 probably
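
A minimal sketch of the print-once idea, using only the standard library. vLLM has its own logging utilities, so this illustrates the pattern rather than the project's actual helper.

```python
import logging
from functools import lru_cache

logger = logging.getLogger(__name__)


@lru_cache(maxsize=None)
def warn_once(msg: str) -> None:
    # lru_cache memoizes on the message string, so each distinct warning is
    # emitted only once per process instead of once per profiling iteration.
    logger.warning(msg)
```

Calling `warn_once(...)` inside the profiling loop would then log the message a single time, no matter how many sequences are profiled.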

lengrongfu pushed a commit to lengrongfu/vllm that referenced this pull request Apr 7, 2025
yangw-dev pushed a commit to yangw-dev/vllm that referenced this pull request Apr 21, 2025
lk-chen pushed a commit to lk-chen/vllm that referenced this pull request Apr 29, 2025
RichardoMrMu pushed a commit to RichardoMrMu/vllm that referenced this pull request May 12, 2025

Labels

multi-modality: Related to multi-modality (#4194)
ready: ONLY add when PR is ready to merge/full CI is needed

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[Bug]: mllama AssertionError during kv cache profiling

3 participants