
Conversation

@DarkLight1337
Member

@DarkLight1337 DarkLight1337 commented Aug 16, 2025

Purpose

Currently, both P0 and P1 store the multi-modal processor outputs. This PR changes that so only one process needs to store them, roughly halving the overall memory usage; a minimal sketch of the two cache roles follows the list below.

  • If IPC is enabled, P0 uses MultiModalProcessorSenderCache (which stores only the hashes and metadata of processor outputs), while P1 uses MultiModalReceiverCache (which stores both the hashes and the processor outputs).
  • If IPC is disabled, P0 uses MultiModalProcessorOnlyCache (which stores both the hashes and the processor outputs), while P1 performs no caching.
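
To make the split concrete, here is a minimal, self-contained sketch of the idea. It is not the vLLM implementation: the real interfaces live in vllm.multimodal.cache, and every name below (ProcessedItem, SenderCacheSketch, ReceiverCacheSketch, get_and_update, the LRU helper) is a hypothetical stand-in chosen only to illustrate how a key-only cache in P0 can pair with a full cache in P1.

```python
# Illustrative sketch only -- not the vLLM implementation. The real cache
# interfaces live in vllm.multimodal.cache; every name below is a stand-in.
from collections import OrderedDict
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class ProcessedItem:
    """Stand-in for one multi-modal processor output (e.g. image features)."""
    data: Any


class _LRU:
    """Minimal LRU bookkeeping shared by both cache roles."""

    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
        self._entries: "OrderedDict[str, Optional[ProcessedItem]]" = OrderedDict()

    def contains(self, key: str) -> bool:
        if key in self._entries:
            self._entries.move_to_end(key)
            return True
        return False

    def put(self, key: str, value: Optional[ProcessedItem]) -> None:
        self._entries[key] = value
        self._entries.move_to_end(key)
        while len(self._entries) > self.capacity:
            self._entries.popitem(last=False)

    def get(self, key: str) -> Optional[ProcessedItem]:
        if key in self._entries:
            self._entries.move_to_end(key)
        return self._entries.get(key)


class SenderCacheSketch:
    """P0 role: remembers only which hashes P1 already holds (key-only)."""

    def __init__(self, capacity: int) -> None:
        self._seen = _LRU(capacity)

    def get_and_update(self, item: ProcessedItem, mm_hash: str) -> Optional[ProcessedItem]:
        if self._seen.contains(mm_hash):
            return None                  # P1 already has it: ship only the hash
        self._seen.put(mm_hash, None)    # record the key, never the payload
        return item


class ReceiverCacheSketch:
    """P1 role: stores the actual processor outputs keyed by hash."""

    def __init__(self, capacity: int) -> None:
        self._items = _LRU(capacity)

    def get_and_update(self, item: Optional[ProcessedItem], mm_hash: str) -> ProcessedItem:
        if item is None:
            cached = self._items.get(mm_hash)
            assert cached is not None, "sender/receiver caches out of sync"
            return cached
        self._items.put(mm_hash, item)
        return item
```

In this toy model the heavy payload lives in exactly one process: on a repeated hash, P0 ships only the hash over IPC and P1's receiver cache restores the item, which is the memory saving this PR is after. When IPC is disabled there is no receiver side, so a single full cache in P0 (the MultiModalProcessorOnlyCache role in the list above) plays both parts.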

Key changes:

  • Defined new cache interfaces inside vllm.multimodal.cache; the old definitions in v1.engine.mm_input_cache have been removed.
  • Moved the processor cache and its corresponding clear-cache method from MultiModalRegistry and Processor into the InputPreprocessor class.
  • The P0 cache update is now performed inside BaseMultiModalProcessor instead of the Processor class.
  • The processor cache now has to be created explicitly in the model runner in order to perform profiling.

Test Plan

Added simple tests to check the interface of BaseMultiModalCache.
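
The test code is not reproduced in this description; as a rough illustration of what an interface round-trip check could look like, here is a hypothetical pytest-style sketch that reuses the toy SenderCacheSketch/ReceiverCacheSketch classes from the sketch above. It is not the actual test added in this PR.

```python
# Hypothetical round-trip check, assuming the toy classes from the earlier
# sketch are defined in the same module. Not the test added in this PR.
def test_sender_receiver_round_trip():
    sender = SenderCacheSketch(capacity=4)
    receiver = ReceiverCacheSketch(capacity=4)
    item = ProcessedItem(data="image-features")

    # First occurrence: the payload travels to P1 and is cached there.
    sent = sender.get_and_update(item, mm_hash="abc")
    assert sent is item
    assert receiver.get_and_update(sent, mm_hash="abc").data == "image-features"

    # Repeat occurrence: P0 sends only the hash; P1 restores the payload.
    sent = sender.get_and_update(item, mm_hash="abc")
    assert sent is None
    assert receiver.get_and_update(None, mm_hash="abc").data == "image-features"
```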

Test Result

The new tests pass.

(Optional) Documentation Update

Updated docs/configuration/optimization.md.


Signed-off-by: DarkLight1337 <[email protected]>
@DarkLight1337 DarkLight1337 moved this to In Progress in Multi-modality Core Aug 16, 2025
@DarkLight1337 DarkLight1337 added the multi-modality (Related to multi-modality, #4194) label Aug 16, 2025
@mergify mergify bot added the llama (Related to Llama models), v1, and tpu (Related to Google TPUs) labels Aug 16, 2025
@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, which covers a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a significant refactoring of the multimodal input caching mechanism. The core change is the introduction of the CachedMultiModalInputExchanger abstraction, which separates caching logic for the frontend (P0) and the core engine (P1). This enables a "key-only" cache in P0, reducing its memory footprint. The changes are extensive, touching on core engine logic, model implementations, and tests. While the overall refactoring appears to be a solid improvement, I have identified two critical bugs in the implementation that could lead to runtime errors and incorrect caching behavior. These issues need to be addressed to ensure the stability and correctness of the new caching system.

@DarkLight1337 DarkLight1337 changed the title [Core] Use key-only cache in P0 [Core] Use key-only cache for BaseMultiModalProcessor in P0 Aug 16, 2025
@DarkLight1337 DarkLight1337 changed the title [Core] Use key-only cache for BaseMultiModalProcessor in P0 [Core] Use key-only cache for BaseMultiModalProcessor Aug 16, 2025
@mergify mergify bot added the documentation (Improvements or additions to documentation) label Aug 16, 2025
@DarkLight1337 DarkLight1337 moved this from In Progress to Blocked in Multi-modality Core Aug 16, 2025
@DarkLight1337 DarkLight1337 moved this from Blocked to Pinned in Multi-modality Core Aug 16, 2025
@DarkLight1337 DarkLight1337 moved this from Pinned to In Progress in Multi-modality Core Aug 16, 2025
Member

@Isotr0py Isotr0py left a comment


LGTM!

@mergify

mergify bot commented Aug 26, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @DarkLight1337.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Aug 26, 2025
@mergify mergify bot removed the needs-rebase label Aug 26, 2025
@DarkLight1337 DarkLight1337 merged commit 69244e6 into vllm-project:main Aug 27, 2025
40 checks passed
@DarkLight1337 DarkLight1337 deleted the mm-cache-interface branch August 27, 2025 06:19
@github-project-automation github-project-automation bot moved this from In Progress to Done in Multi-modality Core Aug 27, 2025
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 28, 2025
xiao-llm pushed a commit to xiao-llm/vllm that referenced this pull request Aug 28, 2025
zhewenl pushed a commit to zhewenl/vllm that referenced this pull request Aug 28, 2025
zhewenl pushed a commit to zhewenl/vllm that referenced this pull request Sep 3, 2025
FeiDaLI pushed a commit to FeiDaLI/vllm that referenced this pull request Sep 25, 2025

Labels

  • deepseek — Related to DeepSeek models
  • documentation — Improvements or additions to documentation
  • llama — Related to Llama models
  • multi-modality — Related to multi-modality (#4194)
  • qwen — Related to Qwen models
  • ready — ONLY add when PR is ready to merge/full CI is needed
  • tpu — Related to Google TPUs
  • v1

Projects

Status: Done
