
Conversation

@therealnaveenkamal (Contributor) commented on Sep 17, 2025

Purpose

This PR implements the first step of #24620 by separating Multi-Head Latent Attention into its own dedicated AttentionLayerBase subclass.
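
For orientation, a minimal sketch of the intended split is below. It only illustrates the idea of an MLA-specific AttentionLayerBase subclass; the constructor arguments and forward signature are assumptions for this sketch, not the code added in this PR.

```python
# Sketch only: an MLA layer pulled out of the generic Attention class into its
# own AttentionLayerBase subclass. Argument names are illustrative assumptions.
import torch
import torch.nn as nn


class AttentionLayerBase(nn.Module):
    """Stand-in for vLLM's attention layer base class."""


class MLAAttention(AttentionLayerBase):
    """Multi-Head Latent Attention, scoped like the base Attention layer:
    core attention only, with no rotary embedding or output projection."""

    def __init__(self, num_heads: int, head_dim: int, kv_lora_rank: int,
                 qk_rope_head_dim: int):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = head_dim
        self.kv_lora_rank = kv_lora_rank          # compressed (latent) KV dim
        self.qk_rope_head_dim = qk_rope_head_dim  # rotary portion of q/k

    def forward(self, q: torch.Tensor, kv_c_normed: torch.Tensor,
                k_pe: torch.Tensor) -> torch.Tensor:
        # The real layer dispatches to an MLA-capable attention backend and
        # reads/writes the compressed KV cache; omitted in this sketch.
        raise NotImplementedError
```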


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small, essential subset of tests to catch errors quickly.

You can ask your reviewers to trigger select CI tests on top of fastcheck CI.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai.

🚀

@therealnaveenkamal changed the title from "Separate MLAAttention class from, Attention (needs Review)" to "Separate MLAAttention class from Attention (needs Review)" on Sep 17, 2025
@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request refactors the Multi-Head Latent Attention (MLA) logic out of the generic Attention class and into a new, dedicated MLAAttention class. This is a good step towards better code organization and separation of concerns. The changes in vllm/attention/layer.py and vllm/model_executor/layers/mla.py correctly remove the old MLA logic and adopt the new class. However, the new MLAAttention class in vllm/model_executor/layers/mla_attention.py has critical implementation issues. It fails to properly instantiate and call the attention backend, and it lacks the necessary integration with the KV cache and attention metadata management. These issues will prevent the MLA feature from functioning. I've left detailed comments on how to address these critical problems.

@ProExpertProg (Collaborator) left a comment

A few minor notes

@mergify bot added the deepseek (Related to DeepSeek models) label on Sep 19, 2025
@therealnaveenkamal (Contributor, Author) commented:

@ProExpertProg I'm working on the unified_mla_attention ops. How would you like them structured? Any input would be helpful.

@ProExpertProg (Collaborator) commented:

Yeah, to start they can just mimic the unified_attention and unified_attention_with_output ops. Also, please keep the existing MLAAttentionWrapper as is and make the new MLAAttention layer the same in scope as Attention (no rope, no o_proj, etc.).
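
For illustration, the sketch below mirrors the shape such an op pair could take, following the unified_attention naming suggested above. The _MLA_LAYERS registry, the argument names, and the layer.impl.forward call are stand-in assumptions for however the real layer and backend are resolved; the actual ops are also registered with vLLM's custom-op machinery, which is omitted here.

```python
# Sketch only: MLA variants shaped like unified_attention and
# unified_attention_with_output. The registry below is a hypothetical
# stand-in for the runtime lookup of the layer by name.
from typing import Any

import torch

_MLA_LAYERS: dict[str, Any] = {}  # hypothetical: layer_name -> MLAAttention layer


def unified_mla_attention(
    q: torch.Tensor,
    kv_c_normed: torch.Tensor,
    k_pe: torch.Tensor,
    layer_name: str,
) -> torch.Tensor:
    # Out-of-place variant: resolve the layer and return the output produced
    # by its backend implementation.
    layer = _MLA_LAYERS[layer_name]
    return layer.impl.forward(layer, q, kv_c_normed, k_pe)


def unified_mla_attention_with_output(
    q: torch.Tensor,
    kv_c_normed: torch.Tensor,
    k_pe: torch.Tensor,
    output: torch.Tensor,
    layer_name: str,
) -> None:
    # In-place variant: the backend writes its result into `output`,
    # mirroring unified_attention_with_output.
    layer = _MLA_LAYERS[layer_name]
    layer.impl.forward(layer, q, kv_c_normed, k_pe, output=output)
```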

@therealnaveenkamal (Contributor, Author) commented:

Hi @ProExpertProg, thanks for the feedback.

I've added the unified_mla_attention and unified_mla_attention_with_output ops, which mimic the existing unified attention ops.

The MLAAttention layer has been created in mla.py; it is scoped similarly to the base Attention layer and does not handle projections or rotary embeddings.

The MultiHeadLatentAttentionWrapper uses the new MLAAttention layer to handle the core attention logic.

Let me know what you think. Thanks.
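
As a rough usage sketch of that composition (reusing the hypothetical MLAAttention shape from the earlier sketch; projection sizes and the forward signature are assumptions, not the merged wrapper code):

```python
# Sketch only: the wrapper keeps projections (and, in the real model, rotary
# embeddings) to itself and delegates just the core attention computation.
# MLAAttention refers to the hypothetical layer sketched earlier in this thread.
import torch
import torch.nn as nn


class MultiHeadLatentAttentionWrapper(nn.Module):
    def __init__(self, hidden_size: int, num_heads: int, head_dim: int,
                 kv_lora_rank: int, qk_rope_head_dim: int):
        super().__init__()
        self.q_proj = nn.Linear(hidden_size, num_heads * head_dim)
        self.o_proj = nn.Linear(num_heads * head_dim, hidden_size)
        # Core attention only: no rope and no o_proj inside the layer itself.
        self.mla_attn = MLAAttention(num_heads, head_dim, kv_lora_rank,
                                     qk_rope_head_dim)

    def forward(self, hidden_states: torch.Tensor,
                kv_c_normed: torch.Tensor,
                k_pe: torch.Tensor) -> torch.Tensor:
        q = self.q_proj(hidden_states)
        attn_out = self.mla_attn(q, kv_c_normed, k_pe)
        return self.o_proj(attn_out)
```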


mergify bot commented Sep 23, 2025

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @therealnaveenkamal.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify bot added the needs-rebase label on Sep 23, 2025
@mergify bot removed the needs-rebase label on Sep 24, 2025
@therealnaveenkamal changed the title from "Separate MLAAttention class from Attention (needs Review)" to "Separate MLAAttention class from Attention" on Sep 24, 2025
@therealnaveenkamal (Contributor, Author) commented:

@ProExpertProg I've resolved all the comments. Please let me know if I need to make any changes.

@ProExpertProg (Collaborator) commented:

Can you fix pre-commit, please?

@ProExpertProg enabled auto-merge (squash) on October 7, 2025 at 21:54
@github-actions bot added the ready (ONLY add when PR is ready to merge/full CI is needed) label on Oct 7, 2025
Signed-off-by: Naveenraj Kamalakannan <[email protected]>
auto-merge was automatically disabled October 7, 2025 22:03

Head branch was pushed to by a user without write access

@ProExpertProg enabled auto-merge (squash) on October 7, 2025 at 22:24
@ProExpertProg removed the ready label on Oct 7, 2025
@ProExpertProg disabled auto-merge on October 7, 2025 at 23:01
@ProExpertProg added the ready label on Oct 8, 2025
@ProExpertProg (Collaborator) left a comment

Just one remaining nit

Comment on lines +126 to +127:

# Initialize post-load attention weights for both Attention and MLA.
# NOTE: Happens after other modules so we can easily decompress weights.

Collaborator comment: Nice find!


mergify bot commented Oct 8, 2025

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @therealnaveenkamal.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify bot added the needs-rebase label on Oct 8, 2025
@mergify bot removed the needs-rebase label on Oct 8, 2025
@ProExpertProg enabled auto-merge (squash) on October 8, 2025 at 19:23
@simon-mo disabled auto-merge on October 9, 2025 at 00:11
@simon-mo merged commit e614ab7 into vllm-project:main on Oct 9, 2025
57 of 59 checks passed
mrasquinha-g pushed a commit to mrasquinha-g/vllm that referenced this pull request Oct 9, 2025
Signed-off-by: Naveenraj Kamalakannan <[email protected]>
Signed-off-by: Luka Govedič <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
zhiyuan1i pushed a commit to zhiyuan1i/vllm that referenced this pull request Oct 9, 2025
Signed-off-by: Naveenraj Kamalakannan <[email protected]>
Signed-off-by: Luka Govedič <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
845473182 pushed a commit to dsxsteven/vllm_splitPR that referenced this pull request Oct 10, 2025
…to loader

* 'loader' of https://github.com/dsxsteven/vllm_splitPR: (778 commits)
  [torchao] Add support for ModuleFqnToConfig using regex (vllm-project#26001)
  Add: Support for multiple hidden layers in Eagle3 (vllm-project#26164)
  Enable `RMSNorm` substitution for Transformers backend (vllm-project#26353)
  [Model] Gemma3: Fix GGUF loading and quantization (vllm-project#26189)
  Bump Flashinfer to v0.4.0 (vllm-project#26326)
  Update Dockerfile and install runai-model-streamer[gcs] package (vllm-project#26464)
  [Core] Relax the LoRA  max rank (vllm-project#26461)
  [CI/Build] Fix model nightly tests (vllm-project#26466)
  [Hybrid]: Decouple Kernel Block Size from KV Page Size (vllm-project#24486)
  [Core][KVConnector] Propagate all tokens on resumed preemptions (vllm-project#24926)
  [MM][Doc] Add documentation for configurable mm profiling (vllm-project#26200)
  [Hardware][AMD] Enable FlexAttention backend on ROCm (vllm-project#26439)
  [Bugfix] Incorrect another MM data format in vllm bench throughput (vllm-project#26462)
  [Bugfix] Catch and log invalid token ids in detokenizer #2 (vllm-project#26445)
  [Minor] Change warning->warning_once in preprocess (vllm-project#26455)
  [Bugfix] Set the minimum python version for gpt-oss (vllm-project#26392)
  [Misc] Redact ray runtime env before logging (vllm-project#26302)
  Separate MLAAttention class from Attention (vllm-project#25103)
  [Attention] Register FLASHMLA_SPARSE (vllm-project#26441)
  [Kernels] Modular kernel refactor (vllm-project#24812)
  ...
xuebwang-amd pushed a commit to xuebwang-amd/vllm that referenced this pull request Oct 10, 2025
Signed-off-by: Naveenraj Kamalakannan <[email protected]>
Signed-off-by: Luka Govedič <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
Signed-off-by: xuebwang-amd <[email protected]>
Dhruvilbhatt pushed a commit to Dhruvilbhatt/vllm that referenced this pull request Oct 14, 2025
Signed-off-by: Naveenraj Kamalakannan <[email protected]>
Signed-off-by: Luka Govedič <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
Signed-off-by: Dhruvil Bhatt <[email protected]>

Labels

  • deepseek (Related to DeepSeek models)
  • ready (ONLY add when PR is ready to merge/full CI is needed)
  • speculative-decoding
  • tpu (Related to Google TPUs)
  • v1

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[Refactor]: Make an common MLAAttention Layer and custom OP

7 participants