
@ZhiweiYan-96 ZhiweiYan-96 commented Nov 7, 2025

Purpose

To further exploit the memory and compute advantages of MXFP4 inference, this PR completes the MLA FP4 compute path by replacing the current FP8 bmm with an FP4 bmm, which should yield a higher speedup.

Details of Weight Unpacking

After loading, the kv_b_proj weight has shape [num_heads * (qk_nope_head_dim + v_head_dim), kv_lora_rank // 2]. The kv_lora_rank // 2 reflects two FP4 elements packed into one uint8 byte. The accompanying scale has shape [num_heads * (qk_nope_head_dim + v_head_dim), kv_lora_rank // 32]. A nibble pack/unpack sketch follows.
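For reference, a minimal sketch of the FP4 byte packing, assuming the low nibble holds the first element of each pair (the actual kernel's nibble order may differ); `unpack_fp4` and `pack_fp4` are illustrative helpers, not functions from this repo:

```python
import torch

def unpack_fp4(packed: torch.Tensor) -> torch.Tensor:
    # packed: [..., K // 2] uint8, two FP4 bit patterns per byte.
    low = packed & 0x0F           # assumed first element of each pair
    high = (packed >> 4) & 0x0F   # assumed second element
    # Interleave so element order matches the unpacked K dimension: [..., K].
    return torch.stack((low, high), dim=-1).flatten(start_dim=-2)

def pack_fp4(nibbles: torch.Tensor) -> torch.Tensor:
    # Inverse of unpack_fp4: pack adjacent 4-bit values back into uint8.
    pairs = nibbles.view(*nibbles.shape[:-1], -1, 2)
    return (pairs[..., 0] | (pairs[..., 1] << 4)).to(torch.uint8)
```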

The target layout for the unpacked weights is determined by the k_up_proj and v_up_proj compute steps.

W_K and W_K_scale

For k_up_proj, the high-precision (fp8/bf16) computation is decode_q_nope @ W_K, with shapes

[num_heads, batch_size, qk_nope_head_dim] @ [num_heads, qk_nope_head_dim, kv_lora_rank]

From the GEMM kernel's perspective, the weight layout is [B, N, K // 2]. Hence, the PR unpacks kv_b_proj, splits out the W_K portion, and reorders it to [num_heads, kv_lora_rank, qk_nope_head_dim // 2]. The corresponding W_K_scale has shape [num_heads, kv_lora_rank, qk_nope_head_dim // 32], where 32 is the group size defined by the OCP MX specification. See the sketch below.
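A shape-level sketch of the W_K reorder, using hypothetical `dequant_mxfp4` / `quant_mxfp4` helpers (not actual functions from this repo). Because the 32-element scale groups must run along the GEMM K dimension, and the reorder moves K from kv_lora_rank to qk_nope_head_dim, a plain bit-level transpose of the packed bytes would not preserve the grouping; dequantizing and requantizing along the new K axis is one way to realize it:

```python
# Illustrative only: dequant_mxfp4 / quant_mxfp4 are hypothetical helpers.
w = kv_b_proj_weight.view(num_heads, qk_nope_head_dim + v_head_dim, -1)
s = kv_b_proj_scale.view(num_heads, qk_nope_head_dim + v_head_dim, -1)

# Split out the W_K rows and dequantize to high precision:
w_k_hp = dequant_mxfp4(w[:, :qk_nope_head_dim], s[:, :qk_nope_head_dim])
# -> [num_heads, qk_nope_head_dim, kv_lora_rank]

# Transpose so the GEMM sees [B, N, K] = [num_heads, kv_lora_rank, qk_nope_head_dim],
# then requantize with groups of 32 along the new K axis:
w_k_hp = w_k_hp.transpose(1, 2).contiguous()
W_K, W_K_scale = quant_mxfp4(w_k_hp, group_size=32)
# W_K:       [num_heads, kv_lora_rank, qk_nope_head_dim // 2]  (packed uint8)
# W_K_scale: [num_heads, kv_lora_rank, qk_nope_head_dim // 32]
```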

W_V and W_V_scale

W_V and W_V_scale are used in the v_up_proj step. The high-precision formula can be described as x @ W_V, with shapes

[num_heads, batch_size, kv_lora_rank] @ [num_heads, kv_lora_rank, v_head_dim]

From the GEMM kernel's perspective, W_V should have the packed shape [num_heads, v_head_dim, kv_lora_rank // 2], and the corresponding scale has shape [num_heads, v_head_dim, kv_lora_rank // 32], as sketched below.
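Because kv_lora_rank remains the GEMM K dimension here, the 32-wide scale groups already run along the right axis, so (under the same assumptions as the sketch above) W_V should fall out as a direct slice, with no transpose or requantization:

```python
# Continuing the sketch above: W_V keeps kv_lora_rank as the K axis,
# so the packed bytes and scales can be sliced out directly.
W_V = w[:, qk_nope_head_dim:, :]        # [num_heads, v_head_dim, kv_lora_rank // 2]
W_V_scale = s[:, qk_nope_head_dim:, :]  # [num_heads, v_head_dim, kv_lora_rank // 32]

# v_up_proj then runs as an FP4 bmm:
#   [num_heads, batch_size, kv_lora_rank] x W_V -> [num_heads, batch_size, v_head_dim]
```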

Test Plan

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

@ZhiweiYan-96 ZhiweiYan-96 changed the title Enable FP4 bmm for k_up_proj and v_up_proj in MLA [DO NOT MERGE]Enable FP4 bmm for k_up_proj and v_up_proj in MLA Nov 7, 2025