Commit a26e777

Update deepseek modeling file for PatchedVLLMKVCache (#1009)
Previously, when using INC to convert the DeepSeek FP8 model, we needed this [commit](intel/neural-compressor@7c0a3e2) to remove the extra converts around the KVCache, even though, in theory, GC can remove them during graph optimization. Furthermore, that change is not aligned with the design of the INC patched module, which deliberately keeps the returned tensor in BF16 because it cannot know what the user's next operation will be. This PR therefore updates the modeling file so that GC can optimize the patched KVCache pattern of the DeepSeek model.

Since the next release is very close and GC currently does not work as expected during the decode stage, this is still a workaround; we will root-cause the issue and fix it at the source in the next release. This PR should land together with intel/neural-compressor#2165.

Signed-off-by: Mengni Wang <[email protected]>
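For background, here is a minimal sketch of the patched KV-cache design described above. The class and helper names are illustrative assumptions for exposition, not the actual neural-compressor implementation: the patched module quantizes the incoming tensor to FP8 before writing it into the cache, but dequantizes whatever it returns back to BF16, since it cannot be sure the caller's next operation consumes FP8.

```python
import torch

class PatchedKVCacheSketch(torch.nn.Module):
    """Illustrative stand-in for an INC patched KV-cache module.

    All names here are assumptions for exposition; see the INC sources
    for the real PatchedVLLMKVCache.
    """

    def __init__(self, orig_cache, quant_to_fp8, dequant_to_bf16):
        super().__init__()
        self.orig_cache = orig_cache              # wrapped vLLM cache-write op
        self.quant_to_fp8 = quant_to_fp8          # BF16 -> FP8 quantize op
        self.dequant_to_bf16 = dequant_to_bf16    # FP8 -> BF16 dequantize op

    def forward(self, input, cache, block_indices, block_offsets):
        # The cache itself is kept in FP8 to save memory and bandwidth ...
        qinput = self.quant_to_fp8(input)
        output = self.orig_cache(qinput, cache, block_indices, block_offsets)
        # ... but the returned tensor is dequantized to BF16, because the
        # patched module cannot know whether the caller's next op wants FP8.
        return self.dequant_to_bf16(output)
```

Under this design, a caller that feeds the returned BF16 tensor back into an FP8 consumer creates a dequantize/quantize pair that GC is expected to fold away during graph optimization, which is why the extra-convert removal in the linked commit should not be necessary.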
1 parent 109ac5d · commit a26e777

File tree

1 file changed (+4 additions, -4 deletions)

vllm/attention/backends/hpu_attn.py

Lines changed: 4 additions & 4 deletions
```diff
@@ -430,20 +430,20 @@ def forward(
         # write the latent and rope to kv cache
         if kv_cache is not None and len(kv_cache) == 2:
             if not self.VLLM_USE_FP8_MATMUL:
-                k_cache = self.latent_cache_k(latent_vec_k, kv_cache[0],
-                                              block_indices, block_offsets)
+                self.latent_cache_k(latent_vec_k, kv_cache[0],
+                                    block_indices, block_offsets)
+                k_cache = kv_cache[0]
             else:
                 k_cache = self.latent_cache_k_nodeq(latent_vec_k, kv_cache[0],
                                                     block_indices,
                                                     block_offsets)
             v_cache = None
-            kv_cache = (k_cache, v_cache)

         if is_prefill:
             return self._forward_prefill(q, k_c_normed, k_pe, attn_metadata,
                                          batch_size)
         else:
-            return self._forward_decode(q_nope, q_pe, kv_cache, attn_metadata,
+            return self._forward_decode(q_nope, q_pe, (k_cache, v_cache), attn_metadata,
                                         batch_size)

     def _forward_prefill(self, q: torch.Tensor, k_c_normed: torch.Tensor,
```
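As a concrete illustration of why the rewrite matters, the toy sketch below mimics the before/after consumption patterns. It is only a sketch: `float16` stands in for the real FP8 cache dtype, and `mock_latent_cache_k` is a hypothetical stand-in for the patched module, which (per the description above) returns a dequantized BF16 tensor.

```python
import torch

CACHE_DTYPE = torch.float16    # toy stand-in for the real FP8 cache dtype
COMPUTE_DTYPE = torch.bfloat16

def mock_latent_cache_k(latent, cache, block_indices, block_offsets):
    # Write the latent into the cache in place (the op's side effect) ...
    cache[block_indices, block_offsets] = latent.to(CACHE_DTYPE)
    # ... and, like the INC patched module, return a dequantized BF16 tensor.
    return cache.to(COMPUTE_DTYPE)

cache = torch.zeros(4, 8, 16, dtype=CACHE_DTYPE)
latent = torch.randn(2, 16, dtype=COMPUTE_DTYPE)
idx, off = torch.tensor([0, 1]), torch.tensor([3, 5])

# Before: decode consumed the dequantized return value, so an extra
# convert sat between the cache and the decode computation.
k_cache = mock_latent_cache_k(latent, cache, idx, off)
assert k_cache.dtype == COMPUTE_DTYPE

# After: call the patched op only for its in-place write, then read the
# cache tensor directly; decode now sees the low-precision cache itself,
# a pattern the graph compiler can recognize and optimize.
mock_latent_cache_k(latent, cache, idx, off)
k_cache = cache
assert k_cache.dtype == CACHE_DTYPE
```

Building `(k_cache, v_cache)` at the `_forward_decode` call site instead of rebinding `kv_cache` is functionally equivalent to the old code, but keeps the original `kv_cache` argument untouched.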
