
CUDA: fix logic for clearing padding with -ngl 0 #13320


Conversation

JohannesGaessler (Collaborator)

Fixes #13305.

The problem is that in ggml_cuda_op_mul_mat the padding of temporary compute buffers is cleared just prior to kernel execution, but when I wrote code to directly launch MMVQ and MMQ I forgot to add the corresponding functionality; this PR simply adds it. The changes in MMVQ should not actually be invoked unless the minimum number of tokens for evaluating the model on the GPU is lowered.
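For context, a minimal sketch of the padding-clearing pattern this PR adds before the direct MMQ/MMVQ launches (variable names follow the snippets quoted in the review comments below; the explicit stream argument reflects the fix requested there, so treat this as illustrative rather than the exact diff):

// Zero the padding bytes between the end of the tensor data and the end of
// the allocation so the quantized kernels can safely read past the last row.
// `stream` is assumed to be the CUDA stream the kernel is launched on.
const size_t size_data  = ggml_nbytes(src0);
const size_t size_alloc = ggml_backend_buffer_get_alloc_size(src0->buffer, src0);
if (size_alloc > size_data) {
    CUDA_CHECK(cudaMemsetAsync((char *) src0->data + size_data, 0, size_alloc - size_data, stream));
}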

Prior to #13199 the logic for clearing the padding was already wrong, but in a different way: instead of failing to clear padding when it ought to be cleared, it could clear valid tensor data, because slices of src0 were passed to ggml_cuda_op_mul_mat. I did not touch that logic; now that MUL_MAT_ID can be handled in a single kernel launch it should never be invoked (I also added an assert).
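The assert guarding that now-dead path is presumably the one quoted in the crash report later in this thread; as a sketch of its intent:

// Quoted from the failure report below (ggml-cuda.cu:2073): whenever MMQ would
// be used, ne00 must already be a multiple of MATRIX_ROW_PADDING, i.e. the old
// src0-slicing path must never be reached.
GGML_ASSERT(!ggml_cuda_should_use_mmq(src0->type, cc, ne11) || ne00 % MATRIX_ROW_PADDING == 0);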

github-actions bot added the Nvidia GPU (Issues specific to Nvidia GPUs) and ggml (changes relating to the ggml tensor library for machine learning) labels on May 5, 2025
const size_t size_data = ggml_nbytes(src0);
const size_t size_alloc = ggml_backend_buffer_get_alloc_size(src0->buffer, src0);
if (size_alloc > size_data) {
    CUDA_CHECK(cudaMemsetAsync((char *) src0->data + size_data, 0, size_alloc - size_data));
This is missing the stream parameter.
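A sketch of the corrected call (the exact stream variable depends on the surrounding function; the same fix applies to the second occurrence flagged below):

// Pass the stream explicitly so the memset is ordered with the subsequent
// kernel launch instead of running on the default stream.
CUDA_CHECK(cudaMemsetAsync((char *) src0->data + size_data, 0, size_alloc - size_data, stream));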

const size_t size_data = ggml_nbytes(src0);
const size_t size_alloc = ggml_backend_buffer_get_alloc_size(src0->buffer, src0);
if (size_alloc > size_data) {
    CUDA_CHECK(cudaMemsetAsync((char *) src0->data + size_data, 0, size_alloc - size_data));

Also here.

JohannesGaessler force-pushed the cuda-fix-deepseek-partial-offload branch from 4634789 to 108dfde on May 5, 2025 19:34
JohannesGaessler force-pushed the cuda-fix-deepseek-partial-offload branch from 108dfde to ac78a42 on May 5, 2025 19:35
JohannesGaessler merged commit 9070365 into ggml-org:master on May 5, 2025
46 checks passed
CISC (Collaborator) commented on May 6, 2025

@JohannesGaessler test-backend-ops crashes now (see also #13329):

/home/ggml/work/llama.cpp/ggml/src/ggml-cuda/ggml-cuda.cu:2073: GGML_ASSERT(!ggml_cuda_should_use_mmq(src0->type, cc, ne11) || ne00 % MATRIX_ROW_PADDING == 0) failed

gabe-l-hart added a commit to gabe-l-hart/llama.cpp that referenced this pull request May 6, 2025
* origin/master: (27 commits)
llama : fix build_ffn without gate (ggml-org#13336)
CUDA: fix bad asserts for partial offload (ggml-org#13337)
convert : qwen2/3moe : set yarn metadata if present (ggml-org#13331)
CUDA: fix --split-mode row for MMQ (ggml-org#13323)
gguf-py : avoid requiring pyside6 for other scripts (ggml-org#13036)
CUDA: fix logic for clearing padding with -ngl 0 (ggml-org#13320)
sampling : Integrate Top-nσ into main sampling chain (and add it to the server) (ggml-org#13264)
server : Webui - change setText command from parent window to also send the message. (ggml-org#13309)
mtmd : rename llava directory to mtmd (ggml-org#13311)
clip : fix confused naming ffn_up and ffn_down (ggml-org#13290)
convert : bailingmoe : set yarn metadata if present (ggml-org#13312)
SYCL: Disable mul_mat kernels for noncontiguous tensor b (ggml-org#13308)
mtmd : add C public API (ggml-org#13184)
rpc : use backend registry, support dl backends (ggml-org#13304)
ggml : activate s390x simd for Q3_K (ggml-org#13301)
llava/mtmd : fixes to fully support dl backends (ggml-org#13303)
llama : build windows releases with dl backends (ggml-org#13220)
CUDA: fix race condition in MMQ stream-k fixup (ggml-org#13299)
CUDA: fix race condition in MMQ ids_dst (ggml-org#13294)
vulkan: Additional type support for unary, binary, and copy (ggml-org#13266)
...
Nexesenex added a commit to Nexesenex/croco.cpp that referenced this pull request May 9, 2025
Labels: ggml (changes relating to the ggml tensor library for machine learning), Nvidia GPU (Issues specific to Nvidia GPUs)

Successfully merging this pull request may close these issues.

Eval bug: DeepSeek-R1-UD-Q2_K_XL output broken