
llama : fix build_ffn without gate #13336


Merged
ngxson merged 3 commits into master on May 6, 2025

Conversation

ngxson (Collaborator) commented May 6, 2025

While porting these build_* functions to clip.cpp, I came across the case of an FFN without a gate (which is quite common in vision transformers).

The current logic in llm_graph_context::build_ffn more or less assumes that a gate always exists (which makes sense, since most, if not all, modern LLMs use an up/gate/down FFN). If the gate is missing, the logic ends up computing cur = up_state * up_state, which produces an incorrect result.
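
For context, here is a minimal standalone sketch (plain C++, no ggml) of the parallel-gate FFN path; `ffn_par`, `up_state`, and `gate_state` are illustrative names, not the actual build_ffn code. It shows why the gate check matters: without it, the final multiply turns silu(up_state) into silu(up_state) * up_state.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

static float silu(float x) { return x / (1.0f + std::exp(-x)); }

// LLM_FFN_PAR-style path: silu(gate(x)) * up(x) when a gate exists,
// plain silu(up(x)) when it does not.
std::vector<float> ffn_par(const std::vector<float> & up_state,
                           const std::vector<float> * gate_state) {
    std::vector<float> cur = gate_state ? *gate_state : up_state;
    for (float & v : cur) v = silu(v);                 // activation
    if (gate_state) {                                  // the missing check: without it,
        for (std::size_t i = 0; i < cur.size(); ++i) { // cur would become silu(up) * up
            cur[i] *= up_state[i];
        }
    }
    return cur;
}

int main() {
    std::vector<float> up = {0.5f, -1.0f, 2.0f};
    auto no_gate   = ffn_par(up, nullptr); // correct: just silu(up)
    auto with_gate = ffn_par(up, &up);     // gated: silu(gate) * up
    for (std::size_t i = 0; i < up.size(); ++i) {
        std::printf("silu(up)=%f  silu(gate)*up=%f\n", no_gate[i], with_gate[i]);
    }
    return 0;
}
```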

I'm not sure whether the code is intended to be written this way, but I think we should either:

  • add gate to the check, as proposed in this PR, or
  • GGML_ASSERT(gate) to make sure the gate is always there.

Both options are sketched below.
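
As a hypothetical sketch (not the actual diff), the two options could look roughly like this near the LLM_FFN_PAR tail of build_ffn; the surrounding names (cur, tmp, ctx0, cb, il) are assumptions modeled on typical llama.cpp graph code.

```cpp
// Option 1: only apply the parallel gate when a gate tensor exists.
if (gate && type_gate == LLM_FFN_PAR) {
    cur = ggml_mul(ctx0, cur, tmp); // tmp holds the ffn_up result
    cb(cur, "ffn_gate_par", il);
}

// Option 2: require the gate up front instead.
GGML_ASSERT(gate && "build_ffn: parallel gate requested but no gate tensor provided");
```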

Please note that when porting this to the ViT code, I only copied the LLM_FFN_PAR case, not LLM_FFN_SEQ.

@ngxson ngxson requested review from ggerganov and slaren May 6, 2025 08:54
@ngxson ngxson merged commit 2f54e34 into master May 6, 2025
45 checks passed
gabe-l-hart added a commit to gabe-l-hart/llama.cpp that referenced this pull request May 6, 2025
* origin/master: (27 commits)
llama : fix build_ffn without gate (ggml-org#13336)
CUDA: fix bad asserts for partial offload (ggml-org#13337)
convert : qwen2/3moe : set yarn metadata if present (ggml-org#13331)
CUDA: fix --split-mode row for MMQ (ggml-org#13323)
gguf-py : avoid requiring pyside6 for other scripts (ggml-org#13036)
CUDA: fix logic for clearing padding with -ngl 0 (ggml-org#13320)
sampling : Integrate Top-nσ into main sampling chain (and add it to the server) (ggml-org#13264)
server : Webui - change setText command from parent window to also send the message. (ggml-org#13309)
mtmd : rename llava directory to mtmd (ggml-org#13311)
clip : fix confused naming ffn_up and ffn_down (ggml-org#13290)
convert : bailingmoe : set yarn metadata if present (ggml-org#13312)
SYCL: Disable mul_mat kernels for noncontiguous tensor b (ggml-org#13308)
mtmd : add C public API (ggml-org#13184)
rpc : use backend registry, support dl backends (ggml-org#13304)
ggml : activate s390x simd for Q3_K (ggml-org#13301)
llava/mtmd : fixes to fully support dl backends (ggml-org#13303)
llama : build windows releases with dl backends (ggml-org#13220)
CUDA: fix race condition in MMQ stream-k fixup (ggml-org#13299)
CUDA: fix race condition in MMQ ids_dst (ggml-org#13294)
vulkan: Additional type support for unary, binary, and copy (ggml-org#13266)
...