Vulkan IQ4_NL Support #8613

Merged: 3 commits into master on Jul 23, 2024

Conversation

@0cc4m (Collaborator) commented Jul 21, 2024


The github-actions bot added the Vulkan (Issues specific to the Vulkan backend) and ggml (changes relating to the ggml tensor library for machine learning) labels on Jul 21, 2024
@sorasoras

How much effort is needed to support IQ4_XS in addition to IQ4_NL? The Vulkan backend would be a lot more useful if IQ4_XS were supported.

@0cc4m (Collaborator, Author) commented Jul 21, 2024

> How much effort is needed to support IQ4_XS in addition to IQ4_NL? The Vulkan backend would be a lot more useful if IQ4_XS were supported.

Can you elaborate on what specific cases that would enable?

@sorasoras

> How much effort is needed to support IQ4_XS in addition to IQ4_NL? The Vulkan backend would be a lot more useful if IQ4_XS were supported.
>
> Can you elaborate on what specific cases that would enable?

IQ4_XS is commonly used in the community due to its small size and better PPL than Q4_K_M. It's a sweet spot in the GGUF quant series.

@0cc4m (Collaborator, Author) commented Jul 21, 2024

> How much effort is needed to support IQ4_XS in addition to IQ4_NL? The Vulkan backend would be a lot more useful if IQ4_XS were supported.
>
> Can you elaborate on what specific cases that would enable?
>
> IQ4_XS is commonly used in the community due to its small size and better PPL than Q4_K_M. It's a sweet spot in the GGUF quant series.

It's quite a bit of effort, but at least it's easier than the other i-quants. I can't do it now, but should be able to at some point in the not-too-distant future.
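For background, IQ4_NL packs 32 weights per block as one shared scale plus 4-bit indices into a fixed non-linear lookup table (kvalues_iq4nl in ggml-common.h), which is what the new Vulkan shaders have to decode. A minimal CPU-side sketch of the dequantization, with the block struct simplified (the real block_iq4_nl stores the scale as ggml_half, not float):

```c
#include <stdint.h>

#define QK4_NL 32

// Non-linear 4-bit lookup table from ggml-common.h (kvalues_iq4nl).
static const int8_t kvalues_iq4nl[16] = {
    -127, -104, -83, -65, -49, -35, -22, -10, 1, 13, 25, 38, 53, 69, 89, 113,
};

// Simplified view of one IQ4_NL block: a shared scale and
// 16 bytes holding two 4-bit table indices each.
typedef struct {
    float   d;              // block scale (ggml_half in the real struct)
    uint8_t qs[QK4_NL / 2]; // packed 4-bit indices
} block_iq4_nl_sketch;

// Dequantize one block into 32 floats: low nibbles fill the first half,
// high nibbles the second half, each mapped through the lookup table.
static void dequantize_iq4_nl(const block_iq4_nl_sketch * x, float * y) {
    for (int j = 0; j < QK4_NL / 2; ++j) {
        y[j]              = x->d * kvalues_iq4nl[x->qs[j] & 0xf];
        y[j + QK4_NL / 2] = x->d * kvalues_iq4nl[x->qs[j] >> 4];
    }
}
```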

@oldgithubman

Where there's a will, there's a way =;-)

@slaren (Member) commented Jul 21, 2024

While testing this I got test failures with fp16/fp32 mul mat, but it also happens on master.

Vulkan0: NVIDIA GeForce RTX 3090 Ti (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32
  MUL_MAT(type_a=f32,type_b=f32,m=71,n=82,k=367,bs=[1,1],nr=[1,1]): [MUL_MAT] NMSE = 1.791590151 > 0.000500000 FAIL
  MUL_MAT(type_a=f32,type_b=f32,m=73,n=31,k=10,bs=[1,1],nr=[1,1]): [MUL_MAT] NMSE = 1.618188214 > 0.000500000 FAIL
  MUL_MAT(type_a=f16,type_b=f32,m=42,n=85,k=77,bs=[1,1],nr=[1,1]): [MUL_MAT] NMSE = 1.844454779 > 0.000500000 FAIL
  MUL_MAT(type_a=f32,type_b=f32,m=106,n=17,k=50,bs=[1,1],nr=[1,1]): [MUL_MAT] NMSE = 1.509752347 > 0.000500000 FAIL
  MUL_MAT(type_a=f16,type_b=f32,m=80,n=110,k=345,bs=[1,1],nr=[1,1]): [MUL_MAT] NMSE = 1.739823578 > 0.000500000 FAIL
  MUL_MAT(type_a=f16,type_b=f32,m=73,n=46,k=361,bs=[1,1],nr=[1,1]): [MUL_MAT] NMSE = 1.642865213 > 0.000500000 FAIL
  MUL_MAT(type_a=f32,type_b=f32,m=18,n=27,k=153,bs=[1,1],nr=[1,1]): [MUL_MAT] NMSE = 1.802821297 > 0.000500000 FAIL
  MUL_MAT(type_a=f16,type_b=f32,m=12,n=80,k=182,bs=[1,1],nr=[1,1]): [MUL_MAT] NMSE = 1.590408512 > 0.000500000 FAIL
  MUL_MAT(type_a=f32,type_b=f32,m=110,n=42,k=6,bs=[1,1],nr=[1,1]): [MUL_MAT] NMSE = 1.487417955 > 0.000500000 FAIL
  MUL_MAT(type_a=f16,type_b=f32,m=98,n=56,k=484,bs=[1,1],nr=[1,1]): [MUL_MAT] NMSE = 0.988843480 > 0.000500000 FAIL
  MUL_MAT(type_a=f32,type_b=f32,m=8,n=22,k=223,bs=[1,1],nr=[1,1]): [MUL_MAT] NMSE = 1.419249893 > 0.000500000 FAIL
  MUL_MAT(type_a=f16,type_b=f32,m=63,n=51,k=452,bs=[1,1],nr=[1,1]): [MUL_MAT] NMSE = 1.033080833 > 0.000500000 FAIL
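For context, the NMSE printed here is the normalized mean squared error between the backend output and the reference output; a rough sketch of the metric as I understand it from tests/test-backend-ops.cpp (a paraphrase, not a verbatim copy) is:

```c
#include <stddef.h>

// Normalized mean squared error: sum of squared differences divided by the
// squared magnitude of the reference. Values around ~1.5, as in the log above,
// mean the output is essentially uncorrelated with the reference.
static double nmse(const float * a, const float * b, size_t n) {
    double mse = 0.0; // sum((a - b)^2)
    double ref = 0.0; // sum(a^2)
    for (size_t i = 0; i < n; ++i) {
        const double err = a[i] - b[i];
        mse += err * err;
        ref += (double) a[i] * a[i];
    }
    return mse / ref;
}
// The MUL_MAT tests above report FAIL when this exceeds the 0.0005 threshold.
```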

@0cc4m merged commit 751fcfc into master on Jul 23, 2024
54 checks passed
@0cc4m deleted the 0cc4m/vulkan-iq4_nl branch on Jul 23, 2024 at 08:56
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Jul 27, 2024
* Fix Vulkan matmul tests compile errors

* Add Vulkan IQ4_NL support

* Fix Vulkan DeepSeek-Coder-V2-Lite MoE support
@@ -3431,7 +3451,7 @@ static void ggml_vk_mul_mat_id_q_f16(ggml_backend_vk_context * ctx, vk_context *

     const uint64_t nei0 = ids->ne[0];
     const uint64_t nei1 = ids->ne[1];
-    GGML_ASSERT(nei0 * nei1 <= 2048);
+    GGML_ASSERT(nei0 * nei1 <= 3072);

Hi @0cc4m, can I check: what exactly is this assert testing for?

ref: LostRuins#1337


Deepseek 16B MoE (6/64 experts)

nei0 = 6
nei1 = 1024

nei0 x nei1 = 6144
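nei0 × nei1 is the number of expert-routing ids the mul_mat_id shader has to handle at once, and the assert encodes the capacity the shader is built for. Reading nei0 as experts-per-token and nei1 as tokens in the batch (my interpretation of the figures above), a tiny standalone check shows why DeepSeek 16B MoE overflows even the raised limit:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    // DeepSeek 16B MoE routes 6 of 64 experts per token.
    const uint64_t nei0 = 6;    // ids->ne[0]: experts selected per token
    const uint64_t nei1 = 1024; // ids->ne[1]: tokens in the batch
    // 6 * 1024 = 6144 > 3072, so GGML_ASSERT(nei0 * nei1 <= 3072) fires.
    printf("nei0 * nei1 = %llu (limit 3072)\n",
           (unsigned long long) (nei0 * nei1));
    return 0;
}
```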

mccoylstevens pushed a commit to mccoylstevens/llama.cpp that referenced this pull request May 15, 2025
* Fix Vulkan matmul tests compile errors

* Add Vulkan IQ4_NL support

* Fix Vulkan DeepSeek-Coder-V2-Lite MoE support
Labels
ggml: changes relating to the ggml tensor library for machine learning
Vulkan: Issues specific to the Vulkan backend
5 participants