ggml : fix I8MM Q4_1 scaling factor conversion #10562
Conversation
ggml/src/ggml-cpu/ggml-cpu-quants.c (outdated)
float32_t _scale[4] = {
    GGML_FP16_TO_FP32(b_x0->d)*GGML_FP16_TO_FP32(b_y0->d),
    GGML_FP16_TO_FP32(b_x0->d)*GGML_FP16_TO_FP32(b_y1->d),
    GGML_FP16_TO_FP32(b_x1->d)*GGML_FP16_TO_FP32(b_y0->d),
    GGML_FP16_TO_FP32(b_x1->d)*GGML_FP16_TO_FP32(b_y1->d)};
This fixes a bug where the `y->d` was not converted to F32, resulting in completely wrong numbers when going through this CPU branch.
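To see why the missing conversion produces garbage rather than a small inaccuracy, here is a minimal standalone sketch (the `fp16_to_fp32` helper below is illustrative, handles normal values only, and is not ggml's `GGML_FP16_TO_FP32`): when a raw `ggml_fp16_t` bit pattern is promoted to `float` arithmetically instead of being converted, a scale of 1.0 turns into 15360.0.

```c
#include <stdint.h>
#include <stdio.h>

typedef uint16_t ggml_fp16_t;  // ggml stores F16 scales as raw 16-bit patterns

// Illustrative F16 -> F32 conversion (normal values only).
static float fp16_to_fp32(ggml_fp16_t h) {
    union { uint32_t u; float f; } out;
    out.u = ((uint32_t)(h >> 15) << 31)           // sign
          | ((((h >> 10) & 0x1fu) + 112) << 23)   // exponent, rebiased 15 -> 127
          | ((uint32_t)(h & 0x3ff) << 13);        // mantissa
    return out.f;
}

int main(void) {
    ggml_fp16_t d = 0x3c00;  // 1.0 in IEEE half precision

    printf("converted: %f\n", fp16_to_fp32(d) * 2.0f);  // 2.000000
    printf("raw bits : %f\n", (float)d * 2.0f);         // 30720.000000
    return 0;
}
```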
ggml/src/ggml-cpu/ggml-cpu-quants.c (outdated)
@@ -1759,66 +1759,76 @@ void ggml_vec_dot_q4_0_q8_0(int n, float * restrict s, size_t bs, const void * r
     const block_q8_0 * restrict vy0 = vy;
     const block_q8_0 * restrict vy1 = (const block_q8_0 *) ((const uint8_t*)vy + by);

     float32x4_t sumv0 = vdupq_n_f32(0.0f);
     if (ggml_cpu_has_matmul_int8()) {
We need to remove the ARM runtime feature detection completely; it doesn't work at all and never will. So I would prefer if, at the very least, we don't make that task worse by adding more checks like this.
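For background on why the runtime check cannot work (my reading, not stated verbatim in the thread): once a translation unit is compiled with I8MM enabled, the compiler may emit those instructions anywhere in it, so a runtime branch cannot keep them off CPUs that lack the feature. On this path the selection effectively has to happen at compile time via the ACLE feature macro, roughly:

```c
// Compile-time dispatch: the I8MM code only exists in binaries built with
// flags that permit it (e.g. -march=armv8.6-a+i8mm).
#if defined(__ARM_FEATURE_MATMUL_INT8)
    // 2x2 int8 matmul path, e.g. built around vmmlaq_s32()
#else
    // plain dot-product fallback
#endif
```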
Ok, I will change the PR to just include the F16 -> F32 fix in the Q4_1 kernel.
force-pushed from 46a4ed0 to 1a6a669
force-pushed from 1a6a669 to 5acff8f
Target: #10561

These changes fix an `illegal instruction` crash on M1 Pro, which does not do a runtime check for the availability of I8MM. We now check `ggml_cpu_has_matmul_int8()` and, if it is false, unpack the 2x2 multiplication into 4 dot products.

This fix aside, I am wondering if we should drop the `int nrc` support in the `ggml_vec_dot` kernels to keep it simple and proceed to implement proper GEMMs similar to the work in `ggml-cpu-aarch64.c`?
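As a rough sketch of what "unpack the 2x2 multiplication into 4 dot products" means (the names, shapes, and helpers below are illustrative, not the actual ggml kernels): the I8MM path produces a 2x2 tile of int32 dot products with a single SMMLA instruction, while the fallback accumulates the same four values one dot product at a time.

```c
#include <stdint.h>

// Fallback sketch: c[i][j] accumulates the dot product of row i of a
// with row j of b (SMMLA treats its second operand as transposed).
static void tile_2x2_dots(const int8_t a[2][8], const int8_t b[2][8],
                          int32_t c[2][2]) {
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            for (int k = 0; k < 8; k++)
                c[i][j] += a[i][k] * b[j][k];
}

#if defined(__ARM_FEATURE_MATMUL_INT8)
#include <arm_neon.h>
// I8MM sketch: one vmmlaq_s32 accumulates the same 2x2 int32 tile, with both
// operands read as 2x8 int8 matrices packed into 128-bit vectors.
static int32x4_t tile_2x2_i8mm(int32x4_t acc, int8x16_t a, int8x16_t b) {
    return vmmlaq_s32(acc, a, b);
}
#endif
```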