
metal lowbit kernels: qmv_fast optimization #2167


Merged: 1 commit merged into main on May 9, 2025

Conversation

manuelcandales (Contributor) commented on May 3, 2025

Summary

This PR makes the following modifications:

  • Previously, the Metal lowbit kernels consumed the scales and zeros transposed (i.e. num_groups x N). This PR changes that so the scales and zeros are now consumed in the same shape as produced by the _quantize function (i.e. N x num_groups).
  • Packing for 3, 5, 6 & 7 bits is changed. The bits are now stored contiguously, spilling into the next byte when the end of the current byte is reached (see the sketch after this list).
  • The qmv_fast optimization from MLX is adapted and extended to support 1, 5 & 7 bits (bit widths not currently supported in MLX).
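
To make the new contiguous layout concrete, here is a minimal C++ sketch for the 3-bit case (illustrative only; this is not the actual torchao packing code and the helper names are made up). Values are written LSB-first into a byte stream and spill into the next byte at byte boundaries; the unpacking mirrors the Metal snippet quoted in the review comment below.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical packer: store 3-bit values contiguously, LSB-first,
// spilling into the next byte when the current byte runs out of room.
std::vector<uint8_t> pack3(const std::vector<uint8_t>& vals) {
  std::vector<uint8_t> out((vals.size() * 3 + 7) / 8, 0);
  for (size_t i = 0; i < vals.size(); ++i) {
    size_t bit = i * 3;  // absolute bit offset of value i in the stream
    out[bit / 8] |= uint8_t((vals[i] & 0x07) << (bit % 8));
    if (bit % 8 > 5) {   // value straddles a byte boundary
      out[bit / 8 + 1] |= uint8_t((vals[i] & 0x07) >> (8 - bit % 8));
    }
  }
  return out;
}

// Mirror of the kernel-side unpacking of 8 values from 3 bytes
// (same masks and shifts as the Metal snippet quoted in the review).
void unpack8(const uint8_t* b, uint8_t* w) {
  w[0] = b[0] & 0x07;
  w[1] = (b[0] & 0x38) >> 3;
  w[2] = ((b[0] & 0xc0) >> 6) | ((b[1] & 0x01) << 2);
  w[3] = (b[1] & 0x0e) >> 1;
  w[4] = (b[1] & 0x70) >> 4;
  w[5] = ((b[1] & 0x80) >> 7) | ((b[2] & 0x03) << 1);
  w[6] = (b[2] & 0x1c) >> 2;
  w[7] = (b[2] & 0xe0) >> 5;
}

int main() {
  std::vector<uint8_t> vals = {1, 5, 7, 0, 3, 6, 2, 4};
  std::vector<uint8_t> packed = pack3(vals);  // 8 x 3 bits = 3 bytes
  uint8_t w[8];
  unpack8(packed.data(), w);
  for (int i = 0; i < 8; ++i) {
    std::printf("%d ", w[i]);  // round-trips: 1 5 7 0 3 6 2 4
  }
  std::printf("\n");
  return 0;
}
```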

Performance Improvement

The following tables show the impact of this optimization on Llama 3.1/3.2 decode throughput (tokens/second), using torchchat + mps compile for text generation on an M1 Max 64GB (24 GPU cores, 10 CPU cores).

Llama 3.1-8B

python3 torchchat.py generate llama3.1-base --device mps --dtype float16 --quantize '{"linear:afpwx": {"bitwidth": #BITS, "groupsize": 64}}' --prompt "Once upon a time," --num-samples 5 --compile

| # bits | Before | After | Improvement |
|--------|--------|-------|-------------|
| 1 | 12.48 | 51.10 | 309% (4.09x) |
| 2 | 42.23 | 54.37 | 29% |
| 3 | 32.82 | 48.05 | 46% |
| 4 | 36.87 | 50.51 | 37% |
| 5 | 11.46 | 38.58 | 237% (3.37x) |
| 6 | 11.30 | 35.48 | 214% (3.14x) |
| 7 | 11.11 | 32.69 | 194% (2.94x) |

Llama 3.2-3B

python3 torchchat.py generate llama3.2-3b-base --device mps --dtype float16 --quantize '{"linear:afpwx": {"bitwidth": #BITS, "groupsize": 64}}' --prompt "Once upon a time," --num-samples 5 --compile

| # bits | Before | After | Improvement |
|--------|--------|-------|-------------|
| 1 | 24.38 | 87.01 | 257% (3.57x) |
| 2 | 77.70 | 98.13 | 26% |
| 3 | 62.46 | 85.69 | 37% |
| 4 | 64.74 | 89.54 | 38% |
| 5 | 22.26 | 70.19 | 215% (3.15x) |
| 6 | 21.83 | 63.62 | 191% (2.91x) |
| 7 | 21.89 | 64.19 | 193% (2.93x) |

Llama 3.2-1B

python3 torchchat.py generate llama3.2-1b-base --device mps --dtype float16 --quantize '{"linear:afpwx": {"bitwidth": #BITS, "groupsize": 64}}' --prompt "Once upon a time," --num-samples 5 --compile

| # bits | Before | After | Improvement |
|--------|--------|-------|-------------|
| 1 | 55.60 | 179.96 | 224% (3.34x) |
| 2 | 159.77 | 186.91 | 17% |
| 3 | 137.31 | 170.62 | 24% |
| 4 | 145.55 | 175.15 | 20% |
| 5 | 48.22 | 147.10 | 205% (3.05x) |
| 6 | 47.98 | 140.51 | 193% (2.93x) |
| 7 | 47.95 | 131.27 | 174% (2.74x) |

Performance Summary

The table below summarizes torchchat's speed (tokens/second) on the Metal backend on an M1 Max after this change.

| # bits | Llama 3.2-1B | Llama 3.2-3B | Llama 3.1-8B |
|--------|--------------|--------------|--------------|
| 1 | 179.96 | 87.01 | 51.10 |
| 2 | 186.91 | 98.13 | 54.37 |
| 3 | 170.62 | 85.69 | 48.05 |
| 4 | 175.15 | 89.54 | 50.51 |
| 5 | 147.10 | 70.19 | 38.58 |
| 6 | 140.51 | 63.62 | 35.48 |
| 7 | 131.27 | 64.19 | 32.69 |

pytorch-bot (bot) commented on May 3, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2167

✅ No Failures

As of commit c200b29 with merge base 4850998:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

facebook-github-bot added the CLA Signed label on May 3, 2025
manuelcandales added the topic: not user facing label on May 3, 2025
manuelcandales requested review from kimishpatel and then removed the request on May 3, 2025
manuelcandales merged commit b95cf18 into main on May 9, 2025
18 checks passed
kimishpatel (Contributor) commented:
oh this is the one you were talking about

@@ -64,12 +64,11 @@ using namespace metal;
@param [in] B is weight matrix of size M x K. Each byte contains 2 4-bit
values, along K dim, packed together.
@param [in] scales_ptr is scales ptr corresponding each
- output channel x groups. These are packed as [num_groups = ceil(K / group_size), N]. N = output
+ output channel x groups. These are packed as [N, num_groups = ceil(K / group_size)]. N = output
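
For readers following this layout change, a minimal indexing sketch under each layout (the helper names are illustrative and not taken from the kernel source):

```cpp
#include <cstddef>

// New layout: scales stored row-major as [N, num_groups].
inline size_t scale_index_new(size_t n, size_t k,
                              size_t group_size, size_t num_groups) {
  return n * num_groups + k / group_size;
}

// Old layout: scales stored row-major as [num_groups, N].
inline size_t scale_index_old(size_t n, size_t k,
                              size_t group_size, size_t N) {
  return (k / group_size) * N + n;
}
```
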
Review comment (Contributor):

would this work for gemm as well?

Comment on lines +11 to +18
w[0] = float(b[0] & 0x07);
w[1] = float((b[0] & 0x38) >> 3);
w[2] = float(((b[0] & 0xc0) >> 6) | ((b[1] & 0x01) << 2));
w[3] = float((b[1] & 0x0e) >> 1);
w[4] = float((b[1] & 0x70) >> 4);
w[5] = float(((b[1] & 0x80) >> 7) | ((b[2] & 0x03) << 1));
w[6] = float((b[2] & 0x1c) >> 2);
w[7] = float((b[2] & 0xe0) >> 5);
Review comment (Contributor):

I am definitely surprised that this is better
