Closed
Labels: feature request, good first issue, torch.compile
Description
🚀 The feature, motivation and pitch
Currently, group quantization is handled by the per_token_group_quant_fp8 custom CUDA kernel (with a Triton kernel fallback). We should fold this functionality into QuantFP8 to enable easier dispatching between the CUDA, Triton, and native torch implementations, automatic fusion by Inductor, and simpler custom-op fusion.
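As a rough illustration, below is a minimal sketch of what a torch-native path inside QuantFP8 could look like for per-token group quantization. The function name, group size of 128, and float8_e4m3fn dtype are assumptions for the example, not the actual vLLM API; the point is that a pure-torch formulation like this is something Inductor can trace and fuse:

```python
import torch

FP8_DTYPE = torch.float8_e4m3fn
FP8_MAX = torch.finfo(FP8_DTYPE).max


def per_token_group_quant_fp8_torch(
    x: torch.Tensor,        # [num_tokens, hidden_size], fp16/bf16/fp32
    group_size: int = 128,  # elements per quantization group (assumed)
    eps: float = 1e-10,     # guards against all-zero groups
) -> tuple[torch.Tensor, torch.Tensor]:
    """Hypothetical pure-torch per-token group FP8 quantization."""
    num_tokens, hidden_size = x.shape
    assert hidden_size % group_size == 0
    num_groups = hidden_size // group_size

    # One scale per (token, group): view each row as groups and take
    # the absolute max within each group.
    x_grouped = x.view(num_tokens, num_groups, group_size).float()
    amax = x_grouped.abs().amax(dim=-1, keepdim=True).clamp_min(eps)
    scales = amax / FP8_MAX

    # Quantize: divide by the scale, clamp to the representable FP8
    # range, then cast down to the FP8 dtype.
    x_q = (x_grouped / scales).clamp(-FP8_MAX, FP8_MAX).to(FP8_DTYPE)
    return x_q.view(num_tokens, hidden_size), scales.squeeze(-1)
```

Because this version is expressed in plain torch ops, compiling it with torch.compile would let Inductor fuse the reshape, reduction, division, and cast into a single kernel, which is the fusion benefit described above.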
Alternatives
No response
Additional context
This is related and complementary to #20711.
Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.