Conversation

jcaip
Contributor

@jcaip jcaip commented Sep 6, 2024

Adds a deprecation warning for int8_dynamic_activation_int8_semi_sparse_weight(); we should push users to the new LayoutType option instead:

from torchao.dtypes import SemiSparseLayoutType
from torchao.quantization import quantize_, int8_dynamic_activation_int8_weight

model = model.cuda().half()
quantize_(model, int8_dynamic_activation_int8_weight(layout_type=SemiSparseLayoutType()))

Also updates the sparsity README to add Marlin Llama3 benchmarks and outline our supported sparsity APIs.
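The deprecation path described above can be sketched as follows. This is a minimal, hypothetical illustration of the pattern (warn, then delegate to the new layout-based API), not torchao's actual implementation; the stand-in `int8_dynamic_activation_int8_weight` here just returns a config dict for demonstration:

```python
import warnings

def int8_dynamic_activation_int8_weight(layout_type=None):
    # Stand-in for the new API: returns a config describing the chosen layout.
    return {"layout_type": layout_type}

def int8_dynamic_activation_int8_semi_sparse_weight():
    # Deprecated entry point: emit a DeprecationWarning, then delegate to the
    # new API with the semi-sparse layout selected.
    warnings.warn(
        "int8_dynamic_activation_int8_semi_sparse_weight() is deprecated; use "
        "int8_dynamic_activation_int8_weight(layout_type=SemiSparseLayoutType()) instead.",
        DeprecationWarning,
    )
    return int8_dynamic_activation_int8_weight(layout_type="SemiSparseLayoutType")
```

Delegating rather than duplicating the logic keeps the old spelling working for one release cycle while steering users toward the single layout-parameterized entry point.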


pytorch-bot bot commented Sep 6, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/825

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 55ce0a8 with merge base 65d86c6:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Sep 6, 2024
@jcaip jcaip marked this pull request as ready for review September 6, 2024 17:04
@jcaip jcaip merged commit 92dcc62 into main Sep 6, 2024
17 checks passed
jainapurva pushed a commit that referenced this pull request Sep 9, 2024
* update docstrings

* updated README

* update docs

* updated docs

* updated

* fix

* update doc

* update png

* fix affine quantized test