move float8 inference README contents to prototype section #901


Merged (2 commits) on Sep 17, 2024

36 changes: 20 additions & 16 deletions torchao/quantization/README.md
@@ -121,22 +121,6 @@ from torchao.quantization.quant_api import change_linear_weights_to_int8_dqtenso

```python
change_linear_weights_to_int8_dqtensors(model)
```

#### A16W8 Float8 WeightOnly Quantization

```python
# for torch 2.5+
from torchao.quantization import quantize_, float8_weight_only
quantize_(model, float8_weight_only())
```

#### A16W8 Float8 Dynamic Quantization with Rowwise Scaling

```python
# for torch 2.5+
from torchao.quantization.quant_api import quantize_, PerRow, float8_dynamic_activation_float8_weight
quantize_(model, float8_dynamic_activation_float8_weight(granularity=PerRow()))
```

#### A16W6 Floating Point WeightOnly Quantization
@@ -303,6 +287,26 @@ You can try out these apis with the `quantize_` api as above alongside the const
### Automatic Inductor Configuration
The `quantize_` and `autoquant` apis now automatically use our recommended inductor configuration settings. You can replicate the same settings for your own experiments with `torchao.quantization.utils.recommended_inductor_config_setter`. To disable them, pass the keyword argument `set_inductor_config=False` to `quantize_` or `autoquant`. You can also overwrite individual settings after they are assigned, as long as you do so before passing any inputs to the torch.compiled model. This means that older flows which manually set a variety of inductor configurations are now outdated, though continuing to set those same configurations manually is unlikely to cause any issues.
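
For example, a minimal sketch of both paths, using `int8_weight_only` as a stand-in config; the toy model and shapes are illustrative:

```python
import torch
import torch.nn as nn
from torchao.quantization import quantize_, int8_weight_only
from torchao.quantization.utils import recommended_inductor_config_setter

model = nn.Sequential(nn.Linear(1024, 1024)).cuda().to(torch.bfloat16)

# Default: quantize_ applies the recommended inductor settings itself.
quantize_(model, int8_weight_only())

# Alternative: opt out and manage the settings yourself. Any overrides
# must happen before the first inputs reach the compiled model.
# quantize_(model, int8_weight_only(), set_inductor_config=False)
# recommended_inductor_config_setter()

model = torch.compile(model)
out = model(torch.randn(16, 1024, device="cuda", dtype=torch.bfloat16))
```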

### (prototype) A16W8 Float8 WeightOnly Quantization

```python
# for torch 2.5+
from torchao.quantization import quantize_, float8_weight_only
quantize_(model, float8_weight_only())
```

This API works today but has not been extensively tested and benchmarked yet. Hardware with CUDA compute capability 8.9 or greater is required.
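
A rough end-to-end sketch of the flow above; the toy model, shapes, and explicit capability check are illustrative additions, not part of the api:

```python
import torch
import torch.nn as nn
from torchao.quantization import quantize_, float8_weight_only

# float8 kernels need CUDA compute capability 8.9+ (e.g. L4, RTX 4090, H100).
assert torch.cuda.get_device_capability() >= (8, 9)

model = nn.Sequential(nn.Linear(1024, 1024)).cuda().to(torch.bfloat16)
quantize_(model, float8_weight_only())  # weights stored as float8; activations stay bf16

model = torch.compile(model)
out = model(torch.randn(16, 1024, device="cuda", dtype=torch.bfloat16))
```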

### (prototype) A16W8 Float8 Dynamic Quantization with Rowwise Scaling

```python
# for torch 2.5+
from torchao.quantization.quant_api import quantize_, PerRow, float8_dynamic_activation_float8_weight
quantize_(model, float8_dynamic_activation_float8_weight(granularity=PerRow()))
```

This API works today but has not been extensively tested and benchmarked yet. Hardware with CUDA compute capability 8.9 or greater is required.
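
Rowwise scaling keeps one scale per row of the weight (and, for the dynamic activation quantization, per activation row) instead of one scale per tensor. A sketch of choosing between the two granularities, assuming `PerTensor` is exported alongside `PerRow`:

```python
import torch
import torch.nn as nn
from torchao.quantization.quant_api import (
    quantize_,
    PerRow,
    PerTensor,  # assumed to live next to PerRow
    float8_dynamic_activation_float8_weight,
)

model = nn.Sequential(nn.Linear(1024, 1024)).cuda().to(torch.bfloat16)

# Rowwise: finer-grained scales, usually better accuracy for a small
# amount of extra bookkeeping.
quantize_(model, float8_dynamic_activation_float8_weight(granularity=PerRow()))

# Coarser alternative: a single scale for the whole tensor.
# quantize_(model, float8_dynamic_activation_float8_weight(granularity=PerTensor()))
```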

## (To be moved to prototype) A16W4 WeightOnly Quantization with GPTQ
