
Commit 0a54239: update documentation for 3x API (#1923)

Signed-off-by: chensuyue <[email protected]>
Signed-off-by: xin3he <[email protected]>
Signed-off-by: yiliu30 <[email protected]>

1 parent: be42d03

23 files changed: +152 / -499 lines

README.md

Lines changed: 28 additions & 24 deletions

```diff
@@ -116,54 +116,58 @@ quantized_model = fit(model=float_model, conf=static_quant_conf, calib_dataloade
 </thead>
 <tbody>
 <tr>
-<td colspan="2" align="center"><a href="./docs/source/design.md#architecture">Architecture</a></td>
-<td colspan="2" align="center"><a href="./docs/source/design.md#workflow">Workflow</a></td>
-<td colspan="1" align="center"><a href="https://intel.github.io/neural-compressor/latest/docs/source/api-doc/apis.html">APIs</a></td>
-<td colspan="1" align="center"><a href="./docs/source/llm_recipes.md">LLMs Recipes</a></td>
-<td colspan="2" align="center"><a href="examples/README.md">Examples</a></td>
+<td colspan="2" align="center"><a href="./docs/3x/design.md#architecture">Architecture</a></td>
+<td colspan="2" align="center"><a href="./docs/3x/design.md#workflow">Workflow</a></td>
+<td colspan="2" align="center"><a href="https://intel.github.io/neural-compressor/latest/docs/source/api-doc/apis.html">APIs</a></td>
+<td colspan="1" align="center"><a href="./docs/3x/llm_recipes.md">LLMs Recipes</a></td>
+<td colspan="1" align="center">Examples</td>
 </tr>
 </tbody>
 <thead>
 <tr>
-<th colspan="8">Python-based APIs</th>
+<th colspan="8">PyTorch Extension APIs</th>
 </tr>
 </thead>
 <tbody>
 <tr>
-<td colspan="2" align="center"><a href="./docs/source/quantization.md">Quantization</a></td>
-<td colspan="2" align="center"><a href="./docs/source/mixed_precision.md">Advanced Mixed Precision</a></td>
-<td colspan="2" align="center"><a href="./docs/source/pruning.md">Pruning (Sparsity)</a></td>
-<td colspan="2" align="center"><a href="./docs/source/distillation.md">Distillation</a></td>
+<td colspan="2" align="center"><a href="./docs/3x/PyTorch.md">Overview</a></td>
+<td colspan="2" align="center"><a href="./docs/3x/PT_StaticQuant.md">Static Quantization</a></td>
+<td colspan="2" align="center"><a href="./docs/3x/PT_DynamicQuant.md">Dynamic Quantization</a></td>
+<td colspan="2" align="center"><a href="./docs/3x/PT_SmoothQuant.md">Smooth Quantization</a></td>
 </tr>
 <tr>
-<td colspan="2" align="center"><a href="./docs/source/orchestration.md">Orchestration</a></td>
-<td colspan="2" align="center"><a href="./docs/source/benchmark.md">Benchmarking</a></td>
-<td colspan="2" align="center"><a href="./docs/source/distributed.md">Distributed Compression</a></td>
-<td colspan="2" align="center"><a href="./docs/source/export.md">Model Export</a></td>
+<td colspan="4" align="center"><a href="./docs/3x/PT_WeightOnlyQuant.md">Weight-Only Quantization</a></td>
+<td colspan="2" align="center"><a href="./docs/3x/PT_MXQuant.md">MX Quantization</a></td>
+<td colspan="2" align="center"><a href="./docs/3x/PT_MixedPrecision.md">Mixed Precision</a></td>
 </tr>
 </tbody>
 <thead>
 <tr>
-<th colspan="8">Advanced Topics</th>
+<th colspan="8">Tensorflow Extension APIs</th>
 </tr>
 </thead>
 <tbody>
 <tr>
-<td colspan="2" align="center"><a href="./docs/source/adaptor.md">Adaptor</a></td>
-<td colspan="2" align="center"><a href="./docs/source/tuning_strategies.md">Strategy</a></td>
-<td colspan="2" align="center"><a href="./docs/source/distillation_quantization.md">Distillation for Quantization</a></td>
-<td colspan="2" align="center"><a href="./docs/source/smooth_quant.md">SmoothQuant</a></td>
+<td colspan="3" align="center"><a href="./docs/3x/TensorFlow.md">Overview</a></td>
+<td colspan="3" align="center"><a href="./docs/3x/TF_Quant.md">Static Quantization</a></td>
+<td colspan="2" align="center"><a href="./docs/3x/TF_SQ.md">Smooth Quantization</a></td>
 </tr>
+</tbody>
+<thead>
 <tr>
-<td colspan="4" align="center"><a href="./docs/source/quantization_weight_only.md">Weight-Only Quantization (INT8/INT4/FP4/NF4)</a></td>
-<td colspan="2" align="center"><a href="https://github.com/intel/neural-compressor/blob/fp8_adaptor/docs/source/fp8.md">FP8 Quantization</a></td>
-<td colspan="2" align="center"><a href="./docs/source/quantization_layer_wise.md">Layer-Wise Quantization</a></td>
+<th colspan="8">Other Modules</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td colspan="4" align="center"><a href="./docs/3x/autotune.md">Auto Tune</a></td>
+<td colspan="4" align="center"><a href="./docs/3x/benchmark.md">Benchmark</a></td>
 </tr>
 </tbody>
 </table>
 
-> **Note**:
-> Further documentation can be found at the [User Guide](https://github.com/intel/neural-compressor/blob/master/docs/source/user_guide.md).
+> **Note**:
+> Starting from the 3.0 release, we recommend using the 3.X API. Training-time compression techniques such as QAT, pruning, and distillation are currently available only in the [2.X API](https://github.com/intel/neural-compressor/blob/master/docs/source/2x_user_guide.md).
 
 ## Selected Publications/Events
 * Blog by Intel: [Neural Compressor: Boosting AI Model Efficiency](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Neural-Compressor-Boosting-AI-Model-Efficiency/post/1604740) (June 2024)
```
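The updated table links Smooth Quantization docs for both frameworks. The core SmoothQuant trick, migrating quantization difficulty from activations to weights with per-channel scales, can be sketched in pure Python (a hedged illustration, not the library's implementation; `alpha` is the usual tunable knob):

```python
# Sketch of the SmoothQuant idea: pick per-input-channel scales
#   s_j = max|X_j|**alpha / max|W_j|**(1 - alpha)
# then X' = X / s and W' = W * s leave the matmul output unchanged,
# while the activation outliers shrink and become easier to quantize.

def smooth_scales(act_absmax, wgt_absmax, alpha=0.5):
    """Per-channel smoothing scales; alpha ~0.5 is a typical starting point."""
    return [(a ** alpha) / (w ** (1 - alpha)) for a, w in zip(act_absmax, wgt_absmax)]

def apply_smoothing(x_row, weight_rows, scales):
    """Scale the activation row down and the weight rows up.
    weight_rows[j] holds the weights for input channel j, so x @ W is preserved."""
    x_s = [x / s for x, s in zip(x_row, scales)]
    w_s = [[w * s for w in row] for row, s in zip(weight_rows, scales)]
    return x_s, w_s

def matvec(x_row, weight_rows):
    """y_k = sum_j x_j * W[j][k]."""
    cols = len(weight_rows[0])
    return [sum(x_row[j] * weight_rows[j][k] for j in range(len(x_row)))
            for k in range(cols)]
```

With an activation outlier of 100 in one channel, the smoothed activations drop to single digits while the matmul result is mathematically identical, which is exactly the property the real algorithm exploits before quantizing.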
File renamed without changes.

docs/3x/PyTorch.md

Lines changed: 15 additions & 0 deletions

```diff
@@ -194,6 +194,21 @@ def load(output_dir="./saved_results", model=None):
 <td class="tg-9wq8">&#10004;</td>
 <td class="tg-9wq8"><a href="PT_DynamicQuant.md">link</a></td>
 </tr>
+<tr>
+<td class="tg-9wq8">MX Quantization</td>
+<td class="tg-9wq8"><a href="https://arxiv.org/pdf/2310.10537">Microscaling Data Formats for Deep Learning</a></td>
+<td class="tg-9wq8">PyTorch eager mode</td>
+<td class="tg-9wq8">&#10004;</td>
+<td class="tg-9wq8"><a href="PT_MXQuant.md">link</a></td>
+</tr>
+<tr>
+<td class="tg-9wq8">Mixed Precision</td>
+<td class="tg-9wq8"><a href="https://arxiv.org/abs/1710.03740">Mixed precision</a></td>
+<td class="tg-9wq8">PyTorch eager mode</td>
+<td class="tg-9wq8">&#10004;</td>
+<td class="tg-9wq8"><a href="PT_MixedPrecision.md">link</a></td>
+</tr>
 <tr>
 <td class="tg-9wq8">Quantization Aware Training</td>
 <td class="tg-9wq8"><a href="https://pytorch.org/docs/master/quantization.html#quantization-aware-training-for-static-quantization">Quantization Aware Training</a></td>
```
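The new table row cites the Microscaling Data Formats paper for MX Quantization. The core storage idea, one shared power-of-two scale per small block of values, can be sketched as follows (a hedged simplification: int8 elements and illustrative constants, whereas real MX formats use FP8/FP6/FP4 elements):

```python
import math

# Sketch of the microscaling (MX) idea: each block of values shares a
# single power-of-two scale (a stored exponent), and the elements are
# kept in a narrow integer format.

def mx_quantize_block(vals):
    """Quantize one block to int8 with a shared power-of-two scale."""
    amax = max(abs(v) for v in vals) or 1.0
    exp = math.floor(math.log2(amax)) - 6      # leave headroom so |q| <= 127
    scale = 2.0 ** exp                          # the shared scale
    q = [max(-127, min(127, round(v / scale))) for v in vals]
    return q, exp

def mx_dequantize_block(q, exp):
    """Reconstruct the block from int8 elements and the shared exponent."""
    scale = 2.0 ** exp
    return [v * scale for v in q]
```

Each block carries only one extra exponent on top of its narrow elements, which is the storage trade-off the MX formats formalize.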

docs/3x/design.md

Lines changed: 16 additions & 0 deletions
```diff
@@ -0,0 +1,16 @@
+Design
+======
+
+## Architecture
+
+<a target="_blank" href="imgs/architecture.png">
+<img src="imgs/architecture.png" alt="Architecture">
+</a>
+
+## Workflow
+
+Intel® Neural Compressor provides two workflows: Quantization and Auto-tune.
+
+<a target="_blank" href="imgs/workflow.png">
+<img src="imgs/workflow.png" alt="Workflow">
+</a>
```
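The design page above names Quantization and Auto-tune as the two workflows. The auto-tune workflow is, in essence, an accuracy-driven search over candidate quantization configurations; a generic sketch (names here are illustrative placeholders, not the 3.X API):

```python
# Sketch of an accuracy-driven auto-tune loop: try candidate configs in
# order, return the first quantized model meeting the accuracy target,
# otherwise fall back to the best one found.

def autotune_search(model, candidate_configs, quantize_fn, eval_fn, acc_target):
    best_model, best_acc = None, float("-inf")
    for cfg in candidate_configs:
        q_model = quantize_fn(model, cfg)   # produce a quantized candidate
        acc = eval_fn(q_model)              # user-supplied accuracy metric
        if acc >= acc_target:
            return q_model, acc             # early exit on first success
        if acc > best_acc:
            best_model, best_acc = q_model, acc
    return best_model, best_acc
```

The real Auto-tune module layers config generation and sampling on top of this loop, but the accuracy-driven early-exit structure is the same.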

docs/3x/get_started.md

Lines changed: 88 additions & 0 deletions
````diff
@@ -0,0 +1,88 @@
+# Getting Started
+
+1. [Quick Samples](#quick-samples)
+
+2. [Feature Matrix](#feature-matrix)
+
+## Quick Samples
+
+```shell
+# Install Intel Neural Compressor
+pip install neural-compressor-pt
+```
+```python
+from transformers import AutoModelForCausalLM
+from neural_compressor.torch.quantization import RTNConfig, prepare, convert
+
+user_model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125m")
+quant_config = RTNConfig()
+prepared_model = prepare(model=user_model, quant_config=quant_config)
+quantized_model = convert(model=prepared_model)
+```
+
+## Feature Matrix
+Intel Neural Compressor 3.X extends PyTorch and TensorFlow APIs to support compression techniques.
+The table below provides a quick overview of the APIs available in Intel Neural Compressor 3.X.
+Intel Neural Compressor 3.X mainly focuses on quantization-related features, especially on algorithms that benefit LLM accuracy and inference.
+It also provides common modules shared across frameworks: for example, auto-tune supports accuracy-driven quantization and mixed precision, and benchmark measures the multi-instance performance of the quantized model.
+
+<table class="docutils">
+<thead>
+<tr>
+<th colspan="8">Overview</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td colspan="2" align="center"><a href="design.md#architecture">Architecture</a></td>
+<td colspan="2" align="center"><a href="design.md#workflow">Workflow</a></td>
+<td colspan="2" align="center"><a href="https://intel.github.io/neural-compressor/latest/docs/source/api-doc/apis.html">APIs</a></td>
+<td colspan="1" align="center"><a href="llm_recipes.md">LLMs Recipes</a></td>
+<td colspan="1" align="center">Examples</td>
+</tr>
+</tbody>
+<thead>
+<tr>
+<th colspan="8">PyTorch Extension APIs</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td colspan="2" align="center"><a href="PyTorch.md">Overview</a></td>
+<td colspan="2" align="center"><a href="PT_StaticQuant.md">Static Quantization</a></td>
+<td colspan="2" align="center"><a href="PT_DynamicQuant.md">Dynamic Quantization</a></td>
+<td colspan="2" align="center"><a href="PT_SmoothQuant.md">Smooth Quantization</a></td>
+</tr>
+<tr>
+<td colspan="3" align="center"><a href="PT_WeightOnlyQuant.md">Weight-Only Quantization</a></td>
+<td colspan="3" align="center"><a href="PT_MXQuant.md">MX Quantization</a></td>
+<td colspan="2" align="center"><a href="PT_MixedPrecision.md">Mixed Precision</a></td>
+</tr>
+</tbody>
+<thead>
+<tr>
+<th colspan="8">Tensorflow Extension APIs</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td colspan="3" align="center"><a href="TensorFlow.md">Overview</a></td>
+<td colspan="3" align="center"><a href="TF_Quant.md">Static Quantization</a></td>
+<td colspan="2" align="center"><a href="TF_SQ.md">Smooth Quantization</a></td>
+</tr>
+</tbody>
+<thead>
+<tr>
+<th colspan="8">Other Modules</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td colspan="4" align="center"><a href="autotune.md">Auto Tune</a></td>
+<td colspan="4" align="center"><a href="benchmark.md">Benchmark</a></td>
+</tr>
+</tbody>
+</table>
+
+> **Note**:
+> Starting from the 3.0 release, we recommend using the 3.X API. Training-time compression techniques such as QAT, pruning, and distillation are currently available only in the [2.X API](https://github.com/intel/neural-compressor/blob/master/docs/source/2x_user_guide.md).
````
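The quick sample in the new get_started.md uses `RTNConfig`, i.e. round-to-nearest weight-only quantization. For intuition, a self-contained sketch of the underlying idea, symmetric int4 with a per-group scale (a simplification for illustration, not the package's implementation):

```python
# Sketch of round-to-nearest (RTN) weight-only quantization: each group
# of weights shares one float scale, and weights are rounded to a
# symmetric signed integer grid (int4 here: values in [-7, 7]).

def rtn_quantize(weights, group_size=4, n_bits=4):
    qmax = 2 ** (n_bits - 1) - 1                  # 7 for symmetric int4
    q, scales = [], []
    for i in range(0, len(weights), group_size):
        group = weights[i:i + group_size]
        scale = (max(abs(w) for w in group) or 1.0) / qmax
        scales.append(scale)
        q.extend(max(-qmax, min(qmax, round(w / scale))) for w in group)
    return q, scales

def rtn_dequantize(q, scales, group_size=4):
    """Map integer codes back to floats with each group's scale."""
    return [v * scales[i // group_size] for i, v in enumerate(q)]
```

The reconstruction error per weight is bounded by half the group's scale, which is why smaller groups (finer scales) trade a little extra storage for better accuracy.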
File renamed without changes.

docs/3x/llm_recipes.md

Whitespace-only changes.

docs/source/user_guide.md renamed to docs/source/2x_user_guide.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -1,10 +1,10 @@
-User Guide
+2.X API User Guide
 ===========================
 
 Intel® Neural Compressor aims to provide popular model compression techniques such as quantization, pruning (sparsity), distillation, and neural architecture search to help the user optimize their model. The below documents could help you to get familiar with concepts and modules in Intel® Neural Compressor. Learn how to utilize the APIs in Intel® Neural Compressor to conduct quantization, pruning (sparsity), distillation, and neural architecture search on mainstream frameworks.
 
 ## Overview
-This part helps user to get a quick understand about design structure and workflow of Intel® Neural Compressor. We provided broad examples to help users get started.
+This part helps users get a quick understanding of the design structure and workflow of Intel® Neural Compressor 2.X. We provide broad examples to help users get started.
 <table class="docutils">
 <tbody>
 <tr>
@@ -53,7 +53,7 @@ In 2.X API, it's very important to create the `DataLoader` and `Metrics` for you
 </table>
 
 ## Advanced Topics
-This part provides the advanced topics that help user dive deep into Intel® Neural Compressor.
+This part provides advanced topics that help users dive deep into the Intel® Neural Compressor 2.X API.
 <table class="docutils">
 <tbody>
 <tr>
```

docs/source/NAS.md

Lines changed: 0 additions & 86 deletions
This file was deleted.

docs/source/imgs/dynas.png

-63.4 KB
Binary file not shown.
