> Further documentation can be found in the [User Guide](https://github.com/intel/neural-compressor/blob/master/docs/source/user_guide.md).
> **Note**:
> Starting from the 3.0 release, we recommend using the 3.X API. Compression techniques applied during training, such as QAT, pruning, and distillation, are currently only available in the [2.X API](https://github.com/intel/neural-compressor/blob/master/docs/source/2x_user_guide.md).
## Selected Publications/Events
* Blog by Intel: [Neural Compressor: Boosting AI Model Efficiency](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Neural-Compressor-Boosting-AI-Model-Efficiency/post/1604740) (June 2024)
Intel Neural Compressor 3.X extends the PyTorch and TensorFlow APIs to support compression techniques.

The table below provides a quick overview of the APIs available in Intel Neural Compressor 3.X.

Intel Neural Compressor 3.X mainly focuses on quantization-related features, especially algorithms that benefit LLM accuracy and inference.

It also provides common modules shared across frameworks. For example, auto-tune supports accuracy-driven quantization and mixed precision, and benchmark is aimed at measuring the performance of multiple instances of the quantized model.
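To make the weight-only quantization and accuracy-driven auto-tune flow concrete, here is a minimal sketch using the 3.X PyTorch API. It assumes the `prepare`/`convert` and `autotune` entry points and the `RTNConfig`/`TuningConfig` names as documented for the 3.X API, with a toy model and a dummy evaluation function standing in for a real LLM and metric; consult the User Guide linked above for the exact signatures.

```python
# Hedged sketch of the 3.X PyTorch API; the names below (RTNConfig, prepare,
# convert, TuningConfig, autotune) are assumed from the 3.X documentation.
import torch
from neural_compressor.torch.quantization import (
    RTNConfig, TuningConfig, autotune, convert, prepare,
)

def build_model() -> torch.nn.Module:
    # Toy float model standing in for a real LLM.
    return torch.nn.Sequential(
        torch.nn.Linear(64, 64), torch.nn.ReLU(), torch.nn.Linear(64, 8)
    )

# One-shot weight-only RTN quantization: prepare the model, then convert it.
q_model = convert(prepare(build_model(), RTNConfig(bits=4, group_size=32)))

# Accuracy-driven auto-tune: candidate configs are tried until eval_fn meets the criterion.
def eval_fn(candidate: torch.nn.Module) -> float:
    # Placeholder metric; in practice return task accuracy on a validation set.
    with torch.no_grad():
        return float(candidate(torch.randn(8, 64)).mean())

tune_config = TuningConfig(config_set=RTNConfig(bits=[4, 8]))
best_model = autotune(build_model(), tune_config=tune_config, eval_fn=eval_fn)
```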
Changes to `docs/source/2x_user_guide.md` (3 additions, 3 deletions):

@@ -1,10 +1,10 @@
-User Guide
+2.X API User Guide
 ===========================
 
 Intel® Neural Compressor aims to provide popular model compression techniques such as quantization, pruning (sparsity), distillation, and neural architecture search to help the user optimize their model. The below documents could help you to get familiar with concepts and modules in Intel® Neural Compressor. Learn how to utilize the APIs in Intel® Neural Compressor to conduct quantization, pruning (sparsity), distillation, and neural architecture search on mainstream frameworks.
 
 ## Overview
-This part helps user to get a quick understand about design structure and workflow of Intel® Neural Compressor. We provided broad examples to help users get started.
+This part helps user to get a quick understand about design structure and workflow of 2.X Intel® Neural Compressor. We provided broad examples to help users get started.
 <table class="docutils">
 <tbody>
 <tr>
@@ -53,7 +53,7 @@ In 2.X API, it's very important to create the `DataLoader` and `Metrics` for you
 </table>
 
 ## Advanced Topics
-This part provides the advanced topics that help user dive deep into Intel® Neural Compressor.
+This part provides the advanced topics that help user dive deep into Intel® Neural Compressor 2.X API.
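For the 2.X API referenced in the diff above, where the user supplies a `DataLoader` (and optionally metrics) for calibration and evaluation, a post-training quantization sketch might look like the following. The `Datasets`/`DataLoader` helpers, `PostTrainingQuantConfig`, and `fit` names are taken from the 2.X documentation; the toy model, the built-in dummy dataset, and the input shape are illustrative assumptions only.

```python
# Hedged sketch of the 2.X post-training quantization flow; names are assumed
# from the 2.X docs, and the dummy dataset/shape are illustrative only.
import torch
from neural_compressor.config import PostTrainingQuantConfig
from neural_compressor.data import DataLoader, Datasets
from neural_compressor.quantization import fit

# Toy FP32 model to be quantized.
fp32_model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU())

# Calibration DataLoader built from a built-in dummy dataset (assumed shape).
dataset = Datasets("pytorch")["dummy"](shape=(1, 3, 224, 224))
calib_dataloader = DataLoader(framework="pytorch", dataset=dataset)

# Post-training quantization; an eval_func or metric can be supplied the same way.
q_model = fit(
    model=fp32_model,
    conf=PostTrainingQuantConfig(),
    calib_dataloader=calib_dataloader,
)
```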