Commit a5cacb2

imatrix : add README.md
1 parent 9b75cb2 commit a5cacb2

File tree

1 file changed: 32 additions, 0 deletions

examples/imatrix/README.md

# llama.cpp/examples/imatrix

Compute an importance matrix for a model and a given text dataset. Can be used during quantization to enhance the quality of the quantized models.
More information is available here: https://github.com/ggerganov/llama.cpp/pull/4861

## Usage

```
./imatrix -m <some_fp_model> -f <some_training_data> [-o <output_file>] [--verbosity <verbosity_level>]
    [-ofreq num_chunks] [-ow <0 or 1>] [other common params]
```

Here `-m` with a model name and `-f` with a file containing training data (such as `wiki.train.raw`) are mandatory.
The parameters in square brackets are optional and have the following meaning:
* `-o` (or `--output-file`) specifies the name of the file where the computed data will be stored. If missing, `imatrix.dat` is used.
* `--verbosity` specifies the verbosity level. If set to `0`, no output other than the perplexity of the processed chunks will be generated. If set to `1`, a message is written to `stderr` each time the results are saved. If `>=2`, a message is output each time data is collected for any tensor. The default verbosity level is `1`.
* `-ofreq` (or `--output-frequency`) specifies how often the results computed so far are saved to disk. The default is 10 (i.e., every 10 chunks).
* `-ow` (or `--output-weight`) specifies whether data will be collected for the `output.weight` tensor. My experience is that it is better not to use the importance matrix when quantizing `output.weight`, so this is set to `false` by default.

For faster computation, make sure to use GPU offloading via the `-ngl` argument.
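For instance, a run combining these options might look like the sketch below. The model and dataset file names are placeholders, not files shipped with the repository.

```bash
# Hypothetical invocation; ggml-model-f16.gguf and wiki.train.raw are placeholder paths.
# Saves the importance matrix to wiki.imatrix.dat every 20 chunks and offloads all layers to the GPU.
./imatrix -m ggml-model-f16.gguf -f wiki.train.raw \
    -o wiki.imatrix.dat -ofreq 20 -ngl 99
```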

## Example

```bash
LLAMA_CUBLAS=1 make -j

# generate importance matrix (imatrix.dat)
./imatrix -m ggml-model-f16.gguf -f train-data.txt -ngl 99

# use the imatrix to perform a Q4_K_M quantization
./quantize --imatrix imatrix.dat ggml-model-f16.gguf ./ggml-model-q4_k_m.gguf q4_k_m
```
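In this example, `LLAMA_CUBLAS=1 make -j` builds the project with CUDA (cuBLAS) support so that `-ngl 99` can offload the model to the GPU. The resulting `ggml-model-q4_k_m.gguf` can then be used like any other quantized model, for example with the `main` example; the prompt and token count below are just an illustration.

```bash
# Hypothetical usage of the quantized model; prompt and token count are placeholders.
./main -m ./ggml-model-q4_k_m.gguf -p "Building a website can be done in 10 simple steps:" -n 128
```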
