examples/models/llama2/README.md (12 additions & 2 deletions)
@@ -17,9 +17,9 @@ Please note that the models are subject to the [acceptable use policy](https://g

# Results

Since the 7B Llama2 model needs at least 4-bit quantization to fit even within some of the high-end phones, the results presented here correspond to a 4-bit groupwise post-training quantized model.

For Llama3, we can use the same process. Note that it's only supported in the ExecuTorch main branch.

## Quantization:
We employed 4-bit groupwise per-token dynamic quantization of all the linear layers of the model. Dynamic quantization refers to quantizing activations dynamically, such that quantization parameters for activations are calculated, from the min/max range, at runtime. Here we quantized activations with 8 bits (signed integer). Furthermore, weights are statically quantized. In our case weights were per-channel groupwise quantized with 4-bit signed integers. For more information refer to this [page](https://github.com/pytorch-labs/ao/).
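As a rough illustration of how this scheme is applied at export time, the invocation might look like the sketch below. The flag names (`-qmode 8da4w`, `--group_size`) and placeholder paths are assumptions, not the authoritative command; check the export script's `--help` output in your checkout.

```bash
# Hypothetical sketch: 8-bit dynamic activation + 4-bit groupwise weight quantization at export time.
# Flag names and group size are assumptions; verify against the export script's --help output.
python -m examples.models.llama2.export_llama \
  --checkpoint <path-to-llama2-checkpoint.pth> \
  --params <path-to-params.json> \
  -kv -X -d fp32 \
  -qmode 8da4w --group_size 128
```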
@@ -243,6 +243,16 @@ Please refer to [this tutorial](https://pytorch.org/executorch/main/llm/llama-de

### Android
Please refer to [this tutorial](https://pytorch.org/executorch/main/llm/llama-demo-android.html) for full instructions on building the Android LLAMA Demo App.

## Optional: Smaller models delegated to other backends
Currently we support lowering the stories model to other backends, including CoreML, MPS, and QNN. Please refer to the instructions for each backend ([CoreML](https://pytorch.org/executorch/main/build-run-coreml.html), [MPS](https://pytorch.org/executorch/main/build-run-mps.html), [QNN](https://pytorch.org/executorch/main/build-run-qualcomm.html)) before trying to lower them. After the backend library is installed, the lowered model can be exported with the export script, as sketched below.
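The exact commands are not shown in this excerpt; the following is only a sketch under the assumption that the export script exposes per-backend flags (`--coreml`, `--mps`, `--qnn`) and that the stories checkpoint is named `stories110M.pt`. Follow the linked backend tutorials for the authoritative invocations.

```bash
# Hypothetical sketch: exporting the stories model lowered to each backend.
# Backend flags and file names are assumptions; see the backend tutorials for exact commands.

# CoreML
python -m examples.models.llama2.export_llama -kv --disable_dynamic_shape --coreml -c stories110M.pt -p params.json

# MPS
python -m examples.models.llama2.export_llama -kv --disable_dynamic_shape --mps -c stories110M.pt -p params.json

# QNN
python -m examples.models.llama2.export_llama -kv --disable_dynamic_shape --qnn -c stories110M.pt -p params.json
```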

The iOS LLAMA app supports the CoreML and MPS models, and the Android LLAMA app supports the QNN model. On Android, you can also cross-compile the llama runner binary, push it to the device, and run it there.
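For the Android cross-compile path, the on-device run step might look roughly like the following; the binary name (`llama_main`), build output directory, and model/tokenizer file names are hypothetical placeholders, not taken from this diff.

```bash
# Hypothetical sketch: push a cross-compiled runner plus model artifacts to the device and run.
adb push cmake-android-out/examples/models/llama2/llama_main /data/local/tmp/
adb push stories110M_qnn.pte tokenizer.bin /data/local/tmp/
adb shell "cd /data/local/tmp && ./llama_main --model_path stories110M_qnn.pte --tokenizer_path tokenizer.bin --prompt 'Once upon a time'"
```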

# What is coming next?

## Quantization
- Enabling FP16 model to leverage smaller groupsize for 4-bit quantization.