### Using [GPT4All](https://github.com/nomic-ai/gpt4all)
- Obtain the `gpt4all-lora-quantized.bin` model
- It is distributed in the old `ggml` format, which is now obsolete, so you have to convert it to the new format using [./convert-gpt4all-to-ggml.py](./convert-gpt4all-to-ggml.py) (see the example after this list)
- You can now use the newly generated `gpt4all-lora-quantized.bin` model in exactly the same way as all other models. The original model is stored in the same folder with a suffix `.orig`
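For reference, a conversion invocation might look like the sketch below. The paths are placeholders and depend on where you stored the downloaded model and the LLaMA tokenizer:

```bash
# Hypothetical paths: adjust to where you keep the GPT4All model and the LLaMA tokenizer.model
python3 convert-gpt4all-to-ggml.py models/gpt4all-7B/gpt4all-lora-quantized.bin ./models/tokenizer.model
```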
### Obtaining and verifying the Facebook LLaMA original model and Stanford Alpaca model data
**Under no circumstances share IPFS, magnet links, or any other links to model downloads anywhere in this repository, including in issues, discussions or pull requests. They will be immediately deleted.**
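If you have obtained the model files through legitimate channels, a common way to verify them is to check the files against a list of known checksums. A minimal sketch, assuming a `SHA256SUMS` file with the expected hashes is available in the repository root:

```bash
# Verify downloaded model files against known SHA-256 checksums
# (assumes a SHA256SUMS file listing the expected hashes; entries for files you don't have are skipped)
sha256sum --ignore-missing -c SHA256SUMS
```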