
Commit f4f5362

Update README.md (#444)
Added explicit **bolded** instructions clarifying that people need to request access to models from Facebook and never through this repo.
1 parent 863f65e commit f4f5362

File tree

1 file changed (+14, -12 lines)


README.md

Lines changed: 14 additions & 12 deletions
@@ -219,25 +219,27 @@ cadaver, cauliflower, cabbage (vegetable), catalpa (tree) and Cailleach.

### Obtaining and verifying the Facebook LLaMA original model and Stanford Alpaca model data

-* The LLaMA models are officially distributed by Facebook and will never be provided through this repository. See this [pull request in Facebook's LLaMA repository](https://github.com/facebookresearch/llama/pull/73/files) if you need to obtain access to the model data.
-* Please verify the sha256 checksums of all downloaded model files to confirm that you have the correct model data files before creating an issue relating to your model files.
-* The following command will verify if you have all possible latest files in your self-installed `./models` subdirectory:
+- **Under no circumstances share IPFS, magnet links, or any other links to model downloads anywhere in this repository, including in issues, discussions, or pull requests. They will be immediately deleted.**
+- The LLaMA models are officially distributed by Facebook and will **never** be provided through this repository.
+- Refer to [Facebook's LLaMA repository](https://github.com/facebookresearch/llama/pull/73/files) if you need to request access to the model data.
+- Please verify the sha256 checksums of all downloaded model files to confirm that you have the correct model data files before creating an issue relating to your model files.
+- The following command will verify if you have all possible latest files in your self-installed `./models` subdirectory:

`sha256sum --ignore-missing -c SHA256SUMS` on Linux

or

`shasum -a 256 --ignore-missing -c SHA256SUMS` on macOS

-* If your issue is with model generation quality then please at least scan the following links and papers to understand the limitations of LLaMA models. This is especially important when choosing an appropriate model size and appreciating both the significant and subtle differences between LLaMA models and ChatGPT:
-  * LLaMA:
-    * [Introducing LLaMA: A foundational, 65-billion-parameter large language model](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/)
-    * [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971)
-  * GPT-3
-    * [Language Models are Few-Shot Learners](https://arxiv.org/abs/2005.14165)
-  * GPT-3.5 / InstructGPT / ChatGPT:
-    * [Aligning language models to follow instructions](https://openai.com/research/instruction-following)
-    * [Training language models to follow instructions with human feedback](https://arxiv.org/abs/2203.02155)
+- If your issue is with model generation quality then please at least scan the following links and papers to understand the limitations of LLaMA models. This is especially important when choosing an appropriate model size and appreciating both the significant and subtle differences between LLaMA models and ChatGPT:
+  - LLaMA:
+    - [Introducing LLaMA: A foundational, 65-billion-parameter large language model](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/)
+    - [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971)
+  - GPT-3
+    - [Language Models are Few-Shot Learners](https://arxiv.org/abs/2005.14165)
+  - GPT-3.5 / InstructGPT / ChatGPT:
+    - [Aligning language models to follow instructions](https://openai.com/research/instruction-following)
+    - [Training language models to follow instructions with human feedback](https://arxiv.org/abs/2203.02155)

### Perplexity (Measuring model quality)
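For context, the `sha256sum --ignore-missing -c SHA256SUMS` step in the diff above can be approximated cross-platform in Python. This is a minimal sketch, not part of the repository: the `verify_checksums` helper and the file names in the demo are hypothetical, and only standard-library `hashlib` is used.

```python
import hashlib
import os
import tempfile

def verify_checksums(sums_path, base_dir, ignore_missing=True):
    """Check files listed in a SHA256SUMS-style file ("<hex>  <path>" per line).

    Returns {path: bool}. Files absent on disk are skipped when
    ignore_missing is True, mirroring `sha256sum --ignore-missing -c`.
    """
    results = {}
    with open(sums_path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            expected, _, rel_path = line.partition("  ")
            full_path = os.path.join(base_dir, rel_path)
            if not os.path.exists(full_path):
                if not ignore_missing:
                    results[rel_path] = False
                continue
            digest = hashlib.sha256()
            with open(full_path, "rb") as mf:
                # Hash in 1 MiB chunks so multi-GB model files fit in memory.
                for chunk in iter(lambda: mf.read(1 << 20), b""):
                    digest.update(chunk)
            results[rel_path] = digest.hexdigest() == expected.lower()
    return results

# Demo with a throwaway file (hypothetical data, not real model files):
with tempfile.TemporaryDirectory() as d:
    payload = b"not a real model"
    with open(os.path.join(d, "model.bin"), "wb") as f:
        f.write(payload)
    with open(os.path.join(d, "SHA256SUMS"), "w") as f:
        f.write(hashlib.sha256(payload).hexdigest() + "  model.bin\n")
        f.write("0" * 64 + "  missing.bin\n")
    print(verify_checksums(os.path.join(d, "SHA256SUMS"), d))
    # → {'model.bin': True}  (missing.bin is skipped)
```

Matching the native tools, a file that is listed but absent is silently skipped unless `ignore_missing=False` is passed, in which case it is reported as a failure.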
