
Commit a352941

Vaibhavs10 and julien-c authored
docs: improve llama.cpp install instructions. (#720)
Does two things:

1. Makes the Homebrew install the default and points users to build instructions for other platforms (note: our previous instructions were also Mac-only; this PR makes the wording more generalised).
2. Moves from `-m` to `--hf-file`, which makes sure that the models are cached.

Co-authored-by: Julien Chaumond <[email protected]>
1 parent f94376b commit a352941
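
The second change leans on llama.cpp's Hugging Face integration: per the commit message, passing `--hf-repo` together with `--hf-file` makes the binary fetch the GGUF from the Hub and cache it, whereas the old `-m file.gguf` expected a path to an already-downloaded file. A minimal sketch, not part of the PR, of what the updated run snippet renders to; the model id is a hypothetical placeholder:

```ts
// Sketch only: renders the "load and run" snippet added in the diff below
// for a hypothetical model id.
const modelId = "username/some-model-GGUF"; // hypothetical placeholder

const runSnippet = `## Load and run the model
llama \\
--hf-repo "${modelId}" \\
--hf-file file.gguf \\
-p "I believe the meaning of life is" \\
-n 128`;

// --hf-repo/--hf-file let llama.cpp download the GGUF from the Hub (and, per the
// commit message, cache it), instead of pointing -m at a local file path.
console.log(runSnippet);
```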

File tree

1 file changed: +8 -5 lines changed


packages/tasks/src/local-apps.ts

Lines changed: 8 additions & 5 deletions
@@ -48,16 +48,19 @@ function isGgufModel(model: ModelData) {
 
 const snippetLlamacpp = (model: ModelData): string[] => {
 	return [
-		`
-## Install and build llama.cpp with curl support
-git clone https://github.com/ggerganov/llama.cpp.git
+		`## Install llama.cpp via brew
+brew install llama.cpp
+
+## or from source with curl support
+## see llama.cpp README for compilation flags to optimize for your hardware
+git clone https://github.com/ggerganov/llama.cpp
 cd llama.cpp
 LLAMA_CURL=1 make
 `,
 		`## Load and run the model
-./main \\
+llama \\
 --hf-repo "${model.id}" \\
--m file.gguf \\
+--hf-file file.gguf \\
 -p "I believe the meaning of life is" \\
 -n 128`,
 	];
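
For reference, a sketch of how `snippetLlamacpp` reads after this patch, reconstructed from the hunk above; the import path and the closing of the arrow function are assumed from the surrounding file and are not part of the diff:

```ts
import type { ModelData } from "./model-data"; // assumed import path, not shown in the diff

const snippetLlamacpp = (model: ModelData): string[] => {
	return [
		`## Install llama.cpp via brew
brew install llama.cpp

## or from source with curl support
## see llama.cpp README for compilation flags to optimize for your hardware
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
LLAMA_CURL=1 make
`,
		`## Load and run the model
llama \\
--hf-repo "${model.id}" \\
--hf-file file.gguf \\
-p "I believe the meaning of life is" \\
-n 128`,
	];
};
```

The function returns two copyable blocks: the first covers installation (Homebrew by default, source build with curl support as the fallback), the second the actual invocation with `--hf-repo`/`--hf-file`.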
