Commit 2a4bcba

llama : remove n_threads from llama_decode_internal (#3614)

This commit removes `n_threads` from the `llama_decode_internal` function's doc comment, as the parameter no longer exists. It appears to have been removed in commit 16bc66d ("llama.cpp : split llama_context_params into model and context params").

Signed-off-by: Daniel Bevenius <[email protected]>
1 parent 424b638 commit 2a4bcba

File tree: 1 file changed (+0, -1 lines)


llama.cpp

Lines changed: 0 additions & 1 deletion
@@ -5721,7 +5721,6 @@ static struct ggml_cgraph * llama_build_graph(
 //
 // - lctx: llama context
 // - batch: batch to evaluate
-// - n_threads: number of threads to use
 //
 // return 0 on success
 // return positive int on warning
