
Commit 35938ee

llama : update logic for number of threads when using BLAS
1 parent 9217721

File tree

1 file changed: +6 -1 lines changed


llama.cpp

Lines changed: 6 additions & 1 deletion
@@ -2942,7 +2942,12 @@ static bool llama_eval_internal(
 
     // for big prompts, if BLAS is enabled, it is better to use only one thread
     // otherwise, the threads are spin-lock waiting for the BLAS calls and are degrading the performance
-    n_threads = N >= 32 && ggml_cpu_has_blas() && !ggml_cpu_has_gpublas() ? 1 : n_threads;
+    // TODO: this is mostly important for Apple Silicon where CBLAS is still performing very well
+    //       we still need some threads to process all non-mul_mat ops, but not too much to avoid interfering
+    //       with the BLAS calls. need a better solution
+    if (N >= 32 && ggml_cpu_has_blas() && !ggml_cpu_has_gpublas()) {
+        n_threads = std::min(4, n_threads);
+    }
 
     struct ggml_tensor * res        = gf->nodes[gf->n_nodes - 1];
     struct ggml_tensor * embeddings = gf->nodes[gf->n_nodes - 2];
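
For context: the old code dropped to a single thread whenever CPU BLAS handled a large batch, while the new code caps the count at four instead, so the non-mul_mat ops in the graph still run in parallel while BLAS does the heavy matrix multiplications. Below is a minimal standalone sketch of that heuristic, assuming N is the number of tokens being evaluated; pick_n_threads and its boolean parameters are hypothetical stand-ins for the real ggml_cpu_has_blas() / ggml_cpu_has_gpublas() checks, and only the capping logic mirrors the commit.

    #include <algorithm>
    #include <cstdio>

    // Hypothetical helper mirroring the thread-capping logic from the commit.
    // n_tokens plays the role of N in llama_eval_internal.
    static int pick_n_threads(int n_tokens, int n_threads, bool has_blas, bool has_gpublas) {
        // For big batches, CPU BLAS does the mul_mat work, so extra threads
        // mostly spin-wait; keep a few threads for the remaining ops.
        if (n_tokens >= 32 && has_blas && !has_gpublas) {
            return std::min(4, n_threads);
        }
        return n_threads;
    }

    int main() {
        std::printf("512-token prompt: %d threads\n", pick_n_threads(512, 8, true, false)); // prints 4
        std::printf("single token:     %d threads\n", pick_n_threads(1,   8, true, false)); // prints 8
        return 0;
    }

The 32-token threshold matches the condition in the diff: below it, evaluation proceeds token by token, mul_mat is not offloaded to BLAS, and the full requested thread count remains useful.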
