@@ -52,31 +52,31 @@ Otherwise, while installing it will build the llama.cpp x86 version which will be …

To install with OpenBLAS, set the `LLAMA_BLAS` and `LLAMA_BLAS_VENDOR` environment variables before installing:

```bash
- CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
+ CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" FORCE_CMAKE=1 pip install llama-cpp-python
```
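
The one-liner above sets the build variables only for that single `pip` invocation. A sketch of the equivalent two-step form, together with the standard pip flags that force a rebuild when a CPU-only wheel is already installed, might look like this:

```bash
# Assumed equivalent: export the build variables for the shell session, then install.
export CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS"
export FORCE_CMAKE=1

# --force-reinstall and --no-cache-dir are standard pip options; they make pip
# rebuild the package instead of reusing a previously cached CPU-only wheel.
pip install llama-cpp-python --upgrade --force-reinstall --no-cache-dir
```

The same pattern applies to the cuBLAS, CLBlast, Metal, and hipBLAS variants below; only the `CMAKE_ARGS` value changes.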

To install with cuBLAS, set the `LLAMA_CUBLAS=1` environment variable before installing:

```bash
- CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
+ CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python
```

To install with CLBlast, set the `LLAMA_CLBLAST=1` environment variable before installing:

```bash
- CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
+ CMAKE_ARGS="-DLLAMA_CLBLAST=on" FORCE_CMAKE=1 pip install llama-cpp-python
```

To install with Metal (MPS), set the `LLAMA_METAL=on` environment variable before installing:

```bash
- CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
+ CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python
```

To install with hipBLAS / ROCm support for AMD cards, set the `LLAMA_HIPBLAS=on` environment variable before installing:

```bash
- CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
+ CMAKE_ARGS="-DLLAMA_HIPBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python
```
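
After any of the GPU-enabled builds above, a quick way to confirm that layers are actually being offloaded is to load a model with `n_gpu_layers` set and watch the startup log. This is a minimal sketch; the model path and layer count are placeholders that depend on your model and available VRAM:

```bash
# Load a model with some layers offloaded to the GPU and run a short completion.
# model_path and n_gpu_layers below are placeholders, not values from this repository.
python - <<'EOF'
from llama_cpp import Llama

llm = Llama(model_path="./models/7B/model.bin", n_gpu_layers=32, verbose=True)
out = llm("Q: Name the planets in the solar system. A:", max_tokens=32)
print(out["choices"][0]["text"])
EOF
```

With `verbose=True`, the llama.cpp startup log reports how many layers were offloaded to the GPU, which makes it easy to tell whether the accelerated build is actually in use.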
#### Windows remarks