CMAKE 'not a git repo' error #380
Comments
@jaymon0703 looks like an issue with the …
Thank you @abetlen, please let me know if I can help with more info.
I'm having the same issue too.
@jaymon0703 looking more closely, I don't think that's the issue, the …
Based on this: ggml-org/llama.cpp#1778 (comment), I would take a look at your CUDA version and ensure it's up to date. @ncfx can you double-check your logs and see if you have the same issue? If not, please open a new issue so we can help. Cheers.
According to ggml-org/llama.cpp#1778, upgrading to CUDA 11.4.2+ should resolve the issue. I am on 11.6.
OK, the toolkit version is older (11.3.109)... let me try updating and report back.
-- Found CUDAToolkit: /usr/local/cuda/include (found version "11.3.109")
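For anyone hitting the same mismatch, here is a quick way to see which toolkit CMake will pick up versus what the driver reports (a minimal sketch, assuming a standard /usr/local/cuda install):

$ nvcc --version               # version of the CUDA toolkit (compiler) the build will use
$ nvidia-smi                   # driver version and the highest CUDA runtime the driver supports
$ readlink -f /usr/local/cuda  # which toolkit installation the /usr/local/cuda symlink resolves to

A driver that reports 11.6 while the toolkit is still at 11.3.109, as above, is consistent with the situation described in ggml-org/llama.cpp#1778.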
Yes, please open a new issue for a new issue.
Hi @jaymon0703, did you manage to compile llama-cpp-python by updating only the toolkit version? Which version did you use? I am experiencing the same issue you originally had and have not managed to compile it successfully.
Sorry for the late reply. CUDA version 12.1, driver version 530.30.02. It is working now.
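For completeness, a minimal sketch of the rebuild after upgrading the toolkit; the PATH line is an assumption about a standard install, and the pip command is the one used in this issue:

$ export PATH=/usr/local/cuda/bin:$PATH   # make sure the newly installed nvcc is found first
$ CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir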
Thank you for this library. It works great on CPU. I am trying to get it working with GPU but get errors building 0.1.60+. It builds fine using the same command for 0.1.59 and below. Thank you!
Prerequisites
Please answer the following questions for yourself before submitting an issue.
Expected Behavior
Expecting llama-cpp-python to build with CMake using the commands below (see Steps to Reproduce):
Current Behavior
Getting the CMake "not a git repo" error from the issue title.
Environment and Context
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
Installing from a Jupyter Notebook. It seems I can install older versions (e.g. 0.1.48) that exclude GPU support; from 0.1.60 and above I get this error. For the older versions I cannot use n_gpu_layers.
$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU @ 2.30GHz
Stepping: 0
CPU MHz: 2299.998
BogoMIPS: 4599.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 256 KiB
L1i cache: 256 KiB
L2 cache: 2 MiB
L3 cache: 45 MiB
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
$ uname -a
Linux -tensorflow-gpu 5.10.0-23-cloud-amd64 #1 SMP Debian 5.10.179-1 (2023-05-12) x86_64 GNU/Linux
Failure Information (for bugs)
As reported above, the build fails with the above error message when using:
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir
Steps to Reproduce
Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.
Run the commands below from a terminal (or a Jupyter Notebook cell, hence the leading "!"):
!export LLAMA_CUBLAS=1
!export LLAMA_CLBLAST=1
!export CMAKE_ARGS=-DLLAMA_CUBLAS=on
!export FORCE_CMAKE=1
!CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir
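One caveat about the steps above (an observation about Jupyter, not from the original report): each ! line runs in its own subshell, so the !export lines do not persist; only the variables set inline on the final pip command actually reach the build. Once the build succeeds, a rough way to confirm that GPU offload is available is to load a model with n_gpu_layers set and watch the startup log; the model path and layer count below are hypothetical:

$ python -c "from llama_cpp import Llama; llm = Llama(model_path='path/to/model.bin', n_gpu_layers=32)"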
Note: Many issues seem to be regarding functional or performance issues/differences with llama.cpp. In these cases we need to confirm that you're comparing against the version of llama.cpp that was built with your Python package, and which parameters you're passing to the context.