CMAKE 'not a git repo' error #380

Closed

jaymon0703 opened this issue Jun 15, 2023 · 10 comments
Labels
bug (Something isn't working), build

Comments

jaymon0703 commented Jun 15, 2023

Thank you for this library. It works great on CPU. I'm trying to get it working on GPU, but I get errors building 0.1.60 and above; the same command builds fine for 0.1.59 and below.

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • [x] I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • [x] I carefully followed the README.md.
  • [x] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • [x] I reviewed the Discussions, and have a new bug or useful enhancement to share.

Expected Behavior

I expect to build llama-cpp-python with CMake using the commands below:

!export LLAMA_CUBLAS=1
!export LLAMA_CLBLAST=1 
!export CMAKE_ARGS=-DLLAMA_CUBLAS=on
!export FORCE_CMAKE=1
!CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir
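
(Note: in a notebook, each ! command runs in its own subshell, so the !export lines above never reach the pip invocation; only the inline assignments on the final line take effect. A minimal sketch of an equivalent setup using IPython's %env magic, which persists for the session:)

%env CMAKE_ARGS=-DLLAMA_CUBLAS=on
%env FORCE_CMAKE=1
!pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir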

Current Behavior

Instead, I get this error:

Collecting llama-cpp-python
  Downloading llama_cpp_python-0.1.63.tar.gz (1.5 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.5/1.5 MB 21.0 MB/s eta 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Collecting typing-extensions>=4.5.0 (from llama-cpp-python)
  Downloading typing_extensions-4.6.3-py3-none-any.whl (31 kB)
Collecting numpy>=1.20.0 (from llama-cpp-python)
  Downloading numpy-1.24.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 17.3/17.3 MB 144.1 MB/s eta 0:00:00
Collecting diskcache>=5.6.1 (from llama-cpp-python)
  Downloading diskcache-5.6.1-py3-none-any.whl (45 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.6/45.6 kB 213.5 MB/s eta 0:00:00
Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml) ... error
  error: subprocess-exited-with-error
  
  × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [118 lines of output]
      
      
      --------------------------------------------------------------------------------
      -- Trying 'Ninja' generator
      --------------------------------
      ---------------------------
      ----------------------
      -----------------
      ------------
      -------
      --
      Not searching for unused variables given on the command line.
      -- The C compiler identification is GNU 10.2.1
      -- Detecting C compiler ABI info
      -- Detecting C compiler ABI info - done
      -- Check for working C compiler: /usr/bin/cc - skipped
      -- Detecting C compile features
      -- Detecting C compile features - done
      -- The CXX compiler identification is GNU 10.2.1
      -- Detecting CXX compiler ABI info
      -- Detecting CXX compiler ABI info - done
      -- Check for working CXX compiler: /usr/bin/c++ - skipped
      -- Detecting CXX compile features
      -- Detecting CXX compile features - done
      -- Configuring done (0.3s)
      -- Generating done (0.0s)
      -- Build files have been written to: /var/tmp/pip-install-hr0ujag3/llama-cpp-python_a0f7fbe92964488181f177fb5a871b7b/_cmake_test_compile/build
      --
      -------
      ------------
      -----------------
      ----------------------
      ---------------------------
      --------------------------------
      -- Trying 'Ninja' generator - success
      --------------------------------------------------------------------------------
      
      Configuring Project
        Working directory:
          /var/tmp/pip-install-hr0ujag3/llama-cpp-python_a0f7fbe92964488181f177fb5a871b7b/_skbuild/linux-x86_64-3.10/cmake-build
        Command:
          /var/tmp/pip-build-env-jr1p4o6c/overlay/lib/python3.10/site-packages/cmake/data/bin/cmake /var/tmp/pip-install-hr0ujag3/llama-cpp-python_a0f7fbe92964488181f177fb5a871b7b -G Ninja -DCMAKE_MAKE_PROGRAM:FILEPATH=/var/tmp/pip-build-env-jr1p4o6c/overlay/lib/python3.10/site-packages/ninja/data/bin/ninja --no-warn-unused-cli -DCMAKE_INSTALL_PREFIX:PATH=/var/tmp/pip-install-hr0ujag3/llama-cpp-python_a0f7fbe92964488181f177fb5a871b7b/_skbuild/linux-x86_64-3.10/cmake-install -DPYTHON_VERSION_STRING:STRING=3.10.10 -DSKBUILD:INTERNAL=TRUE -DCMAKE_MODULE_PATH:PATH=/var/tmp/pip-build-env-jr1p4o6c/overlay/lib/python3.10/site-packages/skbuild/resources/cmake -DPYTHON_EXECUTABLE:PATH=/opt/conda/bin/python3.10 -DPYTHON_INCLUDE_DIR:PATH=/opt/conda/include/python3.10 -DPYTHON_LIBRARY:PATH=/opt/conda/lib/libpython3.10.so -DPython_EXECUTABLE:PATH=/opt/conda/bin/python3.10 -DPython_ROOT_DIR:PATH=/opt/conda -DPython_FIND_REGISTRY:STRING=NEVER -DPython_INCLUDE_DIR:PATH=/opt/conda/include/python3.10 -DPython3_EXECUTABLE:PATH=/opt/conda/bin/python3.10 -DPython3_ROOT_DIR:PATH=/opt/conda -DPython3_FIND_REGISTRY:STRING=NEVER -DPython3_INCLUDE_DIR:PATH=/opt/conda/include/python3.10 -DCMAKE_MAKE_PROGRAM:FILEPATH=/var/tmp/pip-build-env-jr1p4o6c/overlay/lib/python3.10/site-packages/ninja/data/bin/ninja -DLLAMA_CUBLAS=on -DCMAKE_BUILD_TYPE:STRING=Release -DLLAMA_CUBLAS=on
      
      Not searching for unused variables given on the command line.
      -- The C compiler identification is GNU 10.2.1
      -- The CXX compiler identification is GNU 10.2.1
      -- Detecting C compiler ABI info
      -- Detecting C compiler ABI info - done
      -- Check for working C compiler: /usr/bin/cc - skipped
      -- Detecting C compile features
      -- Detecting C compile features - done
      -- Detecting CXX compiler ABI info
      -- Detecting CXX compiler ABI info - done
      -- Check for working CXX compiler: /usr/bin/c++ - skipped
      -- Detecting CXX compile features
      -- Detecting CXX compile features - done
      -- Found Git: /usr/bin/git (found version "2.30.2")
      fatal: not a git repository (or any of the parent directories): .git
      fatal: not a git repository (or any of the parent directories): .git
      CMake Warning at vendor/llama.cpp/CMakeLists.txt:111 (message):
        Git repository not found; to enable automatic generation of build info,
        make sure Git is installed and the project is a Git repository.
      
      
      -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
      -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
      -- Check if compiler accepts -pthread
      -- Check if compiler accepts -pthread - yes
      -- Found Threads: TRUE
      -- Found CUDAToolkit: /usr/local/cuda/include (found version "11.3.109")
      -- cuBLAS found
      -- The CUDA compiler identification is NVIDIA 11.3.109
      -- Detecting CUDA compiler ABI info
      -- Detecting CUDA compiler ABI info - done
      -- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc - skipped
      -- Detecting CUDA compile features
      -- Detecting CUDA compile features - done
      -- CMAKE_SYSTEM_PROCESSOR: x86_64
      -- x86 detected
      -- GGML CUDA sources found, configuring CUDA architecture
      -- Configuring done (1.9s)
      -- Generating done (0.0s)
      -- Build files have been written to: /var/tmp/pip-install-hr0ujag3/llama-cpp-python_a0f7fbe92964488181f177fb5a871b7b/_skbuild/linux-x86_64-3.10/cmake-build
      [1/6] Building CUDA object vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-cuda.cu.o
      FAILED: vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-cuda.cu.o
      /usr/local/cuda/bin/nvcc -forward-unknown-to-host-compiler -DGGML_CUDA_DMMV_X=32 -DGGML_CUDA_DMMV_Y=1 -DGGML_USE_CUBLAS -DGGML_USE_K_QUANTS -I/var/tmp/pip-install-hr0ujag3/llama-cpp-python_a0f7fbe92964488181f177fb5a871b7b/vendor/llama.cpp/. -isystem /usr/local/cuda/include -O3 -DNDEBUG -std=c++11 -Xcompiler=-fPIC -mf16c -mfma -mavx -mavx2 -Xcompiler -pthread -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-cuda.cu.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-cuda.cu.o.d -x cu -c /var/tmp/pip-install-hr0ujag3/llama-cpp-python_a0f7fbe92964488181f177fb5a871b7b/vendor/llama.cpp/ggml-cuda.cu -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-cuda.cu.o
      /var/tmp/pip-install-hr0ujag3/llama-cpp-python_a0f7fbe92964488181f177fb5a871b7b/vendor/llama.cpp/ggml-cuda.cu(1356): error: identifier "cublasGetStatusString" is undefined
      
      /var/tmp/pip-install-hr0ujag3/llama-cpp-python_a0f7fbe92964488181f177fb5a871b7b/vendor/llama.cpp/ggml-cuda.cu(1357): error: identifier "cublasGetStatusString" is undefined
      
      /var/tmp/pip-install-hr0ujag3/llama-cpp-python_a0f7fbe92964488181f177fb5a871b7b/vendor/llama.cpp/ggml-cuda.cu(1643): error: identifier "cublasGetStatusString" is undefined
      
      /var/tmp/pip-install-hr0ujag3/llama-cpp-python_a0f7fbe92964488181f177fb5a871b7b/vendor/llama.cpp/ggml-cuda.cu(1644): error: identifier "cublasGetStatusString" is undefined
      
      4 errors detected in the compilation of "/var/tmp/pip-install-hr0ujag3/llama-cpp-python_a0f7fbe92964488181f177fb5a871b7b/vendor/llama.cpp/ggml-cuda.cu".
      [2/6] Building C object vendor/llama.cpp/CMakeFiles/ggml.dir/k_quants.c.o
      [3/6] Building CXX object vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o
      [4/6] Building C object vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o
      ninja: build stopped: subcommand failed.
      Traceback (most recent call last):
        File "/var/tmp/pip-build-env-jr1p4o6c/overlay/lib/python3.10/site-packages/skbuild/setuptools_wrap.py", line 674, in setup
          cmkr.make(make_args, install_target=cmake_install_target, env=env)
        File "/var/tmp/pip-build-env-jr1p4o6c/overlay/lib/python3.10/site-packages/skbuild/cmaker.py", line 697, in make
          self.make_impl(clargs=clargs, config=config, source_dir=source_dir, install_target=install_target, env=env)
        File "/var/tmp/pip-build-env-jr1p4o6c/overlay/lib/python3.10/site-packages/skbuild/cmaker.py", line 742, in make_impl
          raise SKBuildError(msg)
      
      An error occurred while building with CMake.
        Command:
          /var/tmp/pip-build-env-jr1p4o6c/overlay/lib/python3.10/site-packages/cmake/data/bin/cmake --build . --target install --config Release --
        Install target:
          install
        Source directory:
          /var/tmp/pip-install-hr0ujag3/llama-cpp-python_a0f7fbe92964488181f177fb5a871b7b
        Working directory:
          /var/tmp/pip-install-hr0ujag3/llama-cpp-python_a0f7fbe92964488181f177fb5a871b7b/_skbuild/linux-x86_64-3.10/cmake-build
      Please check the install target is valid and see CMake's output for more information.

Environment and Context

PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

I am installing from a Jupyter notebook. It seems I can install older versions (e.g. 0.1.48) that exclude GPU support; from 0.1.60 and above I get this error. With the older versions I cannot use n_gpu_layers.
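
(As a stopgap, pinning one of those pre-0.1.60 releases builds, but lacks GPU offload:)

pip install llama-cpp-python==0.1.48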

$ lscpu

Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU @ 2.30GHz
Stepping: 0
CPU MHz: 2299.998
BogoMIPS: 4599.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 256 KiB
L1i cache: 256 KiB
L2 cache: 2 MiB
L3 cache: 45 MiB
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

$ uname -a
Linux -tensorflow-gpu 5.10.0-23-cloud-amd64 #1 SMP Debian 5.10.179-1 (2023-05-12) x86_64 GNU/Linux

$ python3 --version
Python 3.10.10

$ make --version
GNU Make 4.3
Built for x86_64-pc-linux-gnu

$ g++ --version
g++ (Debian 10.2.1-6) 10.2.1 20210110

Failure Information (for bugs)

As reported above, the build fails with the above error message when using CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir

Steps to Reproduce

Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.

Run the commands below from a terminal (see the note under Expected Behavior about !export in notebooks):
!export LLAMA_CUBLAS=1
!export LLAMA_CLBLAST=1
!export CMAKE_ARGS=-DLLAMA_CUBLAS=on
!export FORCE_CMAKE=1
!CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir

Note: Many issues seem to be regarding functional or performance issues / differences with llama.cpp. In these cases we need to confirm that you're comparing against the version of llama.cpp that was built with your python package, and which parameters you're passing to the context.

gjmulder added the build and hardware (Hardware specific issue) labels on Jun 15, 2023
abetlen (Owner) commented Jun 16, 2023

@jaymon0703 looks like an issue with the build-info.h that's supposed to be generated for debugging purposes in llama.cpp. I'll try to reproduce this.
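
(For context: the two "fatal: not a git repository" lines in the log match the git commands the build runs to stamp build-info.h, roughly along these lines; since the pip sdist is not a git checkout, both fail and CMake falls back with a warning. The commands are a sketch, not verbatim upstream code:)

git rev-list --count HEAD     # build number
git rev-parse --short HEAD    # commit hash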

gjmulder added the bug (Something isn't working) label and removed the hardware (Hardware specific issue) label on Jun 16, 2023
jaymon0703 (Author) commented:

Thank you @abetlen. Please let me know if I can help with more info.

ncfx commented Jun 16, 2023

I'm having the same issue too

abetlen (Owner) commented Jun 18, 2023

@jaymon0703 looking more closely, I don't think that's the issue. The fatal: not a git repo message comes from the git command, but CMake only registers it as a warning and skips generating build-info.h. Your actual error is:

[1/6] Building CUDA object vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-cuda.cu.o
      FAILED: vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-cuda.cu.o
      /usr/local/cuda/bin/nvcc -forward-unknown-to-host-compiler -DGGML_CUDA_DMMV_X=32 -DGGML_CUDA_DMMV_Y=1 -DGGML_USE_CUBLAS -DGGML_USE_K_QUANTS -I/var/tmp/pip-install-hr0ujag3/llama-cpp-python_a0f7fbe92964488181f177fb5a871b7b/vendor/llama.cpp/. -isystem /usr/local/cuda/include -O3 -DNDEBUG -std=c++11 -Xcompiler=-fPIC -mf16c -mfma -mavx -mavx2 -Xcompiler -pthread -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-cuda.cu.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-cuda.cu.o.d -x cu -c /var/tmp/pip-install-hr0ujag3/llama-cpp-python_a0f7fbe92964488181f177fb5a871b7b/vendor/llama.cpp/ggml-cuda.cu -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-cuda.cu.o
      /var/tmp/pip-install-hr0ujag3/llama-cpp-python_a0f7fbe92964488181f177fb5a871b7b/vendor/llama.cpp/ggml-cuda.cu(1356): error: identifier "cublasGetStatusString" is undefined
      
      /var/tmp/pip-install-hr0ujag3/llama-cpp-python_a0f7fbe92964488181f177fb5a871b7b/vendor/llama.cpp/ggml-cuda.cu(1357): error: identifier "cublasGetStatusString" is undefined
      
      /var/tmp/pip-install-hr0ujag3/llama-cpp-python_a0f7fbe92964488181f177fb5a871b7b/vendor/llama.cpp/ggml-cuda.cu(1643): error: identifier "cublasGetStatusString" is undefined
      
      /var/tmp/pip-install-hr0ujag3/llama-cpp-python_a0f7fbe92964488181f177fb5a871b7b/vendor/llama.cpp/ggml-cuda.cu(1644): error: identifier "cublasGetStatusString" is undefined
      
      4 errors detected in the compilation of "/var/tmp/pip-install-hr0ujag3/llama-cpp-python_a0f7fbe92964488181f177fb5a871b7b/vendor/llama.cpp/ggml-cuda.cu".

Based on this:

ggml-org/llama.cpp#1778 (comment)

I would take a look at your CUDA version and ensure it's up to date.
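
(For a quick check: the toolkit version that nvcc reports is what the build uses; nvidia-smi shows the driver's maximum supported CUDA version, which can be newer than the installed toolkit:)

/usr/local/cuda/bin/nvcc --version    # toolkit version used by the build
nvidia-smi                            # driver version and max supported CUDA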

@ncfx can you double-check your logs and see if you have the same issue? If not, please open a new issue so we can help.

Cheers.

jaymon0703 (Author) commented Jun 18, 2023

According to ggml-org/llama.cpp#1778, upgrading to CUDA 11.4.2+ should resolve the issue. I am on 11.6.

[screenshot]

jaymon0703 (Author) commented:

OK, the toolkit version is older (11.3.109)... let me try updating and revert.

-- Found CUDAToolkit: /usr/local/cuda/include (found version "11.3.109")
-- cuBLAS found
-- The CUDA compiler identification is NVIDIA 11.3.109
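
(One possible route to a newer toolkit on Debian, assuming NVIDIA's apt repository is already configured; the package name is illustrative, so check NVIDIA's install guide for your distro:)

sudo apt-get update
sudo apt-get install -y cuda-toolkit-12-1
export PATH=/usr/local/cuda-12.1/bin:${PATH}
nvcc --version    # should now report 12.1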

jaymon0703 (Author) commented:

Thank you. I can now compile llama-cpp-python v0.1.64, but I still get this error...

[screenshot]

I thought the problem was that I had older versions of llama-cpp-python and could not compile more recent versions...

Shall I close this and create a new issue?

Thanks for your help.

gjmulder (Contributor) commented:

Yes, please open a new issue for a new issue.

timokinyanjui commented:

> Ok, toolkit version is older (11.3.109)...let me try update and revert
>
> -- Found CUDAToolkit: /usr/local/cuda/include (found version "11.3.109")
> -- cuBLAS found
> -- The CUDA compiler identification is NVIDIA 11.3.109

Hi @jaymon0703, did you manage to compile llama-cpp-python by only updating the toolkit version? Which version did you use? I am experiencing the same issue as you originally had and I have not managed to compile it successfully.

jaymon0703 (Author) commented:

Sorry for the late reply. CUDA version 12.1, driver version 530.30.02. It is working now.
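
(For reference, a minimal way to confirm the rebuilt wheel actually offloads to the GPU; the model path is a placeholder for your own model file:)

CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir
python3 -c "from llama_cpp import Llama; Llama(model_path='./model.bin', n_gpu_layers=32)"

You should see llama.cpp report BLAS = 1 and layer offload messages in the startup log.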
