when compiling, the following error occurred: 'identifier cublasGetStatusString is undefined' #1778
Comments
Same issue here, have you solved it?
Same issue here.
I just solved this by upgrading CUDA to 12.1.
Does the CUDA version inside Docker need to match the CUDA version reported by nvidia-smi, or is it enough to upgrade only the CUDA version inside Docker?
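One way to compare the two versions being asked about here (assuming nvcc and nvidia-smi are both on PATH): nvcc reports the toolkit version that compiles ggml-cuda.cu, while nvidia-smi reports the highest CUDA runtime the installed driver supports. The toolkit inside the container can be newer than the host's, but it must not exceed the driver's ceiling.

```shell
# Toolkit version used at compile time (inside the container)
nvcc --version | grep release

# Highest CUDA runtime the host driver supports
nvidia-smi | grep 'CUDA Version'
```

If the first number is larger than the second, the build may succeed but the binary will fail at runtime.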
Same issue; upgrading to 12.1 did not appear to fix it.
Running make CUDART_VERSION=11 may solve it.
I tried it, but it didn't work. The problem still exists.
Have you linked CUDA's dynamic library?
How can I determine whether the CUDA dynamic library is linked?
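To answer the question above: on Linux you can list the shared libraries a built binary will resolve at load time. A sketch (./main is an example binary name from a llama.cpp build; substitute your own):

```shell
# List the dynamic libraries the binary links against, filtered to CUDA ones;
# missing or "not found" entries mean the CUDA libraries are not being resolved
ldd ./main | grep -Ei 'cublas|cudart'

# Show which libcublas the dynamic linker cache would pick system-wide
ldconfig -p | grep libcublas
```

If grep prints nothing, the binary was built without cuBLAS support or the loader cannot find the libraries (check LD_LIBRARY_PATH or /etc/ld.so.conf.d/).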
Add me on WeChat.
Just chiming in to say that I am having the same issue. Here is my log, in case it helps:

I llama.cpp build info:
rm -vf *.o main quantize quantize-stats perplexity embedding benchmark-matmult save-load-state server vdot build-info.h
cc -I. -O3 -std=c11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -c ggml.c -o ggml.o
ggml-cuda.cu(1357): error: identifier "cublasGetStatusString" is undefined
ggml-cuda.cu(1643): error: identifier "cublasGetStatusString" is undefined
ggml-cuda.cu(1644): error: identifier "cublasGetStatusString" is undefined
4 errors detected in the compilation of "ggml-cuda.cu".
I can compile the code on Windows, but when running server.exe I get a related error: "The procedure entry point cublasGetStatusString could not be located in the dynamic link library llama.cpp\build\bin\Release\server.exe". The issue seems to have been introduced with this commit. To work around it, I just deleted the fprintf statement on line 32 of ggml-cuda.cu.
I ran into this because the include folder may be wrong. If you upgrade to CUDA 12, the include paths passed to the build must point to that same version. For example, with CMake:
cmake .. -DLLAMA_CUDA=ON -DCMAKE_CUDA_COMPILER=/usr/local/cuda-12/bin/nvcc -DCUDAToolkit_ROOT=/usr/local/cuda-12
I encountered an error while executing the command
cd llama.cpp && mkdir build && cd build && cmake .. -DLLAMA_CUBLAS=ON && cmake --build . --config Release
in a Docker environment. Below is the log information, including the CUDA version. I'm not sure what the reason is. What should I do?

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Feb_14_21:12:58_PST_2021
Cuda compilation tools, release 11.2, V11.2.152
Build cuda_11.2.r11.2/compiler.29618528_0
Fri Jun 9 13:16:39 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.57.02 Driver Version: 470.57.02 CUDA Version: 11.4 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla V100-PCIE... Off | 00000000:C1:00.0 Off | 0 |
| N/A 26C P0 23W / 250W | 0MiB / 16160MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
-- The C compiler identification is GNU 8.4.0
-- The CXX compiler identification is GNU 8.4.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.17.1")
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Check if compiler accepts -pthread
-- Check if compiler accepts -pthread - yes
-- Found Threads: TRUE
-- Found CUDAToolkit: /usr/local/cuda/include (found version "11.2.152")
-- cuBLAS found
-- The CUDA compiler identification is NVIDIA 11.2.152
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
-- GGML CUDA sources found, configuring CUDA architecture
-- Configuring done
-- Generating done
-- Build files have been written to: /workspace/llama.cpp/build
[ 2%] Built target BUILD_INFO
[ 5%] Building C object CMakeFiles/ggml.dir/ggml.c.o
[ 8%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda.cu.o
/workspace/llama.cpp/ggml-cuda.cu(1068): error: identifier "cublasGetStatusString" is undefined
/workspace/llama.cpp/ggml-cuda.cu(1069): error: identifier "cublasGetStatusString" is undefined
/workspace/llama.cpp/ggml-cuda.cu(1337): error: identifier "cublasGetStatusString" is undefined
/workspace/llama.cpp/ggml-cuda.cu(1338): error: identifier "cublasGetStatusString" is undefined
4 errors detected in the compilation of "/workspace/llama.cpp/ggml-cuda.cu".
make[2]: *** [CMakeFiles/ggml.dir/ggml-cuda.cu.o] Error 1
CMakeFiles/ggml.dir/build.make:89: recipe for target 'CMakeFiles/ggml.dir/ggml-cuda.cu.o' failed
CMakeFiles/Makefile2:359: recipe for target 'CMakeFiles/ggml.dir/all' failed
make[1]: *** [CMakeFiles/ggml.dir/all] Error 2
Makefile:100: recipe for target 'all' failed
make: *** [all] Error 2