
Wrong switches: unreachable-code-break and unreachable-code-return, at least on Termux #1494


Open
Manamama opened this issue May 29, 2024

Manamama commented May 29, 2024

Prerequisites

Yes ;)

Current Behavior

FAILED: vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o
ccache /usr/bin/clang -DGGML_SCHED_MAX_COPIES=4 -DGGML_USE_LLAMAFILE -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -I/root/downloads_Termux/llama-cpp-python/vendor/llama.cpp/. -O3 -DNDEBUG -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion -std=gnu11 -MD -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -MF vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o.d -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o -c /root/downloads_Termux/llama-cpp-python/vendor/llama.cpp/ggml.c
clang: error: unrecognized command-line option ‘-Wunreachable-code-break’; did you mean ‘-Wunreachable-code’?
clang: error: unrecognized command-line option ‘-Wunreachable-code-return’; did you mean ‘-Wunreachable-code’?

etc.
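To confirm it is the compiler rejecting the switches rather than anything in the build system, a standalone probe works (the file paths here are just illustrative):

printf 'int main(void){return 0;}\n' > /tmp/flagtest.c
clang -Wunreachable-code-break -c /tmp/flagtest.c -o /tmp/flagtest.o
# On the affected clang this fails with the same "unrecognized
# command-line option" error; clang itself suggests plain
# -Wunreachable-code instead.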

Probably related to this.

Environment and Context

Termux running a proot-ed Debian; the same failure also occurs in plain Termux.

Box specs, in short:

Environment at local 🎋 prooted system:
Linux localhost 6.2.1-PRoot-Distro #1 SMP PREEMPT Thu Mar 17 16:28:22 CST 2022 aarch64 GNU/Linux
PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/games:/usr/games:/system/bin:/system/xbin:/data/data/com.termux/files/home/.local/bin/:/data/data/com.termux/files/usr/bin/
LD_LIBRARY_PATH:
CFLAGS:
LDFLAGS:
CPPFLAGS:
C_INCLUDE_PATH:
CPLUS_INCLUDE_PATH:
USE_VULKAN: OFF

Running command Building wheel for llama_cpp_python (pyproject.toml)
*** scikit-build-core 0.9.4 using CMake 3.29.3 (wheel)
*** Configuring CMake...
loading initial cache file /data/data/com.termux/files/usr/tmp/tmpbfzw_fsw/build/CMakeInit.txt
-- The C compiler identification is Clang 18.1.6
-- The CXX compiler identification is Clang 18.1.6
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /data/data/com.termux/files/usr/bin/clang - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /data/data/com.termux/files/usr/bin/clang++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /data/data/com.termux/files/usr/bin/git (found version "2.45.1")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Check if compiler accepts -pthread
-- Check if compiler accepts -pthread - yes
-- Found Threads: TRUE
-- ccache found, compilation results will be cached. Disable with LLAMA_CCACHE=OFF.
-- CMAKE_SYSTEM_PROCESSOR: aarch64
-- ARM detected
-- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E
-- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E - Failed

Steps to fix

Strip the two unsupported switches from the vendored CMakeLists.txt:

sed -i 's/-Wunreachable-code-break//g; s/-Wunreachable-code-return//g' vendor/llama.cpp/CMakeLists.txt
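Optionally, confirm the substitution took effect before rebuilding (the echo message is just illustrative):

grep -nE 'unreachable-code-(break|return)' vendor/llama.cpp/CMakeLists.txt || echo "offending switches removed"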

Rebuilding then completes cleanly:

[27/27] : && /usr/bin/clang++ -O3 -DNDEBUG vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava-cli.dir/llava-cli.cpp.o -o vendor/llama.cpp/examples/llava/llava-cli -Wl,-rpath,/tmp/tmpqgrx623c/build/vendor/llama.cpp: vendor/llama.cpp/common/libcommon.a vendor/llama.cpp/libllama.so && :
*** Installing project into wheel...
-- Install configuration: "Release"
-- Installing: /tmp/tmpqgrx623c/wheel/platlib/lib/libggml_shared.so
-- Installing: /tmp/tmpqgrx623c/wheel/platlib/lib/cmake/Llama/LlamaConfig.cmake
-- Installing: /tmp/tmpqgrx623c/wheel/platlib/lib/cmake/Llama/LlamaConfigVersion.cmake
-- Installing: /tmp/tmpqgrx623c/wheel/platlib/include/ggml.h
-- Installing: /tmp/tmpqgrx623c/wheel/platlib/include/ggml-alloc.h
-- Installing: /tmp/tmpqgrx623c/wheel/platlib/include/ggml-backend.h
-- Installing: /tmp/tmpqgrx623c/wheel/platlib/lib/libllama.so
-- Installing: /tmp/tmpqgrx623c/wheel/platlib/include/llama.h
-- Installing: /tmp/tmpqgrx623c/wheel/platlib/bin/convert.py
-- Installing: /tmp/tmpqgrx623c/wheel/platlib/llama_cpp/libllama.so
-- Installing: /root/downloads_Termux/llama-cpp-python/llama_cpp/libllama.so
-- Installing: /tmp/tmpqgrx623c/wheel/platlib/lib/libllava.so
-- Set runtime path of "/tmp/tmpqgrx623c/wheel/platlib/lib/libllava.so" to ""
-- Installing: /tmp/tmpqgrx623c/wheel/platlib/bin/llava-cli
-- Set runtime path of "/tmp/tmpqgrx623c/wheel/platlib/bin/llava-cli" to ""
-- Installing: /tmp/tmpqgrx623c/wheel/platlib/llama_cpp/libllava.so
-- Set runtime path of "/tmp/tmpqgrx623c/wheel/platlib/llama_cpp/libllava.so" to ""
-- Installing: /root/downloads_Termux/llama-cpp-python/llama_cpp/libllava.so
-- Set runtime path of "/root/downloads_Termux/llama-cpp-python/llama_cpp/libllava.so" to ""
*** Making wheel...
*** Created llama_cpp_python-0.2.76-cp311-cp311-linux_aarch64.whl...
Building wheel for llama_cpp_python (pyproject.toml) ... done
Created wheel for llama_cpp_python: filename=llama_cpp_python-0.2.76-cp311-cp311-linux_aarch64.whl size=3334239 sha256=da0322b5d7a676e2d0bab3e2d6cc65f888937bcc676f854e72a378fb574438f8
Stored in directory: /root/.cache/pip/wheels/78/ec/ae/e102e407170d9e380949aaa8aafc0646a62be8208c8008f3db
Successfully built llama_cpp_python
Installing collected packages: llama_cpp_python
Successfully installed llama_cpp_python-0.2.76
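A quick sanity check that the freshly built module imports (Python 3.11, matching the wheel tag above):

python3 -c "import llama_cpp; print(llama_cpp.__version__)"
# prints 0.2.76 if the install above succeeded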

and pip confirms the installation:

root@localhost:~/downloads_Termux/llama-cpp-python# pip show llama-cpp-python
Name: llama_cpp_python
Version: 0.2.76
Summary: Python bindings for the llama.cpp library
Home-page:
Author:
Author-email: Andrei Betlen [email protected]
License: MIT
Location: /usr/local/lib/python3.11/dist-packages
Requires: diskcache, jinja2, numpy, typing-extensions
Required-by:

etc.
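A more durable fix than the sed edit would be for the build to probe the compiler before enabling these warnings. A minimal CMake sketch of the idea (not the actual llama.cpp code; the result-variable names here are illustrative):

include(CheckCCompilerFlag)

# Only enable each warning if the active compiler accepts it.
check_c_compiler_flag("-Wunreachable-code-break" C_SUPPORTS_UNREACHABLE_CODE_BREAK)
if(C_SUPPORTS_UNREACHABLE_CODE_BREAK)
    add_compile_options(-Wunreachable-code-break)
endif()

check_c_compiler_flag("-Wunreachable-code-return" C_SUPPORTS_UNREACHABLE_CODE_RETURN)
if(C_SUPPORTS_UNREACHABLE_CODE_RETURN)
    add_compile_options(-Wunreachable-code-return)
endif()

On a toolchain like the one in the log above, both checks would fail and the switches would simply be skipped instead of breaking the build.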

Manamama changed the title from "Wrong unreachable-code-break switch" to "Wrong switches: unreachable-code-break and unreachable-code-return, at least on Termux" on May 29, 2024