Build talk-llama error: no member named 'n_gpu_layers' in 'llama_context_params' #1436

Closed
mattlinares opened this issue Nov 6, 2023 · 0 comments · Fixed by #1441
Labels
bug Something isn't working

Comments

@mattlinares

Hi all, I'm following the build instructions for my MacBook Pro M2 here: https://github.com/ggerganov/whisper.cpp/tree/master/examples/talk-llama

and I'm getting the error: no member named 'n_gpu_layers' in 'llama_context_params'.

Any ideas? Thanks

technical@Matts-MacBook-Pro ~/c/whisper.cpp (master)> make talk-llama                       (base)
I whisper.cpp build info:
I UNAME_S:  Darwin
I UNAME_P:  arm
I UNAME_M:  arm64
I CFLAGS:   -I.              -O3 -DNDEBUG -std=c11   -fPIC -D_XOPEN_SOURCE=600 -D_DARWIN_C_SOURCE -pthread -DGGML_USE_ACCELERATE -DGGML_USE_METAL
I CXXFLAGS: -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -D_XOPEN_SOURCE=600 -D_DARWIN_C_SOURCE -pthread -DGGML_USE_METAL
I LDFLAGS:   -framework Accelerate -framework Foundation -framework Metal -framework MetalKit
I CC:       Apple clang version 15.0.0 (clang-1500.0.40.1)
I CXX:      Apple clang version 15.0.0 (clang-1500.0.40.1)

cc  -I.              -O3 -DNDEBUG -std=c11   -fPIC -D_XOPEN_SOURCE=600 -D_DARWIN_C_SOURCE -pthread -DGGML_USE_ACCELERATE -DGGML_USE_METAL   -c ggml.c -o ggml.o
cc  -I.              -O3 -DNDEBUG -std=c11   -fPIC -D_XOPEN_SOURCE=600 -D_DARWIN_C_SOURCE -pthread -DGGML_USE_ACCELERATE -DGGML_USE_METAL   -c ggml-alloc.c -o ggml-alloc.o
cc  -I.              -O3 -DNDEBUG -std=c11   -fPIC -D_XOPEN_SOURCE=600 -D_DARWIN_C_SOURCE -pthread -DGGML_USE_ACCELERATE -DGGML_USE_METAL   -c ggml-backend.c -o ggml-backend.o
cc  -I.              -O3 -DNDEBUG -std=c11   -fPIC -D_XOPEN_SOURCE=600 -D_DARWIN_C_SOURCE -pthread -DGGML_USE_ACCELERATE -DGGML_USE_METAL   -c ggml-quants.c -o ggml-quants.o
c++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -D_XOPEN_SOURCE=600 -D_DARWIN_C_SOURCE -pthread -DGGML_USE_METAL -c whisper.cpp -o whisper.o
cc -I.              -O3 -DNDEBUG -std=c11   -fPIC -D_XOPEN_SOURCE=600 -D_DARWIN_C_SOURCE -pthread -DGGML_USE_ACCELERATE -DGGML_USE_METAL -c ggml-metal.m -o ggml-metal.o
c++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -D_XOPEN_SOURCE=600 -D_DARWIN_C_SOURCE -pthread -DGGML_USE_METAL examples/talk-llama/talk-llama.cpp examples/talk-llama/llama.cpp examples/common.cpp examples/common-ggml.cpp examples/common-sdl.cpp ggml.o ggml-alloc.o ggml-backend.o ggml-quants.o whisper.o ggml-metal.o -o talk-llama `sdl2-config --cflags --libs`  -framework Accelerate -framework Foundation -framework Metal -framework MetalKit
examples/talk-llama/talk-llama.cpp:280:18: error: no member named 'n_gpu_layers' in 'llama_context_params'
        lcparams.n_gpu_layers = 0;
        ~~~~~~~~ ^
examples/talk-llama/talk-llama.cpp:401:9: warning: 'llama_eval' is deprecated: use llama_decode() instead [-Wdeprecated-declarations]
    if (llama_eval(ctx_llama, embd_inp.data(), embd_inp.size(), 0)) {
        ^
examples/talk-llama/llama.h:436:15: note: 'llama_eval' has been explicitly marked deprecated here
    LLAMA_API DEPRECATED(int llama_eval(
              ^
examples/talk-llama/llama.h:31:56: note: expanded from macro 'DEPRECATED'
#    define DEPRECATED(func, hint) func __attribute__((deprecated(hint)))
                                                       ^
examples/talk-llama/talk-llama.cpp:584:29: warning: 'llama_eval' is deprecated: use llama_decode() instead [-Wdeprecated-declarations]
                        if (llama_eval(ctx_llama, embd.data(), embd.size(), n_past)) {
                            ^
examples/talk-llama/llama.h:436:15: note: 'llama_eval' has been explicitly marked deprecated here
    LLAMA_API DEPRECATED(int llama_eval(
              ^
examples/talk-llama/llama.h:31:56: note: expanded from macro 'DEPRECATED'
#    define DEPRECATED(func, hint) func __attribute__((deprecated(hint)))
                                                       ^
2 warnings and 1 error generated.
make: *** [talk-llama] Error 1