
After b1177 it fails to load GGUF models on Intel macOS 13.5.1 #3092


Description

@mayulu

Hi,
Since a couple of days (maybe 7~8) ago, the latest builds have kept failing to load GGUF models.
Before that it worked very well.
I followed the instructions in https://github.com/ggerganov/llama.cpp/issues/1730#issuecomment-1636055251, but it did not help.
When I run main, the error message is:

$bin git:(master) ./main -m /Users/mayulu/Documents/llama-2-7b.ggmlv3.q4_K_M.gguf -p "Hello"

Log start
main: build = 1207 (ec2a24f)
main: seed  = 1694227613
llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from /Users/mayulu/Documents/llama-2-7b.ggmlv3.q4_K_M.gguf (version GGUF V2 (latest))
llama_model_loader: - tensor    0:                token_embd.weight q4_K     [  4096, 32000,     1,     1 ]

.......

ggml_metal_init: loaded kernel_mul_mat_q5_K_f32            0x7fcdf8c1ba90 | th_max =  512 | th_width =   32
ggml_metal_init: loaded kernel_mul_mat_q6_K_f32            0x7fce09106b10 | th_max = 1024 | th_width =   32
ggml_metal_init: loaded kernel_mul_mm_f16_f32                         0x0 | th_max =    0 | th_width =    0
ggml_metal_init: load pipeline error: Error Domain=CompilerError Code=2 "SC compilation failure
There is a call to an undefined label" UserInfo={NSLocalizedDescription=SC compilation failure
There is a call to an undefined label}
llama_new_context_with_model: ggml_metal_init() failed
llama_init_from_gpt_params: error: failed to create context with model '/Users/mayulu/Documents/GPT4ALL/llama-2-7b.ggmlv3.q4_K_M.gguf'
main: error: unable to load model

ENV: Intel macOS 13.5.1, llama-cpp-python 0.1.83.

Could you give me some clue? @ggerganov Thanks.
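
In the meantime, a possible workaround sketch (assuming the Metal backend is what became enabled by default around b1177, and that the LLAMA_NO_METAL make switch and the LLAMA_METAL CMake option apply to this build) would be to rebuild without Metal so inference falls back to the CPU:

# plain llama.cpp: rebuild with the Metal backend disabled (CPU only)
make clean
LLAMA_NO_METAL=1 make

# llama-cpp-python: reinstall with Metal turned off via CMake
CMAKE_ARGS="-DLLAMA_METAL=OFF" FORCE_CMAKE=1 pip install --force-reinstall --no-cache-dir llama-cpp-python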
