Bug: Can't quantize 405B Mega merge #8528

Closed

Description

@bartowski1182

What happened?

Trying to quantize https://huggingface.co/TensorWave/Meta-Llama-3-405B-Instruct-Up-Merge

I was able to convert without issue, but when trying to quantize I hit an annoyingly generic assert:

GGML_ASSERT: src/llama.cpp:3973: n <= N_MAX

Is there anything I can do to get more useful output or to debug this further?

Name and Version

b3389

What operating system are you seeing the problem on?

No response

Relevant log output

main: build = 3389 (73cf442e)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: quantizing '/models_out/Meta-Llama-3-405B-Instruct-Up-Merge-GGUF/Meta-Llama-3-405B-Instruct-Up-Merge-f16.gguf' to '/models_out/Meta-Llama-3-405B-Instruct-Up-Merge-GGUF/Meta-Llama-3-405B-Instruct-Up-Merge-Q4_K_M.gguf' as Q4_K_M
llama_model_loader: loaded meta data with 22 key-value pairs and 4242 tensors from /models_out/Meta-Llama-3-405B-Instruct-Up-Merge-GGUF/Meta-Llama-3-405B-Instruct-Up-Merge-f16.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = Meta-Llama-3-405B-Instruct-Up-Merge
llama_model_loader: - kv   2:                          llama.block_count u32              = 471
llama_model_loader: - kv   3:                       llama.context_length u32              = 8192
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 8192
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 28672
llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 64
llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 1
llama_model_loader: - kv  11:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  12:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  14:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  16:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  17:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  20:                    tokenizer.chat_template str              = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv  21:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  943 tensors
llama_model_loader: - type  f16: 3299 tensors
GGML_ASSERT: src/llama.cpp:3973: n <= N_MAX

Metadata

Assignees

No one assigned

    Labels

    bug-unconfirmed, low severity (used to report low severity bugs in llama.cpp, e.g. cosmetic issues, non-critical UI glitches)
