
Low prompt processing speed with mixtral? #6740


Closed
LiquidGunay opened this issue Apr 18, 2024 · 7 comments

Comments

@LiquidGunay

I am running WizardLM2-8x22B IQ4_XS on an AWS g5.12xlarge (split across 4 A10s). I haven't run a model of this size before, but I am getting around 95 t/s prompt processing and 14 t/s generation (fully offloaded to the GPU). What I noticed is that the ratio of prompt processing speed to generation speed is much lower for this model than for smaller models. Can anyone explain why this is the case? Any suggestions for running the model faster on this system without too much quality loss? Thanks.
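One plausible explanation (not confirmed in this thread): with a mixture-of-experts model, single-token generation only reads the two routed experts, but during batched prompt processing different tokens route to different experts, so a large batch tends to touch all eight. The weight traffic per batch then approaches the full model size, which erodes the usual batching speedup. A rough back-of-envelope sketch in Python; the 85% expert share is a hypothetical illustrative number, not a spec:

```python
# Rough parameter-count arithmetic for a Mixtral-style MoE model.
n_experts = 8          # llama.expert_count
n_active = 2           # llama.expert_used_count
total_params_b = 141   # ~8x22B total (the log below reports 140.63 B)

# Hypothetical split: assume ~85% of parameters live in the expert FFNs.
expert_share = 0.85
expert_params = total_params_b * expert_share
shared_params = total_params_b - expert_params

# Single-token decoding reads only the routed experts:
active_per_token = shared_params + expert_params * n_active / n_experts
print(f"~{active_per_token:.0f}B params read per generated token")

# A large prompt batch tends to route across all experts, so weight
# reads approach the full model size in the worst case:
print(f"~{total_params_b}B params read per batch in the worst case")
```

Under these assumptions each generated token touches only about a third of the weights, while a big prompt batch can touch nearly all of them, which would compress the prompt-processing-to-generation ratio compared to a dense model.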

@stefanvarunix

stefanvarunix commented Apr 19, 2024

Same issue here. The GPU seems underutilized with Mixtral 8x22B:

Hardware
Apple M1 Ultra, 64-core GPU, 128 GB unified RAM, of which 118784 MB is available to the GPU.

Model
Mixtral-8x22B-Instruct-v0.1-GGUF, Q6.
Source: Huggingface MaziyarPanahi/Mixtral-8x22B-Instruct-v0.1-GGUF/Mixtral-8x22B-Instruct-v0.1.Q6-00001-of-00004.gguf

Sample chat
55 predicted, 86 cached, 3828 ms per token, 0.26 tokens per second. Very, very slow; way slower than any other model (whereas Mixtral 8x7B is really fast).

VRAM seems fine (ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 112.00 MiB, (110140.84 / 118784.00)).

Tested via
./server -m ./mixtral8x22/Mixtral-8x22B-Instruct-v0.1.Q6-00001-of-00004.gguf --host 0.0.0.0
and using browser chat.

Asitop
Asitop shows very low GPU usage: 26% at most, usually around 10% or below. P-CPU usage: 24%. Neither the GPU nor the CPU is busy with inference...
Usually (with e.g. Mixtral 8x7B) GPU usage goes up to 100%.
So the GPU is underutilized.

edit:
same issue when using ./main

@ggerganov
Member

ggerganov commented Apr 19, 2024

Are you using the latest llama.cpp? There was a MoE-related update yesterday: #6505

@stefanvarunix Add -ngl 99 to your server command

@stefanvarunix

stefanvarunix commented Apr 19, 2024

Thanks!

I cloned llama.cpp this morning. Maybe too early.

Re-Test:
Cloned llama-cpp 2024-04-19 12:29 CEST

git show --summary
commit 9958c81b798a5872087b30b360e4674871f2479e (HEAD -> master, origin/master, origin/HEAD)
Author: nopperl <[email protected]>
Date:   Fri Apr 19 09:35:54 2024 +0000

./server -m ./mixtral8x22/Mixtral-8x22B-Instruct-v0.1.Q6-00001-of-00004.gguf --host 0.0.0.0 -ngl 99

Open browser, chat "Hi":
23 predicted, 54 cached, 3695ms per token, 0.27 tokens per second
Still very slow. No change.
Very low GPU utilization as described above.

llama.cpp server output:


{"tid":"0x1dee31c40","timestamp":1713522662,"level":"INFO","function":"main","line":2924,"msg":"build info","build":2697,"commit":"9958c81b"}
{"tid":"0x1dee31c40","timestamp":1713522662,"level":"INFO","function":"main","line":2931,"msg":"system info","n_threads":16,"n_threads_batch":-1,"total_threads":20,"system_info":"AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | "}
llama_model_loader: additional 3 GGUFs metadata loaded.
llama_model_loader: loaded meta data with 29 key-value pairs and 563 tensors from /path/mixtral8x22/Mixtral-8x22B-Instruct-v0.1.Q6-00001-of-00004.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = models--mistralai--Mixtral-8x22B-Inst...
llama_model_loader: - kv   2:                          llama.block_count u32              = 56
llama_model_loader: - kv   3:                       llama.context_length u32              = 65536
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 6144
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 16384
llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 48
llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                         llama.expert_count u32              = 8
llama_model_loader: - kv  11:                    llama.expert_used_count u32              = 2
llama_model_loader: - kv  12:                          general.file_type u32              = 18
llama_model_loader: - kv  13:                           llama.vocab_size u32              = 32768
llama_model_loader: - kv  14:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,32768]   = ["<unk>", "<s>", "</s>", "[INST]", "[...
llama_model_loader: - kv  17:                      tokenizer.ggml.scores arr[f32,32768]   = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,32768]   = [3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  21:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {{bos_token}}{% for message in messag...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - kv  26:                                   split.no u16              = 0
llama_model_loader: - kv  27:                                split.count u16              = 4
llama_model_loader: - kv  28:                        split.tensors.count i32              = 563
llama_model_loader: - type  f32:  113 tensors
llama_model_loader: - type  f16:   56 tensors
llama_model_loader: - type q8_0:  112 tensors
llama_model_loader: - type q6_K:  282 tensors
llm_load_vocab: mismatch in special tokens definition ( 1027/32768 vs 259/32768 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32768
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 65536
llm_load_print_meta: n_embd           = 6144
llm_load_print_meta: n_head           = 48
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 56
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 6
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 16384
llm_load_print_meta: n_expert         = 8
llm_load_print_meta: n_expert_used    = 2
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 65536
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8x22B
llm_load_print_meta: model ftype      = Q6_K
llm_load_print_meta: model params     = 140.63 B
llm_load_print_meta: model size       = 107.60 GiB (6.57 BPW) 
llm_load_print_meta: general.name     = models--mistralai--Mixtral-8x22B-Instruct-v0.1
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 781 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.56 MiB
ggml_backend_metal_buffer_from_ptr: allocated buffer, size = 29429.30 MiB, (29429.36 / 118784.00)
ggml_backend_metal_buffer_from_ptr: allocated buffer, size = 29429.31 MiB, (58858.67 / 118784.00)
ggml_backend_metal_buffer_from_ptr: allocated buffer, size = 29429.31 MiB, (88287.98 / 118784.00)
ggml_backend_metal_buffer_from_ptr: allocated buffer, size = 21739.05 MiB, (110027.03 / 118784.00)
llm_load_tensors: offloading 56 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 57/57 layers to GPU
llm_load_tensors:      Metal buffer size = 29429.29 MiB
llm_load_tensors:      Metal buffer size = 29429.31 MiB
llm_load_tensors:      Metal buffer size = 29429.31 MiB
llm_load_tensors:      Metal buffer size = 21739.04 MiB
llm_load_tensors:        CPU buffer size =   157.50 MiB
....................................................................................................
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M1 Ultra
ggml_metal_init: picking default device: Apple M1 Ultra
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
ggml_metal_init: loading '/path/llama.cpp/ggml-metal.metal'
ggml_metal_init: GPU name:   Apple M1 Ultra
ggml_metal_init: GPU family: MTLGPUFamilyApple7  (1007)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction support   = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory              = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 124554.05 MB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size =   112.00 MiB, (110140.84 / 118784.00)
llama_kv_cache_init:      Metal KV buffer size =   112.00 MiB
llama_new_context_with_model: KV self size  =  112.00 MiB, K (f16):   56.00 MiB, V (f16):   56.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.25 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size =   165.02 MiB, (110305.86 / 118784.00)
llama_new_context_with_model:      Metal compute buffer size =   165.00 MiB
llama_new_context_with_model:        CPU compute buffer size =    13.01 MiB
llama_new_context_with_model: graph nodes  = 2638
llama_new_context_with_model: graph splits = 2
{"tid":"0x1dee31c40","timestamp":1713522666,"level":"INFO","function":"init","line":708,"msg":"initializing slots","n_slots":1}
{"tid":"0x1dee31c40","timestamp":1713522666,"level":"INFO","function":"init","line":720,"msg":"new slot","id_slot":0,"n_ctx_slot":512}
{"tid":"0x1dee31c40","timestamp":1713522666,"level":"INFO","function":"main","line":3021,"msg":"model loaded"}
{"tid":"0x1dee31c40","timestamp":1713522666,"level":"INFO","function":"main","line":3046,"msg":"chat template","chat_example":"[INST] You are a helpful assistant\nHello [/INST] Hi there </s>[INST] How are you? [/INST]","built_in":true}
{"tid":"0x1dee31c40","timestamp":1713522666,"level":"INFO","function":"main","line":3774,"msg":"HTTP server listening","port":"8080","n_threads_http":"19","hostname":"0.0.0.0"}
{"tid":"0x1dee31c40","timestamp":1713522666,"level":"INFO","function":"update_slots","line":1786,"msg":"all slots are idle"}
{"tid":"0x1702e7000","timestamp":1713522694,"level":"INFO","function":"log_server_request","line":2875,"msg":"request","remote_addr":"192.168.2.128","remote_port":58314,"status":200,"method":"GET","path":"/","params":{}}
{"tid":"0x1702e7000","timestamp":1713522694,"level":"INFO","function":"log_server_request","line":2875,"msg":"request","remote_addr":"192.168.2.128","remote_port":58314,"status":200,"method":"GET","path":"/index.js","params":{}}
{"tid":"0x170373000","timestamp":1713522694,"level":"INFO","function":"log_server_request","line":2875,"msg":"request","remote_addr":"192.168.2.128","remote_port":58315,"status":200,"method":"GET","path":"/completion.js","params":{}}
{"tid":"0x1703ff000","timestamp":1713522694,"level":"INFO","function":"log_server_request","line":2875,"msg":"request","remote_addr":"192.168.2.128","remote_port":58316,"status":200,"method":"GET","path":"/json-schema-to-grammar.mjs","params":{}}
{"tid":"0x1702e7000","timestamp":1713522694,"level":"INFO","function":"log_server_request","line":2875,"msg":"request","remote_addr":"192.168.2.128","remote_port":58314,"status":404,"method":"GET","path":"/favicon.ico","params":{}}
{"tid":"0x1dee31c40","timestamp":1713522698,"level":"INFO","function":"launch_slot_with_task","line":1040,"msg":"slot is processing task","id_slot":0,"id_task":0}
{"tid":"0x1dee31c40","timestamp":1713522698,"level":"INFO","function":"update_slots","line":2070,"msg":"kv cache rm [p0, end)","id_slot":0,"id_task":0,"p0":0}
{"tid":"0x1dee31c40","timestamp":1713522789,"level":"INFO","function":"print_timings","line":320,"msg":"prompt eval time     =    5829.39 ms /    32 tokens (  182.17 ms per token,     5.49 tokens per second)","id_slot":0,"id_task":0,"t_prompt_processing":5829.391,"n_prompt_tokens_processed":32,"t_token":182.16846875,"n_tokens_second":5.489424195426246}
{"tid":"0x1dee31c40","timestamp":1713522789,"level":"INFO","function":"print_timings","line":336,"msg":"generation eval time =   84991.72 ms /    23 runs   ( 3695.29 ms per token,     0.27 tokens per second)","id_slot":0,"id_task":0,"t_token_generation":84991.722,"n_decoded":23,"t_token":3695.292260869565,"n_tokens_second":0.2706145899714798}
{"tid":"0x1dee31c40","timestamp":1713522789,"level":"INFO","function":"print_timings","line":346,"msg":"          total time =   90821.11 ms","id_slot":0,"id_task":0,"t_prompt_processing":5829.391,"t_token_generation":84991.722,"t_total":90821.113}
{"tid":"0x1dee31c40","timestamp":1713522789,"level":"INFO","function":"update_slots","line":1768,"msg":"slot released","id_slot":0,"id_task":0,"n_ctx":512,"n_past":54,"n_system_tokens":0,"n_cache_tokens":54,"truncated":false}
{"tid":"0x1dee31c40","timestamp":1713522789,"level":"INFO","function":"update_slots","line":1786,"msg":"all slots are idle"}
{"tid":"0x1702e7000","timestamp":1713522789,"level":"INFO","function":"log_server_request","line":2875,"msg":"request","remote_addr":"192.168.2.128","remote_port":58314,"status":200,"method":"POST","path":"/completion","params":{}}
{"tid":"0x1dee31c40","timestamp":1713522789,"level":"INFO","function":"update_slots","line":1786,"msg":"all slots are idle"}
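For reference, the throughput figures in the print_timings lines above follow directly from the raw fields; a small Python check using the values copied from this log:

```python
# Sanity-check the throughput numbers reported in the server log above.
t_prompt_ms = 5829.391   # t_prompt_processing
n_prompt = 32            # n_prompt_tokens_processed
t_gen_ms = 84991.722     # t_token_generation
n_gen = 23               # n_decoded

prompt_tps = n_prompt / (t_prompt_ms / 1000)   # tokens per second, prompt
gen_tps = n_gen / (t_gen_ms / 1000)            # tokens per second, generation
print(f"prompt: {prompt_tps:.2f} t/s, generation: {gen_tps:.2f} t/s")
# Matches the log: ~5.49 t/s prompt, ~0.27 t/s generation.
```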

@ggerganov
Member

@stefanvarunix Ah, I just realized this is an M1 Ultra. I think this is running very close to the limit; how did you even get to use 120 GB out of the total 128 GB? Normally, macOS will limit you to about 66% of the total RAM.

I'll download the model and try it on my M2 Ultra to see how it behaves, but it will probably take me some time.

@stefanvarunix

stefanvarunix commented Apr 19, 2024

Ah, I just realized this is M1 Ultra. I think this is running very close to the limit

I am not sure. If it were running close to the limit, I would expect the GPU (or at least the CPU) to be fully utilized. But GPU utilization is very low: 26% at peak, and only for a few seconds. Most of the time during inference with Mixtral 8x22B it stays below 10%. This is very unusual, as GPU utilization normally goes up to 100% during inference (e.g. with Mixtral 8x7B).

But maybe I am pushing the VRAM too close to the limit (via sudo sysctl iogpu.wired_limit_mb=118784).
I'll download the 4-bit quants (they should easily fit into the default VRAM limit), re-test, and report back here.
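For reference, the wired-limit override mentioned above works roughly like this on recent macOS for Apple Silicon (a sketch of the commands from this thread; adjust the value for your machine, and note the setting resets on reboot):

```shell
# Query the current GPU wired-memory limit
# (0 means the macOS default, roughly 2/3 of total RAM)
sysctl iogpu.wired_limit_mb

# Raise it, e.g. to 118784 MB (~116 GiB) on a 128 GB machine
sudo sysctl iogpu.wired_limit_mb=118784
```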

@LiquidGunay
Author

I was on the build right before #6505 was merged. After trying it out on the latest build I am getting an over 2x improvement in prompt processing speed. Thanks.

@stefanvarunix

I had already used the recent build; that did not help.
I found a solution that is ~50x faster.

I merged all the model files with ./gguf-split --merge and ran inference on the single merged file.
That worked at 12.95 tokens per second. As expected.

Performance in detail: 172 predicted, 459 cached, 77 ms per token, 12.95 tokens per second
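The merge step above can be reproduced with the gguf-split tool that ships with llama.cpp: pass --merge, the first shard, and an output path, and the remaining shards are found automatically. A sketch using the paths from earlier in this thread (the output filename is illustrative):

```shell
# Merge a 4-way split GGUF back into a single file; only the first
# shard needs to be named, the rest are discovered from it.
./gguf-split --merge \
  ./mixtral8x22/Mixtral-8x22B-Instruct-v0.1.Q6-00001-of-00004.gguf \
  ./mixtral8x22/Mixtral-8x22B-Instruct-v0.1.Q6.gguf

# Then run the server against the merged file as before:
./server -m ./mixtral8x22/Mixtral-8x22B-Instruct-v0.1.Q6.gguf \
  --host 0.0.0.0 -ngl 99
```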
