Name and Version
$ ./llama-cli --version
register_backend: registered backend CPU (1 devices)
register_device: registered device CPU (Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz)
load_backend: failed to find ggml_backend_init in /home/nick/Downloads/llama.cpp/build/bin/libggml-cpu.so
version: 5015 (59f596e)
built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
Operating systems
Linux
Which llama.cpp modules do you know to be affected?
Other (Please specify in the next section)
Command line
$ ./llama-gguf-split --merge --dry-run /data/models/DeepSeek-R1-Q8_0/DeepSeek-R1-Q8_0/DeepSeek-R1.Q8_0-00001-of-00015.gguf /data/models/DeepSeek-R1-Q8_0/DeepSeek-R1-Q8_0/DeepSeek-R1-merge.gguf
Problem description & steps to reproduce
examples/gguf-split respects the `--dry-run` option for the `--split` operation, but for `--merge` the `--dry-run` option is ignored and the merge is performed anyway.
First Bad Commit
No response