
Misc. bug: struct.error during GGUF conversion of Mistral-Instruct with convert_hf_to_gguf.py #14243

Closed
@christinajoslin

Description

Name and Version

./llama-cli --version
version: 2999 (42b4109e)
built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu

Operating systems

Linux

Which llama.cpp modules do you know to be affected?

Python/Bash scripts

Command line

!python3 convert_hf_to_gguf.py \
  "/path/to/fine-tuned-mistral" \
  --outfile "/path/to/output/mistral-q8_0.gguf" \
  --outtype q8_0

Problem description & steps to reproduce

Model: mistral-7b-instruct-v0.3 (fine-tuned with LoRA using 8-bit weights and 8-bit quantization)

I'm encountering a repeatable bug while trying to convert this model to GGUF format using convert_hf_to_gguf.py from llama.cpp. The model was fine-tuned with LoRA, and I attempted the conversion in several ways.

In every case, the process fails with: struct.error: required argument is not an integer.

This happens specifically during write_kv_data_to_file() when GGUF metadata is being written. I even get the same error when attempting to convert base Mistral models, so the problem may not be specific to LoRA or my training setup.
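For reference, the exception itself just means struct.pack was handed something other than an int for an integer format code. A minimal, self-contained snippet (unrelated to my model, purely to illustrate the failure mode) reproduces the exact message:

import struct

# "I" is the uint32 format code; it only accepts real Python ints.
struct.pack("<I", 291)        # works
try:
    struct.pack("<I", 291.0)  # a float (or None / str) where an int is expected
except struct.error as err:
    print(err)                # prints: required argument is not an integer

So it looks like one of the metadata values being written ends up with a non-integer type.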

First Bad Commit

Not sure - I encountered this with the current main branch.

Relevant log output

7B-Instruct-v0.3.fp16.gguf: n_tensors = 291, total_size = 14.5G
Traceback (most recent call last):
  File "/home/user/app/./llama.cpp/convert_hf_to_gguf.py", line 6533, in <module>
    main()
  File "/home/user/app/./llama.cpp/convert_hf_to_gguf.py", line 6527, in main
    model_instance.write()
  File "/home/user/app/./llama.cpp/convert_hf_to_gguf.py", line 406, in write
    self.gguf_writer.write_kv_data_to_file()
  File "/home/user/app/llama.cpp/gguf-py/gguf/gguf_writer.py", line 242, in write_kv_data_to_file
    kv_bytes += self._pack_val(val.value, val.type, add_vtype=True, sub_type=val.sub_type)
  File "/home/user/app/llama.cpp/gguf-py/gguf/gguf_writer.py", line 1034, in _pack_val
    kv_data += self._pack(pack_fmt, val, skip_pack_prefix = vtype == GGUFValueType.BOOL)
  File "/home/user/app/llama.cpp/gguf-py/gguf/gguf_writer.py", line 1024, in _pack
    return struct.pack(f'{pack_prefix}{fmt}', value)
struct.error: required argument is not an integer
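
To narrow down which key/value pair is failing, one option would be to temporarily wrap struct.pack before running the converter (a hypothetical debugging shim of my own, not part of llama.cpp):

import struct

_orig_pack = struct.pack

def _traced_pack(fmt, *values):
    # Log the format string and values that fail to pack, then re-raise.
    try:
        return _orig_pack(fmt, *values)
    except struct.error:
        print(f"struct.pack failed: fmt={fmt!r} values={values!r}")
        raise

struct.pack = _traced_pack

With that in place, the offending GGUF metadata value (and its Python type) should be printed right before the traceback.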
