Architecture "LlamaForCausalLM" not supported #5142

Closed
dfengpo opened this issue Jan 26, 2024 · 18 comments

Comments

@dfengpo

dfengpo commented Jan 26, 2024

I run python convert-hf-to-gguf.py /fengpo/github/Yi-34B-Chat-8bits and get this error:

File "/fengpo/github/llama.cpp/convert-hf-to-gguf.py", line 1335, in main
model_instance = model_class(dir_model, ftype_map[args.outtype], fname_out, args.bigendian)
File "/fengpo/github/llama.cpp/convert-hf-to-gguf.py", line 57, in init
self.model_arch = self._get_model_architecture()
File "/fengpo/github/llama.cpp/convert-hf-to-gguf.py", line 254, in _get_model_architecture
raise NotImplementedError(f'Architecture "{arch}" not supported!')
NotImplementedError: Architecture "LlamaForCausalLM" not supported!

This also contains LlamaForCausalLM.
[screenshot]
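For context, the converter decides which model class to use from the "architectures" field in the checkpoint's config.json, and raises that NotImplementedError when the name isn't in its map. A minimal sketch (my own, not part of llama.cpp) to print what a checkpoint reports:

import json
from pathlib import Path

def report_architecture(model_dir: str) -> str:
    # HF checkpoints list their model class name(s) here, e.g. ["LlamaForCausalLM"]
    config = json.loads((Path(model_dir) / "config.json").read_text())
    return config["architectures"][0]

print(report_architecture("/fengpo/github/Yi-34B-Chat-8bits"))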

@calebheinzman

calebheinzman commented Jan 26, 2024

I'm using AutoModelForCausalLM and AutoTokenizer and I'm getting an error as well, since AutoTokenizer doesn't create a vocab.json file.

!python llama.cpp/convert.py testing \
  --outfile testing.gguf \
  --outtype q8_0

Here's the error:

Loading model file testing/pytorch_model-00001-of-00002.bin
Loading model file testing/pytorch_model-00001-of-00002.bin
Loading model file testing/pytorch_model-00002-of-00002.bin
params = Params(n_vocab=32000, n_embd=4096, n_layer=32, n_ctx=4096, n_ff=11008, n_head=32, n_head_kv=32, n_experts=None, n_experts_used=None, f_norm_eps=1e-05, rope_scaling_type=None, f_rope_freq_base=None, f_rope_scale=None, n_orig_ctx=None, rope_finetuned=None, ftype=<GGMLFileType.MostlyQ8_0: 7>, path_model=PosixPath('testing'))
Found vocab files: {'tokenizer.model': None, 'vocab.json': None, 'tokenizer.json': PosixPath('testing/tokenizer.json')}
Loading vocab file 'testing/tokenizer.json', type 'spm'
Traceback (most recent call last):
  File "/content/llama.cpp/convert.py", line 1471, in <module>
    main()
  File "/content/llama.cpp/convert.py", line 1439, in main
    vocab, special_vocab = vocab_factory.load_vocab(args.vocab_type, model_parent_path)
  File "/content/llama.cpp/convert.py", line 1325, in load_vocab
    vocab = SentencePieceVocab(
  File "/content/llama.cpp/convert.py", line 391, in __init__
    self.sentencepiece_tokenizer = SentencePieceProcessor(str(fname_tokenizer))
  File "/usr/local/lib/python3.10/dist-packages/sentencepiece/__init__.py", line 447, in Init
    self.Load(model_file=model_file, model_proto=model_proto)
  File "/usr/local/lib/python3.10/dist-packages/sentencepiece/__init__.py", line 905, in Load
    return self.LoadFromFile(model_file)
  File "/usr/local/lib/python3.10/dist-packages/sentencepiece/__init__.py", line 310, in LoadFromFile
    return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
RuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())]
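The 'spm' loader can only parse a binary SentencePiece tokenizer.model, so pointing it at tokenizer.json fails with that ParseFromArray error. If the export simply never wrote the original tokenizer files, one workaround is to re-save the base model's tokenizer next to the weights; a rough sketch, assuming the fine-tune started from a Llama-family base (the model name below is just a placeholder):

from transformers import AutoTokenizer

# Hypothetical base checkpoint; use whatever the fine-tune was derived from.
tok = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf")
# Writes tokenizer.json and related files (and tokenizer.model when a slow
# SentencePiece tokenizer backs the model) into the directory convert.py reads.
tok.save_pretrained("testing")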

@calebheinzman

Update: I am now running:

!python llama.cpp/convert-hf-to-gguf.py testing \
  --outfile testing.gguf 

And I am getting the same error as @lmxin123

Loading model: testing
Traceback (most recent call last):
  File "/content/llama.cpp/convert-hf-to-gguf.py", line 1033, in <module>
    model_instance = model_class(dir_model, ftype_map[args.outtype], fname_out, args.bigendian)
  File "/content/llama.cpp/convert-hf-to-gguf.py", line 48, in __init__
    self.model_arch = self._get_model_architecture()
  File "/content/llama.cpp/convert-hf-to-gguf.py", line 225, in _get_model_architecture
    raise NotImplementedError(f'Architecture "{arch}" not supported!')
NotImplementedError: Architecture "LlamaForCausalLM" not supported!

@slaren is this a real bug or am I just stupid haha

@ptsochantaris
Collaborator

I haven't tried converting many models, but coincidentally I tried it yesterday and got this error as well. To check whether it was a regression, I checked out commits from this repo going back a week or two, but the error was the same, which makes me think either (a) I may be doing something stupid (likely), or (b) some support module may have changed, breaking the convert-hf-to-gguf.py script. Only guessing, though.

@Galunid
Collaborator

Galunid commented Jan 27, 2024

convert-hf-to-gguf.py never supported llama-based models. Please use convert.py.

@ptsochantaris
Collaborator

I initially tried converting using convert.py and it broke; from searching, I got the impression that the advice was to use this script instead, so thanks for clearing that up! If it helps, the error I get from convert.py is:

% ./convert.py ~/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1 --outtype f32 --outfile /Volumes/Rabbit/moreh--MoMo-72B-lora-1.8.7-DPO-f32.bin
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00001-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00001-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00002-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00003-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00004-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00005-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00006-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00007-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00008-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00009-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00010-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00011-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00012-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00013-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00014-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00015-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00016-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00017-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00018-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00019-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00020-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00021-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00022-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00023-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00024-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00025-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00026-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00027-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00028-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00029-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00030-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00031-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00032-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00033-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00034-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00035-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00036-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00037-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00038-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00039-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00040-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00041-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00042-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00043-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00044-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00045-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00046-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00047-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00048-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00049-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00050-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00051-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00052-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00053-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00054-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00055-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00056-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00057-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00058-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00059-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00060-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00061-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00062-of-00063.safetensors
Loading model file /Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/model-00063-of-00063.safetensors
params = Params(n_vocab=152064, n_embd=8192, n_layer=80, n_ctx=32768, n_ff=24576, n_head=64, n_head_kv=64, n_experts=None, n_experts_used=None, f_norm_eps=1e-06, rope_scaling_type=None, f_rope_freq_base=1000000, f_rope_scale=None, n_orig_ctx=None, rope_finetuned=None, ftype=<GGMLFileType.AllF32: 0>, path_model=PosixPath('/Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1'))
Found vocab files: {'tokenizer.model': None, 'vocab.json': PosixPath('/Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/vocab.json'), 'tokenizer.json': PosixPath('/Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/tokenizer.json')}
Loading vocab file '/Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/vocab.json', type 'spm'
Traceback (most recent call last):
  File "/Users/ptsochantaris/llama.cpp/./convert.py", line 1471, in <module>
    main()
  File "/Users/ptsochantaris/llama.cpp/./convert.py", line 1439, in main
    vocab, special_vocab = vocab_factory.load_vocab(args.vocab_type, model_parent_path)
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ptsochantaris/llama.cpp/./convert.py", line 1325, in load_vocab
    vocab = SentencePieceVocab(
            ^^^^^^^^^^^^^^^^^^^
  File "/Users/ptsochantaris/llama.cpp/./convert.py", line 391, in __init__
    self.sentencepiece_tokenizer = SentencePieceProcessor(str(fname_tokenizer))
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Caskroom/miniconda/base/envs/coremltools-env/lib/python3.11/site-packages/sentencepiece/__init__.py", line 447, in Init
    self.Load(model_file=model_file, model_proto=model_proto)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/coremltools-env/lib/python3.11/site-packages/sentencepiece/__init__.py", line 905, in Load
    return self.LoadFromFile(model_file)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Caskroom/miniconda/base/envs/coremltools-env/lib/python3.11/site-packages/sentencepiece/__init__.py", line 310, in LoadFromFile
    return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Internal: /Users/runner/work/sentencepiece/sentencepiece/src/sentencepiece_processor.cc(1102) [model_proto->ParseFromArray(serialized.data(), serialized.size())] 

However I still don't preclude the possibility that this is me doing something silly :)

@jukofyork
Collaborator

jukofyork commented Jan 29, 2024

I initially tried converting using convert.py and it broke; from searching, I got the impression that the advice was to use this script instead, so thanks for clearing that up! If it helps, the error I get from convert.py is:

% ./convert.py ~/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1 --outtype f32 --outfile /Volumes/Rabbit/moreh--MoMo-72B-lora-1.8.7-DPO-f32.bin
...
params = Params(n_vocab=152064, n_embd=8192, n_layer=80, n_ctx=32768, n_ff=24576, n_head=64, n_head_kv=64, n_experts=None, n_experts_used=None, f_norm_eps=1e-06, rope_scaling_type=None, f_rope_freq_base=1000000, f_rope_scale=None, n_orig_ctx=None, rope_finetuned=None, ftype=<GGMLFileType.AllF32: 0>, path_model=PosixPath('/Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1'))
Found vocab files: {'tokenizer.model': None, 'vocab.json': PosixPath('/Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/vocab.json'), 'tokenizer.json': PosixPath('/Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/tokenizer.json')}
Loading vocab file '/Users/ptsochantaris/.cache/huggingface/hub/models--moreh--MoMo-72B-lora-1.8.7-DPO/snapshots/8a538ac4ac7b5b489e9062465dad38c5d4992fa1/vocab.json', type 'spm'
Traceback (most recent call last):
  File "/Users/ptsochantaris/llama.cpp/./convert.py", line 1471, in <module>
    main()
  File "/Users/ptsochantaris/llama.cpp/./convert.py", line 1439, in main
    vocab, special_vocab = vocab_factory.load_vocab(args.vocab_type, model_parent_path)
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ptsochantaris/llama.cpp/./convert.py", line 1325, in load_vocab
    vocab = SentencePieceVocab(
            ^^^^^^^^^^^^^^^^^^^
  File "/Users/ptsochantaris/llama.cpp/./convert.py", line 391, in __init__
    self.sentencepiece_tokenizer = SentencePieceProcessor(str(fname_tokenizer))
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Caskroom/miniconda/base/envs/coremltools-env/lib/python3.11/site-packages/sentencepiece/__init__.py", line 447, in Init
    self.Load(model_file=model_file, model_proto=model_proto)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/coremltools-env/lib/python3.11/site-packages/sentencepiece/__init__.py", line 905, in Load
    return self.LoadFromFile(model_file)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Caskroom/miniconda/base/envs/coremltools-env/lib/python3.11/site-packages/sentencepiece/__init__.py", line 310, in LoadFromFile
    return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Internal: /Users/runner/work/sentencepiece/sentencepiece/src/sentencepiece_processor.cc(1102) [model_proto->ParseFromArray(serialized.data(), serialized.size())] 

However I still don't preclude the possibility that this is me doing something silly :)

I've just spent a couple of hours trying to work out what this was, as I had an old llama.cpp PR from deepseek that actually worked fine, so I knew it must be possible... It turns out there used to be a check in convert.py for whether tokenizer.model was present, but it got removed:

        path_candidate = find_vocab_file_path(self.fname_tokenizer, vocab_file)
        if path_candidate is not None:
            self.spm = SentencePieceProcessor(str(path_candidate))
            print(self.spm.vocab_size(), self.vocab_size_base)
        else:
            self.spm = None

When this option got added:

--vocab-type {spm,bpe,hfft}
The vocabulary format used to define the tokenizer model (default: spm)

See if adding --vocab-type hfft fixes it for you?
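In other words, the 'spm' path needs an actual SentencePiece tokenizer.model; when a model only ships tokenizer.json or vocab.json, a different --vocab-type has to be chosen. A rough heuristic sketch (my own, not code from convert.py) of how you might pick one:

from pathlib import Path

def suggest_vocab_type(model_dir: str) -> str:
    d = Path(model_dir)
    if (d / "tokenizer.model").exists():
        return "spm"   # a real SentencePiece protobuf is present
    if (d / "tokenizer.json").exists():
        return "hfft"  # only the HF "fast tokenizer" JSON is available
    if (d / "vocab.json").exists():
        return "bpe"   # plain BPE vocab (plus merges.txt)
    raise FileNotFoundError(f"no tokenizer files found in {d}")

print(suggest_vocab_type("/path/to/model"))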

@ptsochantaris
Collaborator

ptsochantaris commented Jan 29, 2024

Yes indeed, --vocab-type hfft seemed to do the trick (however --pad-vocab was also required, which I strongly suspect is because of the model data itself and not related to this issue)

@jukofyork
Collaborator

Yes indeed, --vocab-type hfft seemed to do the trick (however --pad-vocab was also required, which I strongly suspect is because of the model data itself and not related to this issue)

Yeah, I had to add that too.

I've still not fully fixed my problem, though: I managed to make the GGUF, but it crashes with an unordered_map::at() exception, which I think must mean missing token(s). At least I understand what that means, unlike RuntimeError: Internal: 😁
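For what it's worth, the padding seems to come down to config.json advertising a larger vocab_size than the tokenizer actually defines, and --pad-vocab fills the difference with dummy tokens. A quick sketch (my own, and only a rough count since added tokens can overlap the base vocab) to see whether a model has such a gap:

import json
from pathlib import Path

def vocab_gap(model_dir: str) -> int:
    d = Path(model_dir)
    n_vocab = json.loads((d / "config.json").read_text())["vocab_size"]
    tok = json.loads((d / "tokenizer.json").read_text())
    n_tokens = len(tok["model"]["vocab"]) + len(tok.get("added_tokens", []))
    return n_vocab - n_tokens  # > 0 suggests --pad-vocab will be needed

print(vocab_gap("/path/to/model"))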

@ptsochantaris
Collaborator

@jukofyork Yes, I'm seeing the same issue here as well. No clue whether that's a model problem or a conversion problem, but I guess the main issue in any case is the missing vocab-type check that you mentioned above.

@teleprint-me
Contributor

teleprint-me commented Jan 30, 2024

The 8-bit version of the model is a GPTQ quant, while the 4-bit version is an AWQ quant [1, 2]. For reference, you can find more information on these quantized models in the Yi-34B-Chat repository [3].

I recommend trying the 01-ai/Yi-34B-Chat model, which uses LlamaForCausalLM and can be converted using convert.py. It seems that the convert.py script already supports AWQ [4, 5]. However, GPTQ support is not currently available, which might be causing the unordered_map::at() crash due to a mismatch in how the tensors are interpreted.

I plan on testing the original Yi-34B-Chat model and will share my findings.
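If you're unsure which scheme a checkpoint uses, it is usually recorded in config.json; a small sketch (my own) that reads it:

import json
from pathlib import Path

def quant_method(model_dir: str) -> str:
    config = json.loads((Path(model_dir) / "config.json").read_text())
    # Quantized HF checkpoints typically carry a "quantization_config" block
    # whose "quant_method" is e.g. "gptq" or "awq"; full-precision models have none.
    return config.get("quantization_config", {}).get("quant_method", "none")

print(quant_method("/path/to/Yi-34B-Chat-8bits"))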

@jukofyork
Collaborator

I think my problem was just forgetting to use the --pad-vocab option, as it seems to be getting further now.

@teleprint-me
Contributor

teleprint-me commented Jan 30, 2024

python convert.py local/models/01-ai/Yi-34B-Chat/
00:08:21 | ~
  λ /mnt/valerie/llama.cpp
00:08:25 | /mnt/valerie/llama.cpp
 git:(master | Δ) λ source .venv/bin/activate                       
00:08:29 | /mnt/valerie/llama.cpp
(.venv) git:(master | Δ) λ python convert.py local/models/01-ai/Yi-34B-Chat/
Loading model file local/models/01-ai/Yi-34B-Chat/model-00001-of-00015.safetensors
Loading model file local/models/01-ai/Yi-34B-Chat/model-00001-of-00015.safetensors
Loading model file local/models/01-ai/Yi-34B-Chat/model-00002-of-00015.safetensors
Loading model file local/models/01-ai/Yi-34B-Chat/model-00003-of-00015.safetensors
Loading model file local/models/01-ai/Yi-34B-Chat/model-00004-of-00015.safetensors
Loading model file local/models/01-ai/Yi-34B-Chat/model-00005-of-00015.safetensors
Loading model file local/models/01-ai/Yi-34B-Chat/model-00006-of-00015.safetensors
Loading model file local/models/01-ai/Yi-34B-Chat/model-00007-of-00015.safetensors
Loading model file local/models/01-ai/Yi-34B-Chat/model-00008-of-00015.safetensors
Loading model file local/models/01-ai/Yi-34B-Chat/model-00009-of-00015.safetensors
Loading model file local/models/01-ai/Yi-34B-Chat/model-00010-of-00015.safetensors
Loading model file local/models/01-ai/Yi-34B-Chat/model-00011-of-00015.safetensors
Loading model file local/models/01-ai/Yi-34B-Chat/model-00012-of-00015.safetensors
Loading model file local/models/01-ai/Yi-34B-Chat/model-00013-of-00015.safetensors
Loading model file local/models/01-ai/Yi-34B-Chat/model-00014-of-00015.safetensors
Loading model file local/models/01-ai/Yi-34B-Chat/model-00015-of-00015.safetensors
params = Params(n_vocab=64000, n_embd=7168, n_layer=60, n_ctx=4096, n_ff=20480, n_head=56, n_head_kv=8, n_experts=None, n_experts_used=None, f_norm_eps=1e-05, rope_scaling_type=None, f_rope_freq_base=5000000.0, f_rope_scale=None, n_orig_ctx=None, rope_finetuned=None, ftype=None, path_model=PosixPath('local/models/01-ai/Yi-34B-Chat'))
Found vocab files: {'tokenizer.model': PosixPath('local/models/01-ai/Yi-34B-Chat/tokenizer.model'), 'vocab.json': None, 'tokenizer.json': None}
Loading vocab file 'local/models/01-ai/Yi-34B-Chat/tokenizer.model', type 'spm'
Vocab info: <SentencePieceVocab with 64000 base tokens and 0 added tokens>
Special vocab info: <SpecialVocab with 0 merges, special tokens {'bos': 1, 'eos': 2}, add special tokens {'bos': False, 'eos': False}>
Permuting layer 0
# omitting for brevity
Permuting layer 59
model.embed_tokens.weight                        -> token_embd.weight                        | BF16   | [64000, 7168]
model.layers.0.input_layernorm.weight            -> blk.0.attn_norm.weight                   | BF16   | [7168]
model.layers.0.mlp.down_proj.weight              -> blk.0.ffn_down.weight                    | BF16   | [7168, 20480]
model.layers.0.mlp.gate_proj.weight              -> blk.0.ffn_gate.weight                    | BF16   | [20480, 7168]
model.layers.0.mlp.up_proj.weight                -> blk.0.ffn_up.weight                      | BF16   | [20480, 7168]
model.layers.0.post_attention_layernorm.weight   -> blk.0.ffn_norm.weight                    | BF16   | [7168]
model.layers.0.self_attn.k_proj.weight           -> blk.0.attn_k.weight                      | BF16   | [1024, 7168]
model.layers.0.self_attn.o_proj.weight           -> blk.0.attn_output.weight                 | BF16   | [7168, 7168]
model.layers.0.self_attn.q_proj.weight           -> blk.0.attn_q.weight                      | BF16   | [7168, 7168]
model.layers.0.self_attn.v_proj.weight           -> blk.0.attn_v.weight                      | BF16   | [1024, 7168]
model.layers.1.input_layernorm.weight            -> blk.1.attn_norm.weight                   | BF16   | [7168]
model.layers.1.mlp.down_proj.weight              -> blk.1.ffn_down.weight                    | BF16   | [7168, 20480]
model.layers.1.mlp.gate_proj.weight              -> blk.1.ffn_gate.weight                    | BF16   | [20480, 7168]
model.layers.1.mlp.up_proj.weight                -> blk.1.ffn_up.weight                      | BF16   | [20480, 7168]
model.layers.1.post_attention_layernorm.weight   -> blk.1.ffn_norm.weight                    | BF16   | [7168]
model.layers.1.self_attn.k_proj.weight           -> blk.1.attn_k.weight                      | BF16   | [1024, 7168]
model.layers.1.self_attn.o_proj.weight           -> blk.1.attn_output.weight                 | BF16   | [7168, 7168]
model.layers.1.self_attn.q_proj.weight           -> blk.1.attn_q.weight                      | BF16   | [7168, 7168]
model.layers.1.self_attn.v_proj.weight           -> blk.1.attn_v.weight                      | BF16   | [1024, 7168]
model.layers.2.input_layernorm.weight            -> blk.2.attn_norm.weight                   | BF16   | [7168]
model.layers.2.mlp.down_proj.weight              -> blk.2.ffn_down.weight                    | BF16   | [7168, 20480]
model.layers.2.mlp.gate_proj.weight              -> blk.2.ffn_gate.weight                    | BF16   | [20480, 7168]
model.layers.2.mlp.up_proj.weight                -> blk.2.ffn_up.weight                      | BF16   | [20480, 7168]
model.layers.2.post_attention_layernorm.weight   -> blk.2.ffn_norm.weight                    | BF16   | [7168]
model.layers.2.self_attn.k_proj.weight           -> blk.2.attn_k.weight                      | BF16   | [1024, 7168]
model.layers.2.self_attn.o_proj.weight           -> blk.2.attn_output.weight                 | BF16   | [7168, 7168]
model.layers.2.self_attn.q_proj.weight           -> blk.2.attn_q.weight                      | BF16   | [7168, 7168]
model.layers.2.self_attn.v_proj.weight           -> blk.2.attn_v.weight                      | BF16   | [1024, 7168]
model.layers.3.mlp.gate_proj.weight              -> blk.3.ffn_gate.weight                    | BF16   | [20480, 7168]
model.layers.3.self_attn.k_proj.weight           -> blk.3.attn_k.weight                      | BF16   | [1024, 7168]
model.layers.3.self_attn.o_proj.weight           -> blk.3.attn_output.weight                 | BF16   | [7168, 7168]
model.layers.3.self_attn.q_proj.weight           -> blk.3.attn_q.weight                      | BF16   | [7168, 7168]
model.layers.3.self_attn.v_proj.weight           -> blk.3.attn_v.weight                      | BF16   | [1024, 7168]
model.layers.3.input_layernorm.weight            -> blk.3.attn_norm.weight                   | BF16   | [7168]
model.layers.3.mlp.down_proj.weight              -> blk.3.ffn_down.weight                    | BF16   | [7168, 20480]
model.layers.3.mlp.up_proj.weight                -> blk.3.ffn_up.weight                      | BF16   | [20480, 7168]
model.layers.3.post_attention_layernorm.weight   -> blk.3.ffn_norm.weight                    | BF16   | [7168]
model.layers.4.input_layernorm.weight            -> blk.4.attn_norm.weight                   | BF16   | [7168]
model.layers.4.mlp.down_proj.weight              -> blk.4.ffn_down.weight                    | BF16   | [7168, 20480]
model.layers.4.mlp.gate_proj.weight              -> blk.4.ffn_gate.weight                    | BF16   | [20480, 7168]
model.layers.4.mlp.up_proj.weight                -> blk.4.ffn_up.weight                      | BF16   | [20480, 7168]
model.layers.4.post_attention_layernorm.weight   -> blk.4.ffn_norm.weight                    | BF16   | [7168]
model.layers.4.self_attn.k_proj.weight           -> blk.4.attn_k.weight                      | BF16   | [1024, 7168]
model.layers.4.self_attn.o_proj.weight           -> blk.4.attn_output.weight                 | BF16   | [7168, 7168]
model.layers.4.self_attn.q_proj.weight           -> blk.4.attn_q.weight                      | BF16   | [7168, 7168]
model.layers.4.self_attn.v_proj.weight           -> blk.4.attn_v.weight                      | BF16   | [1024, 7168]
model.layers.5.input_layernorm.weight            -> blk.5.attn_norm.weight                   | BF16   | [7168]
model.layers.5.mlp.down_proj.weight              -> blk.5.ffn_down.weight                    | BF16   | [7168, 20480]
model.layers.5.mlp.gate_proj.weight              -> blk.5.ffn_gate.weight                    | BF16   | [20480, 7168]
model.layers.5.mlp.up_proj.weight                -> blk.5.ffn_up.weight                      | BF16   | [20480, 7168]
model.layers.5.post_attention_layernorm.weight   -> blk.5.ffn_norm.weight                    | BF16   | [7168]
model.layers.5.self_attn.k_proj.weight           -> blk.5.attn_k.weight                      | BF16   | [1024, 7168]
model.layers.5.self_attn.o_proj.weight           -> blk.5.attn_output.weight                 | BF16   | [7168, 7168]
model.layers.5.self_attn.q_proj.weight           -> blk.5.attn_q.weight                      | BF16   | [7168, 7168]
model.layers.5.self_attn.v_proj.weight           -> blk.5.attn_v.weight                      | BF16   | [1024, 7168]
model.layers.6.input_layernorm.weight            -> blk.6.attn_norm.weight                   | BF16   | [7168]
model.layers.6.mlp.down_proj.weight              -> blk.6.ffn_down.weight                    | BF16   | [7168, 20480]
model.layers.6.mlp.gate_proj.weight              -> blk.6.ffn_gate.weight                    | BF16   | [20480, 7168]
model.layers.6.mlp.up_proj.weight                -> blk.6.ffn_up.weight                      | BF16   | [20480, 7168]
model.layers.6.post_attention_layernorm.weight   -> blk.6.ffn_norm.weight                    | BF16   | [7168]
model.layers.6.self_attn.k_proj.weight           -> blk.6.attn_k.weight                      | BF16   | [1024, 7168]
model.layers.6.self_attn.o_proj.weight           -> blk.6.attn_output.weight                 | BF16   | [7168, 7168]
model.layers.6.self_attn.q_proj.weight           -> blk.6.attn_q.weight                      | BF16   | [7168, 7168]
model.layers.6.self_attn.v_proj.weight           -> blk.6.attn_v.weight                      | BF16   | [1024, 7168]
model.layers.7.mlp.gate_proj.weight              -> blk.7.ffn_gate.weight                    | BF16   | [20480, 7168]
model.layers.7.mlp.up_proj.weight                -> blk.7.ffn_up.weight                      | BF16   | [20480, 7168]
model.layers.7.self_attn.k_proj.weight           -> blk.7.attn_k.weight                      | BF16   | [1024, 7168]
model.layers.7.self_attn.o_proj.weight           -> blk.7.attn_output.weight                 | BF16   | [7168, 7168]
model.layers.7.self_attn.q_proj.weight           -> blk.7.attn_q.weight                      | BF16   | [7168, 7168]
model.layers.7.self_attn.v_proj.weight           -> blk.7.attn_v.weight                      | BF16   | [1024, 7168]
model.layers.10.input_layernorm.weight           -> blk.10.attn_norm.weight                  | BF16   | [7168]
model.layers.10.mlp.down_proj.weight             -> blk.10.ffn_down.weight                   | BF16   | [7168, 20480]
model.layers.10.mlp.gate_proj.weight             -> blk.10.ffn_gate.weight                   | BF16   | [20480, 7168]
model.layers.10.mlp.up_proj.weight               -> blk.10.ffn_up.weight                     | BF16   | [20480, 7168]
model.layers.10.post_attention_layernorm.weight  -> blk.10.ffn_norm.weight                   | BF16   | [7168]
model.layers.10.self_attn.k_proj.weight          -> blk.10.attn_k.weight                     | BF16   | [1024, 7168]
model.layers.10.self_attn.o_proj.weight          -> blk.10.attn_output.weight                | BF16   | [7168, 7168]
model.layers.10.self_attn.q_proj.weight          -> blk.10.attn_q.weight                     | BF16   | [7168, 7168]
model.layers.10.self_attn.v_proj.weight          -> blk.10.attn_v.weight                     | BF16   | [1024, 7168]
# omitting for brevity
model.layers.59.post_attention_layernorm.weight  -> blk.59.ffn_norm.weight                   | BF16   | [7168]
model.norm.weight                                -> output_norm.weight                       | BF16   | [7168]
Writing local/models/01-ai/Yi-34B-Chat/ggml-model-f16.gguf, format 1
Ignoring added_tokens.json since model matches vocab size without it.
gguf: This GGUF file is for Little Endian only
gguf: Setting special token type bos to 1
gguf: Setting special token type eos to 2
gguf: Setting add_bos_token to False
gguf: Setting add_eos_token to False
gguf: Setting chat_template to {% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '
' + message['content'] + '<|im_end|>' + '
'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant
' }}{% endif %}
[  1/543] Writing tensor token_embd.weight                      | size  64000 x   7168  | type F16  | T+   4
[  2/543] Writing tensor blk.0.attn_norm.weight                 | size   7168           | type F32  | T+   4
[  3/543] Writing tensor blk.0.ffn_down.weight                  | size   7168 x  20480  | type F16  | T+   4
[  4/543] Writing tensor blk.0.ffn_gate.weight                  | size  20480 x   7168  | type F16  | T+   4
[  5/543] Writing tensor blk.0.ffn_up.weight                    | size  20480 x   7168  | type F16  | T+   5
[  6/543] Writing tensor blk.0.ffn_norm.weight                  | size   7168           | type F32  | T+   5
[  7/543] Writing tensor blk.0.attn_k.weight                    | size   1024 x   7168  | type F16  | T+   5
[  8/543] Writing tensor blk.0.attn_output.weight               | size   7168 x   7168  | type F16  | T+   5
[  9/543] Writing tensor blk.0.attn_q.weight                    | size   7168 x   7168  | type F16  | T+   5
[ 10/543] Writing tensor blk.0.attn_v.weight                    | size   1024 x   7168  | type F16  | T+   5
# omitting for brevity
[543/543] Writing tensor output_norm.weight                     | size   7168           | type F32  | T+ 279
Wrote local/models/01-ai/Yi-34B-Chat/ggml-model-f16.gguf
./quantize local/models/01-ai/Yi-34B-Chat/ggml-model-f16.gguf local/models/01-ai/Yi-34B-Chat/ggml-model-q4_0.gguf 2
00:13:27 | /mnt/valerie/llama.cpp
(.venv) git:(master | Δ) λ ./quantize local/models/01-ai/Yi-34B-Chat/ggml-model-f16.gguf local/models/01-ai/Yi-34B-Chat/ggml-model-q4_0.gguf 2
main: build = 2005 (2aed77e)
main: built with cc (GCC) 13.2.1 20230801 for x86_64-pc-linux-gnu
main: quantizing 'local/models/01-ai/Yi-34B-Chat/ggml-model-f16.gguf' to 'local/models/01-ai/Yi-34B-Chat/ggml-model-q4_0.gguf' as Q4_0
llama_model_loader: loaded meta data with 21 key-value pairs and 543 tensors from local/models/01-ai/Yi-34B-Chat/ggml-model-f16.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
llama_model_loader: - kv   2:                       llama.context_length u32              = 4096
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 7168
llama_model_loader: - kv   4:                          llama.block_count u32              = 60
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 20480
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 56
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 5000000.000000
llama_model_loader: - kv  11:                          general.file_type u32              = 1
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,64000]   = ["<unk>", "<|startoftext|>", "<|endof...
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,64000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,64000]   = [2, 3, 3, 3, 3, 3, 1, 1, 1, 3, 3, 3, ...
llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  18:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  19:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  20:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - type  f32:  121 tensors
llama_model_loader: - type  f16:  422 tensors
llama_model_quantize_internal: meta size = 1509664 bytes
[   1/ 543]                    token_embd.weight - [ 7168, 64000,     1,     1], type =    f16, quantizing to q4_0 .. size =   875.00 MiB ->   246.09 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.097 0.111 0.117 0.112 0.097 0.077 0.056 0.039 0.025 0.021 
[   2/ 543]               blk.0.attn_norm.weight - [ 7168,     1,     1,     1], type =    f32, size =    0.027 MB
[   3/ 543]                blk.0.ffn_down.weight - [20480,  7168,     1,     1], type =    f16, quantizing to q4_0 .. size =   280.00 MiB ->    78.75 MiB | hist: 0.036 0.015 0.025 0.038 0.056 0.077 0.097 0.113 0.118 0.113 0.097 0.077 0.056 0.038 0.025 0.020 
[   4/ 543]                blk.0.ffn_gate.weight - [ 7168, 20480,     1,     1], type =    f16, quantizing to q4_0 .. size =   280.00 MiB ->    78.75 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.097 0.111 0.117 0.111 0.097 0.077 0.057 0.039 0.025 0.021 
[   5/ 543]                  blk.0.ffn_up.weight - [ 7168, 20480,     1,     1], type =    f16, quantizing to q4_0 .. size =   280.00 MiB ->    78.75 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.097 0.111 0.117 0.111 0.097 0.077 0.057 0.039 0.025 0.021 
[   6/ 543]                blk.0.ffn_norm.weight - [ 7168,     1,     1,     1], type =    f32, size =    0.027 MB
[   7/ 543]                  blk.0.attn_k.weight - [ 7168,  1024,     1,     1], type =    f16, quantizing to q4_0 .. size =    14.00 MiB ->     3.94 MiB | hist: 0.034 0.009 0.013 0.021 0.034 0.055 0.090 0.148 0.220 0.148 0.091 0.056 0.034 0.021 0.013 0.012 
[   8/ 543]             blk.0.attn_output.weight - [ 7168,  7168,     1,     1], type =    f16, quantizing to q4_0 .. size =    98.00 MiB ->    27.56 MiB | hist: 0.036 0.014 0.023 0.037 0.055 0.076 0.098 0.115 0.123 0.115 0.098 0.076 0.054 0.037 0.023 0.019 
[   9/ 543]                  blk.0.attn_q.weight - [ 7168,  7168,     1,     1], type =    f16, quantizing to q4_0 .. size =    98.00 MiB ->    27.56 MiB | hist: 0.035 0.010 0.015 0.024 0.037 0.058 0.089 0.142 0.212 0.142 0.090 0.058 0.037 0.024 0.015 0.013 
[  10/ 543]                  blk.0.attn_v.weight - [ 7168,  1024,     1,     1], type =    f16, quantizing to q4_0 .. size =    14.00 MiB ->     3.94 MiB | hist: 0.036 0.014 0.023 0.035 0.052 0.074 0.098 0.119 0.130 0.119 0.098 0.074 0.052 0.035 0.023 0.019 
# omitting for brevity
[ 543/ 543]                   output_norm.weight - [ 7168,     1,     1,     1], type =    f32, size =    0.027 MB
llama_model_quantize_internal: model size  = 65593.31 MB
llama_model_quantize_internal: quant size  = 18563.29 MB
llama_model_quantize_internal: hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.111 0.117 0.111 0.096 0.077 0.057 0.039 0.025 0.021 

main: quantize time = 151675.93 ms
main:    total time = 151675.93 ms
./main -m local/models/01-ai/Yi-34B-Chat/ggml-model-q4_0.gguf --color -e -s 1337 -c 4096 -n 1024 --n-gpu-layers 16 -p "<|system|> My name is Yi. I am an advanced LLM (Large Language Model). I am a intelligent, creative, and helpful assistant. <|system|>\n" --interactive --interactive-first --multiline-input --in-prefix "<|im_start|>user\n" --in-suffix " <|im_end|>\n<|im_start|>assistant\n
00:25:53 | /mnt/valerie/llama.cpp
(.venv) git:(master | Δ) λ ./main -m local/models/01-ai/Yi-34B-Chat/ggml-model-q4_0.gguf --color -e -s 1337 -c 4096 -n 1024 --n-gpu-layers 16 -p "<|system|> My name is Yi. I am an advanced LLM (Large Language Model). I am a intelligent, creative, and helpful assistant. <|system|>\n" --interactive --interactive-first --multiline-input --in-prefix "<|im_start|>user\n" --in-suffix " <|im_end|>\n<|im_start|>assistant\n"
warning: not compiled with GPU offload support, --n-gpu-layers option will be ignored
warning: see main README.md for information on enabling GPU BLAS support
Log start
main: build = 2005 (2aed77e)
main: built with cc (GCC) 13.2.1 20230801 for x86_64-pc-linux-gnu
main: seed  = 1337
llama_model_loader: loaded meta data with 22 key-value pairs and 543 tensors from local/models/01-ai/Yi-34B-Chat/ggml-model-q4_0.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
llama_model_loader: - kv   2:                       llama.context_length u32              = 4096
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 7168
llama_model_loader: - kv   4:                          llama.block_count u32              = 60
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 20480
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 56
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 5000000.000000
llama_model_loader: - kv  11:                          general.file_type u32              = 2
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,64000]   = ["<unk>", "<|startoftext|>", "<|endof...
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,64000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,64000]   = [2, 3, 3, 3, 3, 3, 1, 1, 1, 3, 3, 3, ...
llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  18:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  19:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  20:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  21:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  121 tensors
llama_model_loader: - type q4_0:  421 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: mismatch in special tokens definition ( 498/64000 vs 267/64000 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 64000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 4096
llm_load_print_meta: n_embd           = 7168
llm_load_print_meta: n_head           = 56
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 60
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 7
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 20480
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 5000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 30B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 34.39 B
llm_load_print_meta: model size       = 18.13 GiB (4.53 BPW) 
llm_load_print_meta: general.name     = LLaMA v2
llm_load_print_meta: BOS token        = 1 '<|startoftext|>'
llm_load_print_meta: EOS token        = 2 '<|endoftext|>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 315 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.21 MiB
llm_load_tensors: offloading 16 repeating layers to GPU
llm_load_tensors: offloaded 16/61 layers to GPU
llm_load_tensors:        CPU buffer size = 18563.29 MiB
...................................................................................................
llama_new_context_with_model: n_ctx      = 4096
llama_new_context_with_model: freq_base  = 5000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:        CPU KV buffer size =   960.00 MiB
llama_new_context_with_model: KV self size  =  960.00 MiB, K (f16):  480.00 MiB, V (f16):  480.00 MiB
llama_new_context_with_model:        CPU input buffer size   =    22.02 MiB
llama_new_context_with_model:        CPU compute buffer size =   539.00 MiB
llama_new_context_with_model: graph splits (measure): 1

system_info: n_threads = 8 / 16 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | 
main: interactive mode on.
Input prefix: '<|im_start|>user
'
Input suffix: ' <|im_end|>
<|im_start|>assistant
'
sampling: 
	repeat_last_n = 64, repeat_penalty = 1.100, frequency_penalty = 0.000, presence_penalty = 0.000
	top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
	mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order: 
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temp 
generate: n_ctx = 4096, n_batch = 512, n_predict = 1024, n_keep = 0


== Running in interactive mode. ==
 - Press Ctrl+C to interject at any time.
 - To return control to LLaMa, end your input with '\'.
 - To return control without starting a new line, end your input with '/'.

 <|system|> My name is Yi. I am an advanced LLM (Large Language Model). I am a intelligent, creative, and helpful assistant. <|system|>
<|im_start|>user
Hello! My name is Austin. What is your name?\ 
 <|im_end|>
<|im_start|>assistant
Hello, Austin! My name is Yi. I am an advanced LLM (Large Language Model) developed by 01.AI, and I am here to assist you with various tasks and provide helpful information. How can I help you today?<|im_end|>

Austin
That's great to know, Yi! I'm excited to work with you. Can you tell me more about what Large Language Models<|im_start|>user


llama_print_timings:        load time =    1313.28 ms
llama_print_timings:      sample time =      15.91 ms /    86 runs   (    0.19 ms per token,  5404.05 tokens per second)
llama_print_timings: prompt eval time =   23004.01 ms /    62 tokens (  371.03 ms per token,     2.70 tokens per second)
llama_print_timings:        eval time =   36849.81 ms /    86 runs   (  428.49 ms per token,     2.33 tokens per second)
llama_print_timings:       total time =  512113.82 ms /   148 tokens

@teleprint-me
Contributor

teleprint-me commented Jan 30, 2024

I did a bit more digging and found that this issue is 2-fold.

The first issue would be adding LlamaForCausalLM to the convert-hf-to-gguf.py script. This is the easy part.

The second part has a higher difficulty curve.

If you use the --awq-path flag, it might work if the LlamaForCausalLM conversion is added to the factory. It looks like support is being looked into in #4701. I don't know what happened to the convert-gptq-to-ggml.py script, but I know the other scripts were all merged and ggml was deprecated when v3 was released.

According to issue #4701:

unpack_awq: This feature is being introduced into AutoGPTQ in order to unpack the weights of AWQ. This may be another solution for unpacking.

Feel free to correct me if I'm wrong or misinterpreted anything.

@x4080

x4080 commented Jan 30, 2024

Hi, I have a problem converting https://huggingface.co/SJ-Donald/SJ-SOLAR-10.7b-DPO. Has anybody solved this for SOLAR-based models?

Using

--vocab-type hfft

the generated tokens are not correct:

<|im_start|>system<0x0A>

Any hint?

@felladrin
Contributor

Hi, I have a problem converting https://huggingface.co/SJ-Donald/SJ-SOLAR-10.7b-DPO. Has anybody solved this for SOLAR-based models?

Using

--vocab-type hfft

the generated tokens are not correct:

<|im_start|>system<0x0A>

Any hint?

There's an open PR that fixes it:

@x4080

x4080 commented Feb 3, 2024

@felladrin thanks

Contributor

github-actions bot commented Mar 18, 2024

This issue is stale because it has been open for 30 days with no activity.

@github-actions github-actions bot added the stale label Mar 18, 2024
Contributor

github-actions bot commented Apr 2, 2024

This issue was closed because it has been inactive for 14 days since being marked as stale.

@github-actions github-actions bot closed this as completed Apr 2, 2024