
Error Converting Hugging Face Llama 3 finetuned Model to GGUF #6775


Closed
MontassarTn opened this issue Apr 19, 2024 · 5 comments

Comments

@MontassarTn

  • I encountered an issue while attempting to convert my Hugging Face Llama 3 model to GGUF format using the provided command:
!python /content/llama.cpp/convert.py vicuna-hf \
  --outfile llama3_ocr_to_xml_A1.gguf \
  --outtype q8_0
  • Error Message:
    Loading model file vicuna-hf/model-00001-of-00004.safetensors
    Loading model file vicuna-hf/model-00001-of-00004.safetensors
    Loading model file vicuna-hf/model-00002-of-00004.safetensors
    Loading model file vicuna-hf/model-00003-of-00004.safetensors
    Loading model file vicuna-hf/model-00004-of-00004.safetensors
    params = Params(n_vocab=128256, n_embd=4096, n_layer=32, n_ctx=8192, n_ff=14336, n_head=32, n_head_kv=8, n_experts=None, n_experts_used=None, f_norm_eps=1e-05, rope_scaling_type=None, f_rope_freq_base=500000.0, f_rope_scale=None, n_orig_ctx=None, rope_finetuned=None, ftype=<GGMLFileType.MostlyF16: 1>, path_model=PosixPath('vicuna-hf'))
    Traceback (most recent call last):
    File "/content/llama.cpp/convert.py", line 1548, in <module>
    main()
    File "/content/llama.cpp/convert.py", line 1515, in main
    vocab, special_vocab = vocab_factory.load_vocab(vocab_types, model_parent_path)
    File "/content/llama.cpp/convert.py", line 1417, in load_vocab
    vocab = self._create_vocab_by_path(vocab_types)
    File "/content/llama.cpp/convert.py", line 1407, in _create_vocab_by_path
    raise FileNotFoundError(f"Could not find a tokenizer matching any of {vocab_types}")
    FileNotFoundError: Could not find a tokenizer matching any of ['spm', 'hfft']
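The traceback shows `convert.py` giving up after trying only the vocab types `['spm', 'hfft']`. The likely cause: Llama 3 exports ship a byte-pair-encoding `tokenizer.json` rather than the SentencePiece `tokenizer.model` that earlier Llama-family models used, so neither default matches. A minimal sketch of that distinction, using a hypothetical helper (not part of llama.cpp) that guesses the right `--vocab-type` flag from the files present in a model directory:

```python
# Hypothetical helper, not part of llama.cpp: guess which --vocab-type
# value to pass to convert.py based on the tokenizer files a Hugging
# Face export contains.
def guess_vocab_type(filenames):
    """Return a plausible convert.py vocab type for a set of model files.

    Llama 2 / vicuna-style exports ship a SentencePiece 'tokenizer.model';
    Llama 3 exports ship a BPE 'tokenizer.json' instead, which is why the
    default ['spm', 'hfft'] search fails.
    """
    names = set(filenames)
    if "tokenizer.model" in names:
        return "spm"   # SentencePiece: convert.py's defaults handle this
    if "tokenizer.json" in names:
        return "bpe"   # BPE: needs an explicit --vocab-type bpe
    raise FileNotFoundError("no recognizable tokenizer file found")
```

For a Llama 3 export containing `tokenizer.json`, this would suggest `bpe`, matching the workaround reported later in the thread.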
@LostRuins (Collaborator)

I believe #6745 needs to be merged first.

@MontassarTn (Author)

Run with "--vocab-type bpe".
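Combining that flag with the original command gives the following invocation. This is a sketch using the paths from the original report, which will differ in other setups; `--vocab-type bpe` tells `convert.py` to accept the BPE `tokenizer.json` that Llama 3 exports ship.

```shell
# Corrected conversion command (paths taken from the original report):
python /content/llama.cpp/convert.py vicuna-hf \
  --outfile llama3_ocr_to_xml_A1.gguf \
  --outtype q8_0 \
  --vocab-type bpe
```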

@MontassarTn (Author)

I added "--vocab-type bpe", but when I try to use the model in VS Code the kernel crashes, and in LM Studio an "Error loading model" message shows up.

@MrVolts commented Apr 20, 2024

Same issue here; I have not solved it.

@phymbert (Collaborator)

Please wait for:
