Issue in convert-lora-to-ggml.py #4940

Closed
ragesh2000 opened this issue Jan 14, 2024 · 6 comments

@ragesh2000

ragesh2000 commented Jan 14, 2024

I was trying to create a GGML file from LoRA weights using the following command:
python convert-lora-to-ggml.py --model /home/ragesh/Documents/conversion/mistral-7b-instruct-v0.1.Q8_0.gguf --lora /home/ragesh/Documents/conversion/adapter_model.bin
But it produces the following file-not-found error:

Traceback (most recent call last):
  File "convert-lora-to-ggml.py", line 63, in <module>
    model = torch.load(input_model, map_location="cpu")
  File "/home/ragesh/miniconda3/envs/llamacpp/lib/python3.8/site-packages/torch/serialization.py", line 988, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "/home/ragesh/miniconda3/envs/llamacpp/lib/python3.8/site-packages/torch/serialization.py", line 437, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "/home/ragesh/miniconda3/envs/llamacpp/lib/python3.8/site-packages/torch/serialization.py", line 417, in __init__
    super().__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: '--model/adapter_model.bin'

I am sure that the base model and the LoRA weights are present at the specified paths. What's happening here? Am I doing something wrong?
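The path in the traceback ('--model/adapter_model.bin') hints at what went wrong: the script appears to join its first positional argument with adapter_model.bin, so the literal flag --model ends up treated as the LoRA directory. A minimal sketch of that suspected path handling (the function name is hypothetical, not the script's actual source):

```python
import os

# Hypothetical reconstruction of the suspected path handling:
# the script seems to join its first positional argument with the
# adapter filename, so a flag string passed in that position simply
# becomes part of the resulting path.
def adapter_path(lora_dir: str) -> str:
    return os.path.join(lora_dir, "adapter_model.bin")

# Passing the literal string "--model" reproduces the path
# reported in the traceback (on POSIX systems).
print(adapter_path("--model"))
```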

@amygbAI

amygbAI commented Jan 16, 2024

Hi @ragesh2000, this might help you out. I explained a few things there before closing the bug myself:
#4896

@ragesh2000
Author

Hi @amygbAI, to follow the blog you mentioned I should have adapter_model.bin, but what I have is adapter_model.safetensors. How do I convert this to .bin?

@amygbAI

amygbAI commented Jan 24, 2024 via email

@ragesh2000
Author

What you mentioned was to save using torch.save(trainer.model.state_dict(), f"{script_args.output_dir}/adapter_model.bin").
But the training is already done. How can I save trainer.model.state_dict() now? @amygbAI

@xuefeng-xu

Merge the adapters into the base model: model = model.merge_and_unload()
See https://huggingface.co/docs/trl/main/en/use_model#use-adapters-peft

github-actions bot added the stale label Apr 6, 2024
Contributor
This issue was closed because it has been inactive for 14 days since being marked as stale.
