Issue in convert-lora-to-ggml.py #4940
Comments
Hi @ragesh2000, this might help you out. I have explained a few things before closing the bug myself.

Hi @amygbAI, to follow the blog you mentioned I should have adapter_model.bin, but what I have is adapter_model.safetensors. How do I convert this to .bin?

See point #7 in the STEPS I mentioned while closing the bug.
What you mentioned was to save using …

Merge the adapters into the base model:
This issue was closed because it has been inactive for 14 days since being marked as stale.
I was trying to create a GGML file from LoRA weights using the following command:
python convert-lora-to-ggml.py --model /home/ragesh/Documents/conversion/mistral-7b-instruct-v0.1.Q8_0.gguf --lora /home/ragesh/Documents/conversion/adapter_model.bin
But it produces a file-not-found error.
I am sure that the base model and LoRA weights are present at the specified paths. What's happening here? Am I doing something wrong?