NameError: Could not load Llama model from path: D:\privateGPT\ggml-model-q4_0.bin #113

@michael7908

Description

I checked this issue with GPT-4 and this is what I got:

The error message indicates that the Llama model you're trying to use is in an old format that is no longer supported, and it points to a URL for more information: ggml-org/llama.cpp#1305.
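
For reference, privateGPT appears to load the model through a LangChain wrapper that re-raises any failure as the generic "NameError: Could not load Llama model from path" message, so loading the file directly with llama-cpp-python usually surfaces the underlying llama.cpp error. A minimal sketch, assuming the llama-cpp-python package is installed:

```python
# Minimal sketch (assumes the llama-cpp-python package is installed).
# Loading the file directly, instead of through privateGPT, should print the
# underlying llama.cpp error rather than the generic NameError wrapper.
from llama_cpp import Llama

MODEL_PATH = r"D:\privateGPT\ggml-model-q4_0.bin"

try:
    llm = Llama(model_path=MODEL_PATH)
    print("Model loaded successfully")
except Exception as exc:
    # On a model file predating llama.cpp#1305 this typically reports that
    # the file format is too old / no longer supported.
    print(f"Failed to load {MODEL_PATH}: {exc}")
```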

As of my knowledge cutoff in September 2021, I can't provide direct insight into the specific contents of that pull request or the subsequent changes in the Llama library. You should visit the URL provided in the error message for the most accurate and up-to-date information.

However, based on the error message, it seems like you need to convert your Llama model to a new format that is supported by the current version of the Llama library. You should look for documentation or tools provided by the Llama library that can help you perform this conversion.
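
To check whether a given .bin file is still in one of the legacy formats or has already been converted, you can inspect the file's 4-byte magic. The hex constants below are the magic values I believe llama.cpp has used historically (ggml / ggmf / ggjt for legacy files, GGUF for the current format), so treat this as a rough heuristic rather than an official validator:

```python
# Hedged sketch: report which container format a model file appears to use,
# based on the 4-byte magic at the start of the file. The constants are the
# values llama.cpp has used historically (to my knowledge); unknown magics
# are reported as-is.
import struct
from pathlib import Path

MAGICS = {
    0x67676D6C: "ggml (legacy, unversioned)",
    0x67676D66: "ggmf (legacy, versioned)",
    0x67676A74: "ggjt (legacy, pre-GGUF)",
    0x46554747: "GGUF (current format)",
}

def model_format(path: str) -> str:
    magic = struct.unpack("<I", Path(path).read_bytes()[:4])[0]
    return MAGICS.get(magic, f"unknown magic 0x{magic:08x}")

print(model_format(r"D:\privateGPT\ggml-model-q4_0.bin"))
```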

If the Llama model (ggml-model-q4_0.bin) was provided to you or downloaded from a third-party source, you might also want to check if there's an updated version of the model available in the new format.

Could you please help me out on this? Thank you in advance.

Labels

primordial: Related to the primordial version of PrivateGPT, which is now frozen in favour of the new PrivateGPT
