Allow local llama library usage #28
Conversation
I agree we may need an escape hatch of sorts at the moment (e.g. #30), and @MillionthOdin16 has brought this up previously in #12. I'm not sure that using the local directory is the right solution; I would prefer something more explicit, such as an environment variable or even a custom function to update
I made it into an environment variable.
@SagsMug great work
llama_cpp/llama_cpp.py (outdated)

@@ -25,6 +25,12 @@ def _load_shared_library(lib_base_name):
         _base_path / f"{lib_base_name}{lib_ext}"
     ]

+    if ("LLAMA_LIB" in os.environ):
@SagsMug great work. Can we change the environment variable to LLAMA_CPP_LIB?
> @SagsMug great work. Can we change the environment variable to LLAMA_CPP_LIB?
Done!
Sometimes it's nice to try different library versions and different forks of llama.cpp, etc. (provided they still use the same API).
This commit adds the local path as the first place to search for llama libraries.