crash on macOS with SIGABRT #342
Comments
If it helps, I stepped through the code in a debugger and it runs into a problem here:
Upon returning from this function, the following assertion fails in
The assertion fails under the debugger, but when running from the command line the assertions are disabled, so the program continues and crashes later on.
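For context: C assert() calls are compiled out when NDEBUG is defined, which optimized release builds typically set, so the binary run from the command line never performs the check that fires under the debugger. A minimal sketch of the same failure mode, using Python's assert (likewise stripped under python -O) and a hypothetical load_vocab helper:

```python
def load_vocab() -> int:
    # Hypothetical loader standing in for the native code path;
    # pretend it returns a nonsense vocabulary size.
    return -1

n_vocab = load_vocab()

# `python demo.py` fails right here with an AssertionError; `python -O demo.py`
# strips the assert, the bad value flows onward, and the failure surfaces much
# later. That is the same pattern as a C assert compiled out by NDEBUG, where
# the process eventually dies with SIGABRT or a stray memory error.
assert n_vocab > 0, "vocab size must be positive"

buf = [0] * n_vocab  # with n_vocab == -1 this silently yields an empty list
print(len(buf))      # downstream code now operates on a wrong-sized buffer
```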
Adding some more notes for self: we do see this line in the output of
So then the only code left is what comes afterwards... and it does work when running
Running the Docker image errors out as well:
Am I the only one with this problem? Can't be.
Illegal instruction usually indicates that a binary is compiled for the wrong architecture, i.e. this is a compiler configuration issue.
Correct. It seems the Docker image has a precompiled binary of llama.cpp for a different architecture. Also see:
ggml-org/llama.cpp#537
Anyway, the problem persists. I was trying to use Docker as an alternative to see if that works. Can someone please help me here? I can't believe I am the only one having this problem.
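One quick sanity check for the wrong-architecture theory is to compare the SIMD features the host CPU advertises against those the binary was compiled for. A minimal sketch; the specific flags listed (AVX/AVX2/FMA/F16C) are an assumption based on what llama.cpp builds commonly enable:

```python
import platform
import subprocess

def print_cpu_simd_features() -> None:
    """Print the SIMD feature flags the host CPU advertises."""
    if platform.system() == "Darwin":
        # On Intel macOS these sysctl keys list features such as AVX1.0 / AVX2.
        subprocess.run(["sysctl", "machdep.cpu.features",
                        "machdep.cpu.leaf7_features"])
    else:
        # Linux, e.g. inside the Docker image.
        with open("/proc/cpuinfo") as f:
            flags = next(line for line in f if line.startswith("flags"))
        wanted = {"avx", "avx2", "fma", "f16c", "sse3", "ssse3"}
        print(" ".join(tok for tok in flags.split() if tok in wanted))

print_cpu_simd_features()
```

If the precompiled binary assumes, say, AVX2 and that flag is absent from this output, an illegal-instruction crash is the expected symptom.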
Adding some more info to help anyone who runs into this problem: I tried pyllamacpp and it works (never mind the garbage output):
Closing, please reopen if the problem is reproducible with the latest
Prerequisites
Please answer the following questions for yourself before submitting an issue.
Observed Behavior
I am running v0.1.57 of the program with model weights from https://huggingface.co/TheBloke/vicuna-7B-1.1-GGML on an Intel-based macOS machine with 16GB of RAM (6GB in use). I have been able to install the application, but when I try to run it I get this:
The Python interpreter just crashes with a SIGABRT; there is no traceback printed on the screen.
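A side note for anyone debugging this: the SIGABRT is raised in native code, so the process dies before the interpreter can print a Python traceback. One way to capture at least the Python-side stacks at the moment of the abort is the standard-library faulthandler module; a minimal sketch:

```python
import faulthandler

# Install native handlers for fatal signals (SIGABRT, SIGSEGV, SIGFPE, ...)
# that dump the tracebacks of all Python threads to stderr before the
# process dies.
faulthandler.enable()

# ...then load the model and run inference as usual; if the native library
# aborts, stderr at least shows which Python call triggered it.
```

The same handlers can be enabled without code changes via python -X faulthandler or the PYTHONFAULTHANDLER=1 environment variable.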
Expected Behavior
No error.
Environment and Context
Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except for under certain specific conditions.
I am running macOS Ventura with an Intel-based 6-core CPU and 16GB of RAM; 6GB is in use.
$ uname -a
Failure Information (for bugs)
https://gist.github.com/siddhsql/ea1d8b0289896a7a0748504f6802c8ac
Steps to Reproduce
See above.
Try the following:

1. git clone https://github.com/abetlen/llama-cpp-python
2. cd llama-cpp-python
3. rm -rf _skbuild/  # delete any old builds
4. python setup.py develop
5. cd ./vendor/llama.cpp
6. Follow llama.cpp's instructions to cmake llama.cpp
7. Run llama.cpp's ./main with the same arguments you previously passed to llama-cpp-python and see if you can reproduce the issue. If you can, log an issue with llama.cpp. (A sketch of the equivalent Python-side call follows below.)

I did try this and it works. Please see below:
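For reference, a minimal sketch of the llama-cpp-python invocation that corresponds to running ./main, using the library's documented high-level API; the model filename is a placeholder for the reporter's vicuna-7B GGML weights:

```python
from llama_cpp import Llama

# Placeholder path: substitute the GGML weights that trigger the crash.
llm = Llama(model_path="./models/vicuna-7b-1.1.ggmlv3.q4_0.bin")

# If the bundled native library was built for the wrong architecture, or a
# failed assertion calls abort(), this is the call where the process dies
# with SIGABRT before any Python traceback can be printed.
output = llm("Q: Name the planets in the solar system. A: ", max_tokens=32)
print(output["choices"][0]["text"])
```

If ./main succeeds with the same model and prompt while this crashes, the problem likely lies in how the Python package was built rather than in llama.cpp itself.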
Failure Logs
https://gist.github.com/siddhsql/ea1d8b0289896a7a0748504f6802c8ac