violent crash on Mac Mini M2 8GB RAM when trying to use GPU #2141
@siddhsql you have a simple problem: you don't have enough RAM (only 8 GB) to run 13B models. See the sizes in the README. You can run a 7B model fine.
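For context, here is a back-of-the-envelope estimate of why a 13B q5_K_M model does not fit in 8 GB. This is a hedged sketch: the ~5.5 bits-per-weight figure for q5_K_M and the fixed overhead term are approximations, and `est_model_gb` is a hypothetical helper, not part of llama.cpp.

```python
def est_model_gb(n_params: float, bits_per_weight: float,
                 overhead_gb: float = 0.5) -> float:
    """Rough memory estimate: quantized weights plus a fixed
    allowance for KV cache and scratch buffers (ballpark only)."""
    return n_params * bits_per_weight / 8 / 1e9 + overhead_gb

# q5_K_M averages roughly 5.5 bits per weight (approximate figure)
for n, label in [(7e9, "7B"), (13e9, "13B")]:
    print(f"{label}: ~{est_model_gb(n, 5.5):.1f} GB")
```

By this estimate the 13B model needs roughly 9 GB or more, which already exceeds the machine's total 8 GB of unified memory, while a 7B model stays comfortably under it.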
Yes, I understand that, thanks for the reply. But the program should not crash violently like it did. It should abort gracefully; the computer choking up and rebooting is not a good experience. I am afraid to run it on even 7B now.
Whenever there's an infinite loop, this behavior is typical of Apple Mac GPUs. The GPU is frozen and won't exit the command. After you reset the computer, nothing is broken; it looks much scarier than it is.
Where is the infinite loop in this case?
It doesn't have to be an infinite loop. Sometimes giving the GPU a compute workload that is too heavy causes it to go rogue as well. Often some kind of fault has happened internally, for example an out-of-bounds memory access. On iOS, the GPU goes rogue less often because a watchdog aborts very long command buffers. On Mac, you have to restart the entire computer. Mac is probably this way to give more flexibility: you don't have to actively check whether all your command buffers will complete in under 100 ms.
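The watchdog idea described above can be illustrated with a toy host-side sketch. This is a hedged, purely cooperative illustration in Python: Metal's real watchdog preempts hung GPU work at the driver level and is not exposed as an API like this, and `submit_with_watchdog` is an invented name for the illustration.

```python
import time

def submit_with_watchdog(command_buffer_steps, timeout_s=0.1):
    """Toy model of a GPU watchdog: run a 'command buffer'
    (here, a list of step callables) and abort the whole buffer
    if it overruns its time budget. Cooperative only -- a single
    step stuck in an infinite loop would still hang, which is
    exactly why a real watchdog must preempt from outside."""
    deadline = time.monotonic() + timeout_s
    for step in command_buffer_steps:
        if time.monotonic() > deadline:
            return "aborted"          # watchdog fires: buffer killed
        step()
    return "completed"

# A buffer that finishes quickly vs. one that overruns the budget:
fast = [lambda: None] * 10
slow = [lambda: time.sleep(0.05)] * 100
```

Calling `submit_with_watchdog(fast)` finishes within the budget, while `submit_with_watchdog(slow)` is aborted partway through, mirroring how iOS kills very long command buffers while macOS lets them run.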
This issue was closed because it has been inactive for 14 days since being marked as stale.
Prerequisites
Please answer the following questions for yourself before submitting an issue.
Expected Behavior
I have an M2 Mac Mini with 8 GB unified memory. I tried to run
llama.cpp
as explained here: #1642

Current Behavior

My computer froze and rebooted after some time. I got a brief flash of a pink screen of death. I retried several times and got the same behavior. Once, instead of crashing, I got an assert in the following code:

Each time I was able to see console output saying it's trying to load GPU buffers, similar to what we see in the video in #1642.
The model I was trying is
gpt4-x-vicuna-13B.ggmlv3.q5_K_M.bin

Over here I see a funny comment:

How is crashing acceptable behaviour?
Environment and Context
Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except for under certain specific conditions.
M2 Mac Mini w/ 8 GB memory
Failure Information (for bugs)
Please help provide information about the failure if this is a bug. If it is not a bug, please remove the rest of this template.
Steps to Reproduce
See above.
Failure Logs