Huge performance discrepancy between llama-cpp-python and llama.cpp #398
Comments
Your timings show that the issue is with the load time. It's confounded by the issue of the timings not being reported correctly, but there's no other possibility. My guess, however, would be that something a bit more complicated is going on to slow down the file I/O within the library.
@johncronan
Just linking back to the last big performance effort, in #232. There was a lot of detailed info there which might be useful in figuring out where this lingering problem might lie.
@johncronan The only difference I observed aligns with the GPU utilization that I reported in the bug report. When performing inference, the GPU consistently runs at full power when using llama.cpp, but there is a gradual drop in GPU utilization when using llama-cpp-python. Based on these observations, I believe that the token generation timing reported by llama-cpp-python is incorrect. llama.cpp: [timings omitted] llama-cpp-python: [timings omitted] I power limited my 4090 to 320W.
And GPU usage sampled every 0.2 seconds: llama-cpp-python: [readings omitted]
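For anyone wanting to reproduce that measurement, here is a minimal sketch of sampling GPU utilization every 0.2 seconds. It assumes an NVIDIA GPU with nvidia-smi on the PATH and is not the tooling used in the comment above.

```python
import subprocess
import time

def sample_gpu_utilization(interval_s=0.2, duration_s=10.0):
    """Poll nvidia-smi at a fixed interval and return utilization samples (%)."""
    samples = []
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=utilization.gpu",
             "--format=csv,noheader,nounits"],
            text=True,
        )
        samples.append(int(out.strip().splitlines()[0]))  # first GPU only
        time.sleep(interval_s)
    return samples

if __name__ == "__main__":
    util = sample_gpu_utilization()
    print(f"mean GPU utilization: {sum(util) / len(util):.1f}%")
```

Running this alongside each backend while it generates text gives a rough utilization trace to compare.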
That's pretty definitive, then. Sorry to lead you astray!
I also noticed the difference in GPU usage on macOS (an entirely different GPU backend) with this wrapper, but only for a 30B model: around 50% GPU usage during inference. I also tried the example server from llama.cpp (./server); it has ~95% GPU usage most of the time.
Suggested by Reddit user VoidStarFoo in https://www.reddit.com/r/LocalLLaMA/comments/14evg0g/comment/joxwqyh/?utm_source=reddit&utm_medium=web2x&context=3 (forgive me if I'm misunderstanding or this has already been solved). The referenced code is llama-cpp-python/llama_cpp/llama.py, line 433 at commit 3e7eae4:
There's an np.concatenate call inside a loop. If I understand the timings in that thread, 17% of the time was spent there in a test. np.concatenate returns a whole new array rather than updating the old one in place, so, if I understand correctly, there's unnecessary copying going on. I don't know how the array is used and don't have time to test this at the moment, but if this is actually the issue, the fix could use standard fixed-size array operations.

If you know the size the array is going to grow to, you can pre-allocate all the space with np.zeros where it's initialized and update in place with something like array[i, :] = new_row. I don't really know numpy and the indexing might be different if I'm misunderstanding how the dimensions work, but there is definitely a way to update an array in place. If you don't know what size it is going to grow to, you can at least pre-allocate it to some large size and then, whenever it's about to overflow, create a new array twice as long, copy everything so far into the first half, and keep updating in place. Typical Python lists already do something like this, but my understanding is that there's a reason for using an np array in this case.
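A minimal sketch of the two patterns described above (this is not the library's actual code; n_ctx and n_vocab are placeholder sizes):

```python
import numpy as np

n_ctx, n_vocab = 2048, 32000  # placeholder sizes

# Slow pattern: np.concatenate allocates a brand-new array on each call,
# so appending the i-th row re-copies all i previous rows.
scores = np.zeros((0, n_vocab), dtype=np.single)
for _ in range(8):
    new_row = np.random.rand(1, n_vocab).astype(np.single)
    scores = np.concatenate((scores, new_row), axis=0)

# Faster pattern when the final size is known: allocate once, write in place.
scores = np.zeros((n_ctx, n_vocab), dtype=np.single)
for i in range(8):
    new_row = np.random.rand(n_vocab).astype(np.single)
    scores[i, :] = new_row  # no reallocation, no copying of earlier rows

# If the final size is unknown: grow by doubling, so copies are amortized.
class GrowableRows:
    def __init__(self, n_cols, capacity=16):
        self.buf = np.zeros((capacity, n_cols), dtype=np.single)
        self.length = 0

    def append(self, row):
        if self.length == self.buf.shape[0]:
            # Double the capacity and copy existing rows into the new buffer.
            new_buf = np.zeros((self.buf.shape[0] * 2, self.buf.shape[1]),
                               dtype=np.single)
            new_buf[:self.length] = self.buf
            self.buf = new_buf
        self.buf[self.length] = row
        self.length += 1
```

The pre-allocated variant is essentially what the maintainer describes further down: a single scores buffer allocated once when the model is instantiated.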
Wanted to chip in on #420:

```bash
pip uninstall llama-cpp-python
git clone https://github.com/samfundev/llama-cpp-python.git llama-cpp-python-samfundev
cd llama-cpp-python-samfundev
# Grab vendor/llama.cpp
git submodule init
git submodule update
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install -U --no-cache-dir .
```

You might want to do [...]. I notice a small speedup (I haven't formally benchmarked), but then again llama-cpp-python was never that much slower for me in the first place.
@elmisback thanks for the suggestion here. I think I was able to improve this further by keeping scores in a single numpy array that's allocated only once, when you instantiate the model; this avoids any resizing / re-allocations during sampling as well. Let me know if sampling runs faster.
I worked around the performance issue with llama-cpp-python by downgrading it to 0.2.27, as discussed here:
Summary:
When testing the latest version of llama-cpp-python (0.1.64) alongside the corresponding commit of llama.cpp, I observed that llama.cpp performs significantly faster than llama-cpp-python in terms of total time taken to execute. Additionally, GPU utilization is consistently higher for llama.cpp compared to llama-cpp-python.
Environment:
Background
First, I manually updated the textgen-webui requirements to include the latest version of llama-cpp-python (0.1.64). After installing the update, I ran tests and saw that the speed improved, but it was still much slower than llama.cpp.
To focus on llama-cpp-python's role, I wrote code to test llama-cpp-python separately.
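The test.py script itself isn't included in the issue; a minimal sketch of such a standalone benchmark, assuming a local GGML model file and the high-level Llama API from llama-cpp-python 0.1.x, might look like:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/model.ggmlv3.q4_0.bin",  # placeholder path to a local GGML model
    n_gpu_layers=60,   # offload layers to the GPU (requires a cuBLAS build)
    n_ctx=2048,
    verbose=True,      # print the sample / prompt eval / eval / total timing block
)

output = llm(
    "Write a short story about a robot learning to paint.",
    max_tokens=200,
)
print(output["choices"][0]["text"])
```

With verbose=True the wrapper prints llama.cpp's timing block, which is the basis for the comparison below.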
Steps to Reproduce:
llama-cpp-python
1. Run conda list llama-cpp-python and make sure the version is 0.1.64.
2. Run python test.py.
llama.cpp
Expected Outcome:
Similar performance and GPU utilization between llama-cpp-python and llama.cpp.
Actual Outcome:
Output of llama-cpp-python: [timings omitted]
Output of llama.cpp: [timings omitted]
For llama-cpp-python, total time is significantly larger than the sum of sample time + prompt eval time + eval time. In contrast, these times are consistent for llama.cpp.
Updated Findings
I conducted more tests and discovered additional facts that could be useful in solving the problem, in particular around the total time != sample time + prompt eval time + eval time issue. It seems that the problem has existed for quite some time: when llama.cpp was slow, it wasn't very noticeable, but now that llama.cpp is fast, it is much more evident.
I would appreciate it if this performance discrepancy could be investigated and addressed. Thank you!