
Commit 53268cb

YixinSong-e and hodlen authored
Readme reorg (ggml-org#12)
* add TLDR and hw support
* enrich features section
* update model weights
* minor on README commands
* minor on features
* Update README.md

---------

Co-authored-by: Holden <[email protected]>
1 parent 603c771 commit 53268cb

1 file changed (+38 -32 lines)

README.md

Lines changed: 38 additions & 32 deletions
@@ -1,5 +1,7 @@
 # PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU
----
+
+## TL;DR
+PowerInfer is a CPU/GPU LLM inference engine leveraging **activation locality** for your device.
 
 ## Demo 🔥
 
@@ -9,49 +11,50 @@ PowerInfer v.s. llama.cpp on a single RTX 4090(24G) running Falcon(ReLU)-40B-FP1
 
 <sub>Both PowerInfer and llama.cpp were running on the same hardware and fully utilized VRAM on RTX 4090.</sub>
 
----
 ## Abstract
 
 We introduce PowerInfer, a high-speed Large Language Model (LLM) inference engine on a personal computer (PC)
-equipped with a single consumer-grade GPU. The key underlying the design of PowerInfer is exploiting the high locality
+equipped with a single consumer-grade GPU. The key idea underlying the design of PowerInfer is exploiting the high **locality**
 inherent in LLM inference, characterized by a power-law distribution in neuron activation.
+
 This distribution indicates that a small subset of neurons, termed hot neurons, are consistently activated
 across inputs, while the majority, cold neurons, vary based on specific inputs.
 PowerInfer exploits such an insight to design a GPU-CPU hybrid inference engine:
 hot-activated neurons are preloaded onto the GPU for fast access, while cold-activated neurons are computed
 on the CPU, thus significantly reducing GPU memory demands and CPU-GPU data transfers.
 PowerInfer further integrates adaptive predictors and neuron-aware sparse operators,
 optimizing the efficiency of neuron activation and computational sparsity.
+
 Evaluation shows that PowerInfer attains an average token generation rate of 13.20 tokens/s, with a peak of 29.08 tokens/s, across various LLMs (including OPT-175B) on a single NVIDIA RTX 4090 GPU,
 only 18% lower than that achieved by a top-tier server-grade A100 GPU.
 This significantly outperforms llama.cpp by up to 11.69x while retaining model accuracy.
 
-## Feature
-PowerInfer is a high-speed and easy-to-use inference engine for deploying LLM locally. Interestingly, we observe that in ReLU LLM, every neuron is an expert! And a small subset of neurons consistently contributes to the output.
+## Features
+PowerInfer is a high-speed and easy-to-use inference engine for deploying LLMs locally.
+
 PowerInfer is fast with:
 
-- Exploiting the high locality in LLM inference
-- Neuron-aware hybrid CPU/GPU sparse operator
-- Neuron granularity offloading
+- **Locality-centric design**: Utilizes sparse activation and the 'hot'/'cold' neuron concept for efficient LLM inference, ensuring high speed with lower resource demands.
+- **Hybrid CPU/GPU Utilization**: Seamlessly integrates the memory and computation capabilities of the CPU and GPU for a balanced workload and faster processing.
 
 PowerInfer is flexible and easy to use with:
 
-- Integration with popular [ReLU-sparse models](https://huggingface.co/SparseLLM)
-- Low-latency serving locally with one single consumer-grade GPU
+- **Easy Integration**: Compatible with popular [ReLU-sparse models](https://huggingface.co/SparseLLM) that are as accurate as their dense counterparts.
+- **Local Deployment Ease**: Designed and deeply optimized for local deployment on consumer-grade hardware, enabling low-latency LLM inference and serving on a single GPU.
+- **Backward Compatibility**: While distinct from llama.cpp, you can use most of the `examples/` programs the same way as in llama.cpp, such as the server and batched generation (see the sketch below). PowerInfer also supports inference with llama.cpp's model weights for compatibility purposes, but there will be no performance gain.
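To make the backward-compatibility point above concrete, here is a minimal, hedged sketch of launching the llama.cpp-style server example with a PowerInfer GGUF model. It assumes the `server` example binary is built alongside `main` and accepts the usual llama.cpp flags; the model path is a placeholder, not part of this commit.

```bash
# Hedged sketch: serve a PowerInfer GGUF model via the llama.cpp-style server example.
# Assumes `server` was built with the rest of the examples; the model path is a placeholder.
./build/bin/server -m /PATH/TO/MODEL -c 2048 --host 127.0.0.1 --port 8080
```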
 
-PowerInfer supports the following models:
+You can use these models with PowerInfer today:
 
-- Falcon-40B model
-- Llama family models
+- Falcon-40B
+- Llama2 family
 
-Now PowerInfer supports the following architectures:
+We have tested PowerInfer on the following platforms:
 
-- Intel CPU with AVX2 instructions
-- Nvidia GPU
+- x86-64 CPU (with AVX2 instructions) on Linux
+- x86-64 CPU and NVIDIA GPU on Linux
+- Apple M-series chips on macOS (we have not yet optimized for Mac, so the performance improvement is not significant for now)
 
 
-
-
 ## Getting Started
 
 - [Installation](#setup--installation)
@@ -67,7 +70,7 @@ cd PowerInfer
 ### Build
 In order to build PowerInfer you have two different options. These commands are supposed to be run from the root directory of the project.
 
-Using `make` on Linux or MacOS:
+Using `make` on Linux or macOS:
 ```bash
 make
 ```
@@ -85,31 +88,34 @@ cmake --build build --config Release
 ```
 
 ## Model Weights
-As for now, we have not released the predictor training code, we suggest you download the sparse model from huggingface in the following link.
-| Base Model | GGUF Format Link | Original Model |
-|------------|------------------|----------------|
-| LLaMA(ReLU)-2-7B | [PowerInfer/ReluLLaMA-7B-PowerInfer-GGUF](https://huggingface.co/PowerInfer/ReluLLaMA-7B-PowerInfer-GGUF) | [SparseLLM/ReluLLaMA-7B](https://huggingface.co/SparseLLM/ReluLLaMA-7B) |
-| LLaMA(ReLU)-2-13B | [PowerInfer/ReluLLaMA-13B-PowerInfer-GGUF](https://huggingface.co/PowerInfer/ReluLLaMA-13B-PowerInfer-GGUF) | [SparseLLM/ReluLLaMA-13B](https://huggingface.co/SparseLLM/ReluLLaMA-13B) |
-| Falcon(ReLU)-40B | [PowerInfer/ReluFalcon-40B-PowerInfer-GGUF](https://huggingface.co/PowerInfer/ReluFalcon-40B-PowerInfer-GGUF) | [SparseLLM/ReluFalcon-40B](https://huggingface.co/SparseLLM/ReluFalcon-40B) |
-| LLaMA(ReLU)-2-70B | [PowerInfer/ReluLLaMA-70B-PowerInfer-GGUF](https://huggingface.co/PowerInfer/ReluLLaMA-70B-PowerInfer-GGUF) | [SparseLLM/ReluLLaMA-70B](https://huggingface.co/SparseLLM/ReluLLaMA-70B) |
+
+PowerInfer models are stored in a special format called *PowerInfer GGUF*, based on the GGUF format, which contains both the LLM weights and the predictor weights. You can download PowerInfer GGUF weights from Hugging Face or convert them from the original model weights and predictor weights.
+
+| Base Model | PowerInfer GGUF Format | Original Model | Predictor |
+|------------|------------------|----------------|---------------------|
+| LLaMA(ReLU)-2-7B | [PowerInfer/ReluLLaMA-7B-PowerInfer-GGUF](https://huggingface.co/PowerInfer/ReluLLaMA-7B-PowerInfer-GGUF) | [SparseLLM/ReluLLaMA-7B](https://huggingface.co/SparseLLM/ReluLLaMA-7B) | [PowerInfer/ReluLLaMA-7B-Predictor](https://huggingface.co/PowerInfer/ReluLLaMA-7B-Predictor) |
+| LLaMA(ReLU)-2-13B | [PowerInfer/ReluLLaMA-13B-PowerInfer-GGUF](https://huggingface.co/PowerInfer/ReluLLaMA-13B-PowerInfer-GGUF) | [SparseLLM/ReluLLaMA-13B](https://huggingface.co/SparseLLM/ReluLLaMA-13B) | [PowerInfer/ReluLLaMA-13B-Predictor](https://huggingface.co/PowerInfer/ReluLLaMA-13B-Predictor) |
+| Falcon(ReLU)-40B | [PowerInfer/ReluFalcon-40B-PowerInfer-GGUF](https://huggingface.co/PowerInfer/ReluFalcon-40B-PowerInfer-GGUF) | [SparseLLM/ReluFalcon-40B](https://huggingface.co/SparseLLM/ReluFalcon-40B) | [PowerInfer/ReluFalcon-40B-Predictor](https://huggingface.co/PowerInfer/ReluFalcon-40B-Predictor) |
+| LLaMA(ReLU)-2-70B | [PowerInfer/ReluLLaMA-70B-PowerInfer-GGUF](https://huggingface.co/PowerInfer/ReluLLaMA-70B-PowerInfer-GGUF) | [SparseLLM/ReluLLaMA-70B](https://huggingface.co/SparseLLM/ReluLLaMA-70B) | [PowerInfer/ReluLLaMA-70B-Predictor](https://huggingface.co/PowerInfer/ReluLLaMA-70B-Predictor) |
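All of the PowerInfer GGUF weights above are hosted on Hugging Face, so one possible way to fetch a repository is with `huggingface-cli` from the `huggingface_hub` package. This is a hedged, editorial example rather than part of the commit; the local directory name is arbitrary.

```bash
# Hedged example: download a PowerInfer GGUF repository from Hugging Face.
# Requires `pip install huggingface_hub`; the --local-dir value is an arbitrary placeholder.
huggingface-cli download PowerInfer/ReluLLaMA-7B-PowerInfer-GGUF --local-dir ./ReluLLaMA-7B-PowerInfer-GGUF
```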
 
 ## Inference
-- If you just have CPU:
+
+For CPU-only inference, or for CPU-GPU hybrid inference using all available VRAM, run PowerInfer with:
 ```bash
-./build/bin/main -m /PATH/TO/MODEL -n $(output_token_count) -t $(thread_num) -p $(prompt)
+./build/bin/main -m /PATH/TO/MODEL -n $output_token_count -t $thread_num -p $prompt
 ```
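For concreteness, a filled-in invocation might look like the hedged sketch below; the model path, token count, thread count, and prompt are illustrative placeholders, not values prescribed by the commit.

```bash
# Hedged example: generate 128 tokens with 8 CPU threads from a short prompt.
# Replace the model path with wherever your PowerInfer GGUF file actually lives.
./build/bin/main -m ./ReluLLaMA-7B-PowerInfer-GGUF/ReluLLaMA-7B-PowerInfer.gguf -n 128 -t 8 -p "Once upon a time"
```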
-- If you have CPU with one GPU:
+If you want to limit the GPU's VRAM usage:
 ```bash
-./build/bin/main -m /PATH/TO/MODEL -n $(output_token_count) -t $(thread_num) -p $(prompt) --vram-budget $(GPU_VRAM_OFFLOADING)
+./build/bin/main -m /PATH/TO/MODEL -n $output_token_count -t $thread_num -p $prompt --vram-budget $vram_gb
 ```
 
 As for now, it requires an offline-generated "GPU index" file to split the FFNs onto the GPU. If you want to try it, use the following command to generate the GPU index file:
 ```bash
-python scripts/export-gpu-split.py $(activation_count_path) $(output_idx_path) solver
+python scripts/export-gpu-split.py $activation_count_path $output_idx_path solver
 ```
 Then, you can run PowerInfer with the GPU index:
 ```bash
-./build/bin/main -m /PATH/TO/MODEL -n $(output_token_count) -t $(thread_num) -p $(prompt) --gpu-index $(split_path)
+./build/bin/main -m /PATH/TO/MODEL -n $output_token_count -t $thread_num -p $prompt --gpu-index $split_path
 ```
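As a worked illustration of the two-step GPU-index flow above, the hedged sketch below chains index generation and inference. Every path and value is a placeholder, and it assumes per-neuron activation statistics for your model have already been produced.

```bash
# Hedged sketch: build a GPU index from precomputed activation statistics,
# then run inference with it. All paths and values are placeholders.
python scripts/export-gpu-split.py ./activation-counts ./llama-7b.gpu-index.bin solver
./build/bin/main -m /PATH/TO/MODEL -n 128 -t 8 -p "Once upon a time" --gpu-index ./llama-7b.gpu-index.bin
```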
 
 ## Evaluation
