Commit d13c652

[Doc] Update doc to work with release
Signed-off-by: wangxiyuan <[email protected]>
1 parent 17de078 commit d13c652

File tree

11 files changed: +120, −133 lines changed


.github/workflows/vllm_ascend_test.yaml

Lines changed: 1 addition & 1 deletion
@@ -48,7 +48,7 @@ jobs:
     runs-on: ascend-arm64  # actionlint-ignore: runner-label

     container:
-      image: quay.io/ascend/cann:8.0.0.beta1-910b-ubuntu22.04-py3.10
+      image: quay.io/ascend/cann:8.0.0-910b-ubuntu22.04-py3.10
       volumes:
         - /usr/local/dcmi:/usr/local/dcmi
         - /usr/local/bin/npu-smi:/usr/local/bin/npu-smi

Dockerfile

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@
 # limitations under the License.
 #

-FROM quay.io/ascend/cann:8.0.0.beta1-910b-ubuntu22.04-py3.10
+FROM quay.io/ascend/cann:8.0.0-910b-ubuntu22.04-py3.10

 # Define environments
 ENV DEBIAN_FRONTEND=noninteractive
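For anyone rebuilding locally after this change, the updated `FROM` line is picked up by a standard image build from the repository root. A minimal sketch (the tag name here is illustrative, not an official one):

```bash
# Rebuild the vllm-ascend image on top of the CANN 8.0.0 base; the tag is illustrative
docker build -t vllm-ascend:local .
```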

docs/source/conf.py

Lines changed: 4 additions & 1 deletion
@@ -65,7 +65,10 @@
     'vllm_version': 'main',
     # the branch of vllm-ascend, used in vllm-ascend clone and image tag
     # such as 'main', 'v0.7.1-dev', 'v0.7.1rc1'
-    'vllm_ascend_version': 'main'
+    'vllm_ascend_version': 'main',
+    # the newest release version of vllm, used in the quick start and container image tag.
+    # This value should be updated when a new release is cut.
+    'vllm_newest_release_version': "v0.7.1.rc1",
 }

 # Add any paths that contain templates here, relative to this directory.
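For orientation, the new substitution is consumed by the `{code-block} bash` directives marked `:substitutions:` in the quick start below, where the container image tag is written as `|vllm_newest_release_version|`. A minimal sketch of what that renders to, assuming the value configured above (the exact published tag may differ):

```bash
# |vllm_newest_release_version| renders to the value set in conf.py;
# treat the tag below as an illustration of the rendered output.
export IMAGE=ghcr.io/vllm-project/vllm-ascend:v0.7.1.rc1
docker pull $IMAGE
```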

docs/source/developer_guide/contributing.md

Lines changed: 3 additions & 2 deletions
@@ -98,8 +98,9 @@ Only specific types of PRs will be reviewed. The PR title is prefixed appropriately
 - `[CI]` for build or continuous integration improvements.
 - `[Misc]` for PRs that do not fit the above categories. Please use this sparingly.

-> [!NOTE]
-> If the PR spans more than one category, please include all relevant prefixes.
+:::{note}
+If the PR spans more than one category, please include all relevant prefixes.
+:::

 ## Others

docs/source/developer_guide/versioning_policy.md

Lines changed: 2 additions & 2 deletions
@@ -43,15 +43,15 @@ Usually, each minor version of vLLM (such as 0.7) will correspond to a vllm-ascend
 | Branch | Status | Note |
 |-----------|------------|--------------------------------------|
 | main | Maintained | CI commitment for vLLM main branch |
-| 0.7.1-dev | Maintained | CI commitment for vLLM 0.7.1 version |
+| v0.7.1-dev | Maintained | CI commitment for vLLM 0.7.1 version |

 ## Release Compatibility Matrix

 Following is the Release Compatibility Matrix for vLLM Ascend Plugin:

 | vllm-ascend | vLLM | Python | Stable CANN | PyTorch/torch_npu |
 |--------------|--------------| --- | --- | --- |
-| v0.7.x (TBD) | v0.7.x (TBD) | 3.9 - 3.12 | 8.0.0.beta1 | 2.5.1 / 2.5.1rc1 |
+| v0.7.1.rc1 | v0.7.1 | 3.9 - 3.12 | 8.0.0 | 2.5.1 / 2.5.1.dev20250218 |

 ## Release cadence

docs/source/developer_guide/versioning_policy.zh.md

Lines changed: 2 additions & 2 deletions
@@ -43,15 +43,15 @@ vllm-ascend有主干和开发两种分支。
 | 分支 | 状态 | 备注 |
 |-----------|------------|--------------------------------------|
 | main | Maintained | 基于vLLM main分支CI看护 |
-| 0.7.1-dev | Maintained | 基于vLLM 0.7.1版本CI看护 |
+| v0.7.1-dev | Maintained | 基于vLLM 0.7.1版本CI看护 |

 ## 版本配套

 vLLM Ascend Plugin (`vllm-ascend`) 的关键配套关系如下:

 | vllm-ascend | vLLM | Python | Stable CANN | PyTorch/torch_npu |
 |--------------|---------| --- | --- | --- |
-| v0.7.x (TBD) | v0.7.x (TBD) | 3.9 - 3.12 | 8.0.0.beta1 | 2.5.1 / 2.5.1rc1 |
+| v0.7.1rc1 | v0.7.1 | 3.9 - 3.12 | 8.0.0 | 2.5.1 / 2.5.1.dev20250218 |

 ## 发布节奏

docs/source/installation.md

Lines changed: 56 additions & 30 deletions
@@ -11,7 +11,7 @@ This document describes how to install vllm-ascend manually.

 | Software | Supported version | Note |
 | ------------ | ----------------- | ---- |
-| CANN | >= 8.0.0.beta1 | Required for vllm-ascend and torch-npu |
+| CANN | >= 8.0.0 | Required for vllm-ascend and torch-npu |
 | torch-npu | >= 2.5.1rc1 | Required for vllm-ascend |
 | torch | >= 2.5.1 | Required for torch-npu and vllm |

@@ -46,7 +46,7 @@ The easiest way to prepare your software environment is using CANN image directly

 ```bash
 # Update DEVICE according to your device (/dev/davinci[0-7])
-DEVICE=/dev/davinci7
+export DEVICE=/dev/davinci7

 docker run --rm \
     --name vllm-ascend-env \
@@ -59,11 +59,14 @@ docker run --rm \
     -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
     -v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
     -v /etc/ascend_install.info:/etc/ascend_install.info \
-    -it quay.io/ascend/cann:8.0.0.beta1-910b-ubuntu22.04-py3.10 bash
+    -it quay.io/ascend/cann:8.0.0-910b-ubuntu22.04-py3.10 bash
 ```

 You can also install CANN manually:
-> NOTE: This guide takes aarc64 as an example. If you run on x86, you need to replace `aarch64` with `x86_64` for the package name shown below.
+
+:::{note}
+This guide takes aarch64 as an example. If you run on x86, you need to replace `aarch64` with `x86_64` for the package name shown below.
+:::

 ```bash
 # Create a virtual environment
@@ -83,11 +86,11 @@ chmod +x ./Ascend-cann-kernels-910b_8.0.0_linux-aarch64.run
 ./Ascend-cann-kernels-910b_8.0.0_linux-aarch64.run --install

 wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/CANN/CANN%208.0.0/Ascend-cann-nnal_8.0.0_linux-aarch64.run
-chmod +x./Ascend-cann-nnal_8.0.0_linux-aarch64.run
+chmod +x ./Ascend-cann-nnal_8.0.0_linux-aarch64.run
 ./Ascend-cann-nnal_8.0.0_linux-aarch64.run --install

 source /usr/local/Ascend/ascend-toolkit/set_env.sh
-source /usr/local/Ascend/nnal/set_env.sh
+source /usr/local/Ascend/nnal/atb/set_env.sh
 ```

 ::::
@@ -112,7 +115,30 @@ Once it's done, you can start to set up `vllm` and `vllm-ascend`.
 You can install `vllm` and `vllm-ascend` from **pre-built wheel**:

 ```bash
-pip install vllm vllm-ascend -f https://download.pytorch.org/whl/torch/
+# Install vllm from source, since `pip install vllm` doesn't work on CPU currently.
+# It'll be fixed in the next vllm release, e.g. v0.7.3.
+git clone --branch v0.7.1 https://github.com/vllm-project/vllm
+cd vllm
+VLLM_TARGET_DEVICE=empty pip install . -f https://download.pytorch.org/whl/torch/
+
+# Install vllm-ascend from pypi.
+pip install vllm-ascend -f https://download.pytorch.org/whl/torch/
+
+# Once the packages are installed, you need to install `torch-npu` manually,
+# because vllm-ascend relies on an unreleased version of torch-npu.
+# This step will be removed in the next vllm-ascend release.
+#
+# Here we take python 3.10 on aarch64 as an example. Feel free to install the correct version for your environment. See:
+#
+# https://pytorch-package.obs.cn-north-4.myhuaweicloud.com/pta/Daily/v2.5.1/20250218.4/pytorch_v2.5.1_py39.tar.gz
+# https://pytorch-package.obs.cn-north-4.myhuaweicloud.com/pta/Daily/v2.5.1/20250218.4/pytorch_v2.5.1_py310.tar.gz
+# https://pytorch-package.obs.cn-north-4.myhuaweicloud.com/pta/Daily/v2.5.1/20250218.4/pytorch_v2.5.1_py311.tar.gz
+#
+mkdir pta
+cd pta
+wget https://pytorch-package.obs.cn-north-4.myhuaweicloud.com/pta/Daily/v2.5.1/20250218.4/pytorch_v2.5.1_py310.tar.gz
+tar -xvf pytorch_v2.5.1_py310.tar.gz
+pip install ./torch_npu-2.5.1.dev20250218-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
 ```

 or build from **source code**:
@@ -136,7 +162,9 @@ pip install -e . -f https://download.pytorch.org/whl/torch/

 You can just pull the **prebuilt image** and run it with bash.

-```bash
+```{code-block} bash
+:substitutions:
+
 # Update DEVICE according to your device (/dev/davinci[0-7])
 DEVICE=/dev/davinci7
 # Update the vllm-ascend image
@@ -185,7 +213,7 @@ prompts = [
 ]

 # Create a sampling params object.
-sampling_params = SamplingParams(max_tokens=100, temperature=0.0)
+sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
 # Create an LLM.
 llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")

@@ -207,25 +235,23 @@ python example.py
 The output will be like:

 ```bash
-INFO 02-18 02:33:37 __init__.py:28] Available plugins for group vllm.platform_plugins:
-INFO 02-18 02:33:37 __init__.py:30] name=ascend, value=vllm_ascend:register
-INFO 02-18 02:33:37 __init__.py:32] all available plugins for group vllm.platform_plugins will be loaded.
-INFO 02-18 02:33:37 __init__.py:34] set environment variable VLLM_PLUGINS to control which plugins to load.
-INFO 02-18 02:33:37 __init__.py:42] plugin ascend loaded.
-INFO 02-18 02:33:37 __init__.py:174] Platform plugin ascend is activated
-INFO 02-18 02:33:50 config.py:526] This model supports multiple tasks: {'reward', 'embed', 'generate', 'score', 'classify'}. Defaulting to 'generate'.
-INFO 02-18 02:33:50 llm_engine.py:232] Initializing a V0 LLM engine (v0.7.1) with config: model='Qwen/Qwen2.5-0.5B-Instruct', speculative_config=None, tokenizer='./opt-125m', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=npu, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=./opt-125m, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=False, chunked_prefill_enabled=False, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":256}, use_cached_outputs=False,
-INFO 02-18 02:33:52 importing.py:14] Triton not installed or not compatible; certain GPU-related functions will not be available.
-Loading pt checkpoint shards: 0% Completed | 0/1 [00:00<?, ?it/s]
-Loading pt checkpoint shards: 100% Completed | 1/1 [00:00<00:00, 4.30it/s]
-Loading pt checkpoint shards: 100% Completed | 1/1 [00:00<00:00, 4.29it/s]
-
-INFO 02-18 02:33:59 executor_base.py:108] # CPU blocks: 98559, # CPU blocks: 7281
-INFO 02-18 02:33:59 executor_base.py:113] Maximum concurrency for 2048 tokens per request: 769.99x
-INFO 02-18 02:33:59 llm_engine.py:429] init engine (profile, create kv cache, warmup model) took 1.52 seconds
-Processed prompts: 100%|██████████████████████████████████████████████████| 4/4 [00:00<00:00, 4.92it/s, est. speed input: 31.99 toks/s, output: 78.73 toks/s]
-Prompt: 'Hello, my name is', Generated text: ' John, I am the daughter of Bill and Jocelyn, I am married'
-Prompt: 'The president of the United States is', Generated text: " States President. I don't like him.\nThis is my favorite comment so"
-Prompt: 'The capital of France is', Generated text: " Texas and everyone I've spoken to in the city knows the state's name,"
-Prompt: 'The future of AI is', Generated text: ' people trying to turn a good computer into a machine, not a computer being human'
+INFO 02-18 08:49:58 __init__.py:28] Available plugins for group vllm.platform_plugins:
+INFO 02-18 08:49:58 __init__.py:30] name=ascend, value=vllm_ascend:register
+INFO 02-18 08:49:58 __init__.py:32] all available plugins for group vllm.platform_plugins will be loaded.
+INFO 02-18 08:49:58 __init__.py:34] set environment variable VLLM_PLUGINS to control which plugins to load.
+INFO 02-18 08:49:58 __init__.py:42] plugin ascend loaded.
+INFO 02-18 08:49:58 __init__.py:174] Platform plugin ascend is activated
+INFO 02-18 08:50:12 config.py:526] This model supports multiple tasks: {'embed', 'classify', 'generate', 'score', 'reward'}. Defaulting to 'generate'.
+INFO 02-18 08:50:12 llm_engine.py:232] Initializing a V0 LLM engine (v0.7.1) with config: model='./Qwen2.5-0.5B-Instruct', speculative_config=None, tokenizer='./Qwen2.5-0.5B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=npu, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=./Qwen2.5-0.5B-Instruct, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=False, chunked_prefill_enabled=False, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":256}, use_cached_outputs=False,
+Loading safetensors checkpoint shards: 0% Completed | 0/1 [00:00<?, ?it/s]
+Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:00<00:00, 5.86it/s]
+Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:00<00:00, 5.85it/s]
+INFO 02-18 08:50:24 executor_base.py:108] # CPU blocks: 35064, # CPU blocks: 2730
+INFO 02-18 08:50:24 executor_base.py:113] Maximum concurrency for 32768 tokens per request: 136.97x
+INFO 02-18 08:50:25 llm_engine.py:429] init engine (profile, create kv cache, warmup model) took 3.87 seconds
+Processed prompts: 100%|██████████████████████████████████████████████████| 4/4 [00:00<00:00, 8.46it/s, est. speed input: 46.55 toks/s, output: 135.41 toks/s]
+Prompt: 'Hello, my name is', Generated text: " Shinji, a teenage boy from New York City. I'm a computer science"
+Prompt: 'The president of the United States is', Generated text: ' a very important person. When he or she is elected, many people think that'
+Prompt: 'The capital of France is', Generated text: ' Paris. The oldest part of the city is Saint-Germain-des-Pr'
+Prompt: 'The future of AI is', Generated text: ' not bright\n\nThere is no doubt that the evolution of AI will have a huge'
 ```
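As a quick sanity check after following the wheel-based install steps in the hunks above, the packages and the manually installed torch-npu build can be confirmed from the shell. A minimal sketch that only assumes the packages installed by the commands in this diff:

```bash
# Confirm the versions that pip picked up
pip show vllm vllm-ascend torch-npu

# Import check: torch and torch_npu should both load without errors
python3 -c "import torch, torch_npu; print('torch and torch_npu imported OK')"
```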

docs/source/quick_start.md

Lines changed: 20 additions & 81 deletions
@@ -6,100 +6,40 @@
 - Atlas A2 Training series (Atlas 800T A2, Atlas 900 A2 PoD, Atlas 200T A2 Box16, Atlas 300T A2)
 - Atlas 800I A2 Inference series (Atlas 800I A2)

-<!-- TODO(yikun): replace "Prepare Environment" and "Installation" with "Running with vllm-ascend container image" -->
-
-### Prepare Environment
-
-You can use the container image directly with one line command:
-
-```bash
-# Update DEVICE according to your device (/dev/davinci[0-7])
-DEVICE=/dev/davinci7
-IMAGE=quay.io/ascend/cann:8.0.rc3.beta1-910b-ubuntu22.04-py3.10
-docker run \
-    --name vllm-ascend-env --device $DEVICE \
-    --device /dev/davinci_manager --device /dev/devmm_svm --device /dev/hisi_hdc \
-    -v /usr/local/dcmi:/usr/local/dcmi -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-    -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-    -v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
-    -v /etc/ascend_install.info:/etc/ascend_install.info \
-    -v /root/.cache:/root/.cache \
-    -it --rm $IMAGE bash
-```
-
-You can verify by running below commands in above container shell:
-
-```bash
-npu-smi info
-```
-
-You will see following message:
-
-```
-+-------------------------------------------------------------------------------------------+
-| npu-smi 23.0.2                Version: 23.0.2                                               |
-+----------------------+---------------+----------------------------------------------------+
-| NPU   Name           | Health        | Power(W)   Temp(C)           Hugepages-Usage(page)  |
-| Chip                 | Bus-Id        | AICore(%)  Memory-Usage(MB)  HBM-Usage(MB)          |
-+======================+===============+====================================================+
-| 0     xxx            | OK            | 0.0        40                0    / 0               |
-| 0                    | 0000:C1:00.0  | 0          882  / 15169      0    / 32768           |
-+======================+===============+====================================================+
-```
-
-
-## Installation
-
-Prepare:
-
-```bash
-apt update
-apt install git curl vim -y
-# Config pypi mirror to speedup
-pip config set global.index-url https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
-```
-
-Create your venv
-
-```bash
-python3 -m venv .venv
-source .venv/bin/activate
-pip install --upgrade pip
-```
-
-You can install vLLM and vllm-ascend plugin by using:
+## Setup environment using container

 ```{code-block} bash
 :substitutions:

-# Install vLLM (About 5 mins)
-git clone --depth 1 --branch |vllm_version| https://github.com/vllm-project/vllm.git
-cd vllm
-VLLM_TARGET_DEVICE=empty pip install .
-cd ..
-
-# Install vLLM Ascend Plugin:
-git clone --depth 1 --branch |vllm_ascend_version| https://github.com/vllm-project/vllm-ascend.git
-cd vllm-ascend
-pip install -e .
-cd ..
-```
+# You can change the version to a suitable one based on your requirement, e.g. main
+export IMAGE=ghcr.io/vllm-project/vllm-ascend:|vllm_newest_release_version|

+docker run \
+    --name vllm-ascend \
+    --device /dev/davinci0 \
+    --device /dev/davinci_manager \
+    --device /dev/devmm_svm \
+    --device /dev/hisi_hdc \
+    -v /usr/local/dcmi:/usr/local/dcmi \
+    -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
+    -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
+    -v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
+    -v /etc/ascend_install.info:/etc/ascend_install.info \
+    -v /root/.cache:/root/.cache \
+    -p 8000:8000 \
+    -it $IMAGE bash
+```

 ## Usage

-After vLLM and vLLM Ascend plugin installation, you can start to
-try [vLLM QuickStart](https://docs.vllm.ai/en/latest/getting_started/quickstart.html).
-
-You have two ways to start vLLM on Ascend NPU:
+There are two ways to start vLLM on Ascend NPU:

 ### Offline Batched Inference with vLLM

 With vLLM installed, you can start generating texts for list of input prompts (i.e. offline batch inferencing).

 ```bash
 # Use Modelscope mirror to speed up download
-pip install modelscope
 export VLLM_USE_MODELSCOPE=true
 ```

@@ -132,7 +72,6 @@ the following command to start the vLLM server with the

 ```bash
 # Use Modelscope mirror to speed up download
-pip install modelscope
 export VLLM_USE_MODELSCOPE=true
 # Deploy vLLM server (The first run will take about 3-5 mins (10 MB/s) to download models)
 vllm serve Qwen/Qwen2.5-0.5B-Instruct &
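Once the server started above is up (the quick-start container maps port 8000), it can be exercised through the OpenAI-compatible HTTP endpoints. A minimal sketch assuming the default host, port, and the model name used above:

```bash
# List the model served by `vllm serve`
curl http://localhost:8000/v1/models

# Send a simple completion request
curl http://localhost:8000/v1/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "Qwen/Qwen2.5-0.5B-Instruct", "prompt": "The future of AI is", "max_tokens": 32}'
```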
@@ -178,7 +117,7 @@ kill -2 $VLLM_PID

 You will see output as below:
 ```
-INFO 02-12 03:34:10 launcher.py:59] Shutting down FastAPI HTTP server.
+INFO:     Shutting down FastAPI HTTP server.
 INFO:     Shutting down
 INFO:     Waiting for application shutdown.
 INFO:     Application shutdown complete.
