@@ -11,7 +11,7 @@ This document describes how to install vllm-ascend manually.
 
 | Software  | Supported version | Note |
 | --------- | ----------------- | ---- |
-| CANN      | >= 8.0.0.beta1    | Required for vllm-ascend and torch-npu |
+| CANN      | >= 8.0.0          | Required for vllm-ascend and torch-npu |
 | torch-npu | >= 2.5.1rc1       | Required for vllm-ascend |
 | torch     | >= 2.5.1          | Required for torch-npu and vllm |
 
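If `torch` and `torch-npu` are already present in your environment, a minimal sketch for checking them against the table above (CANN setup is covered in the sections below) is:

```bash
# Show the installed versions of the Python-side dependencies, if any.
pip show torch torch-npu | grep -E "Name|Version"
python -c "import torch; print(torch.__version__)"
```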
@@ -46,7 +46,7 @@ The easiest way to prepare your software environment is using CANN image directl
 
 ```bash
 # Update DEVICE according to your device (/dev/davinci[0-7])
-DEVICE=/dev/davinci7
+export DEVICE=/dev/davinci7
 
 docker run --rm \
     --name vllm-ascend-env \
@@ -59,11 +59,14 @@ docker run --rm \
     -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
     -v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
     -v /etc/ascend_install.info:/etc/ascend_install.info \
-    -it quay.io/ascend/cann:8.0.0.beta1-910b-ubuntu22.04-py3.10 bash
+    -it quay.io/ascend/cann:8.0.0-910b-ubuntu22.04-py3.10 bash
 ```
 
 You can also install CANN manually:
-> NOTE: This guide takes aarc64 as an example. If you run on x86, you need to replace `aarch64` with `x86_64` for the package name shown below.
+
+:::{note}
+This guide takes aarch64 as an example. If you run on x86, you need to replace `aarch64` with `x86_64` for the package name shown below.
+:::
 
 ```bash
 # Create a virtual environment
@@ -83,11 +86,11 @@ chmod +x ./Ascend-cann-kernels-910b_8.0.0_linux-aarch64.run
 ./Ascend-cann-kernels-910b_8.0.0_linux-aarch64.run --install
 
 wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/CANN/CANN%208.0.0/Ascend-cann-nnal_8.0.0_linux-aarch64.run
-chmod +x./Ascend-cann-nnal_8.0.0_linux-aarch64.run
+chmod +x ./Ascend-cann-nnal_8.0.0_linux-aarch64.run
 ./Ascend-cann-nnal_8.0.0_linux-aarch64.run --install
 
 source /usr/local/Ascend/ascend-toolkit/set_env.sh
-source /usr/local/Ascend/nnal/set_env.sh
+source /usr/local/Ascend/nnal/atb/set_env.sh
 ```
 
 ::::
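Whichever path you choose, a minimal sketch for confirming afterwards that the driver and toolkit are visible (assuming the default install locations used above; `npu-smi` ships with the Ascend driver):

```bash
# Should list your Ascend NPU devices.
npu-smi info
# The toolkit should be present at the default location sourced above.
ls /usr/local/Ascend/ascend-toolkit/latest
```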
@@ -112,7 +115,29 @@ Once it's done, you can start to set up `vllm` and `vllm-ascend`.
 You can install `vllm` and `vllm-ascend` from **pre-built wheel**:
 
 ```bash
-pip install vllm vllm-ascend -f https://download.pytorch.org/whl/torch/
+# Install vllm from source, since `pip install vllm` doesn't work on CPU currently.
+# It'll be fixed in the next vllm release, e.g. v0.7.3.
+git clone --branch v0.7.1 https://github.com/vllm-project/vllm
+cd vllm
+VLLM_TARGET_DEVICE=empty pip install . -f https://download.pytorch.org/whl/torch/
+
+# Install vllm-ascend from PyPI.
+pip install vllm-ascend -f https://download.pytorch.org/whl/torch/
+
+# Once the packages are installed, you need to install `torch-npu` manually,
+# because vllm-ascend relies on an unreleased version of torch-npu.
+# This step will be removed in the next vllm-ascend release.
+#
+# Here we take Python 3.10 on aarch64 as an example. Install the wheel that matches your environment. See:
+# https://pytorch-package.obs.cn-north-4.myhuaweicloud.com/pta/Daily/v2.5.1/20250218.4/pytorch_v2.5.1_py39.tar.gz
+# https://pytorch-package.obs.cn-north-4.myhuaweicloud.com/pta/Daily/v2.5.1/20250218.4/pytorch_v2.5.1_py310.tar.gz
+# https://pytorch-package.obs.cn-north-4.myhuaweicloud.com/pta/Daily/v2.5.1/20250218.4/pytorch_v2.5.1_py311.tar.gz
+#
+mkdir pta
+cd pta
+wget https://pytorch-package.obs.cn-north-4.myhuaweicloud.com/pta/Daily/v2.5.1/20250218.4/pytorch_v2.5.1_py310.tar.gz
+tar -xvf pytorch_v2.5.1_py310.tar.gz
+pip install ./torch_npu-2.5.1.dev20250218-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
 ```
 
 or build from **source code**:
@@ -136,9 +161,10 @@ You can just pull the **prebuilt image** and run it with bash.
 
 ```bash
 # Update DEVICE according to your device (/dev/davinci[0-7])
-DEVICE=/dev/davinci7
-# Update the vllm-ascend image
-IMAGE=quay.io/ascend/vllm-ascend:main
+export DEVICE=/dev/davinci7
+# Change the version to one that suits your requirement, e.g. main
+export IMAGE=ghcr.io/vllm-project/vllm-ascend:v0.7.1.rc1
+
 docker pull $IMAGE
 docker run --rm \
     --name vllm-ascend-env \
@@ -183,7 +209,7 @@ prompts = [
 ]
 
 # Create a sampling params object.
-sampling_params = SamplingParams(max_tokens=100, temperature=0.0)
+sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
 # Create an LLM.
 llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")
 
@@ -205,25 +231,23 @@ python example.py
 The output will be like:
 
 ```bash
-INFO 02-18 02:33:37 __init__.py:28] Available plugins for group vllm.platform_plugins:
-INFO 02-18 02:33:37 __init__.py:30] name=ascend, value=vllm_ascend:register
-INFO 02-18 02:33:37 __init__.py:32] all available plugins for group vllm.platform_plugins will be loaded.
-INFO 02-18 02:33:37 __init__.py:34] set environment variable VLLM_PLUGINS to control which plugins to load.
-INFO 02-18 02:33:37 __init__.py:42] plugin ascend loaded.
-INFO 02-18 02:33:37 __init__.py:174] Platform plugin ascend is activated
-INFO 02-18 02:33:50 config.py:526] This model supports multiple tasks: {'reward', 'embed', 'generate', 'score', 'classify'}. Defaulting to 'generate'.
-INFO 02-18 02:33:50 llm_engine.py:232] Initializing a V0 LLM engine (v0.7.1) with config: model='Qwen/Qwen2.5-0.5B-Instruct', speculative_config=None, tokenizer='./opt-125m', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=npu, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=./opt-125m, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=False, chunked_prefill_enabled=False, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":256}, use_cached_outputs=False,
-INFO 02-18 02:33:52 importing.py:14] Triton not installed or not compatible; certain GPU-related functions will not be available.
-Loading pt checkpoint shards:   0% Completed | 0/1 [00:00<?, ?it/s]
-Loading pt checkpoint shards: 100% Completed | 1/1 [00:00<00:00,  4.30it/s]
-Loading pt checkpoint shards: 100% Completed | 1/1 [00:00<00:00,  4.29it/s]
-
-INFO 02-18 02:33:59 executor_base.py:108] # CPU blocks: 98559, # CPU blocks: 7281
-INFO 02-18 02:33:59 executor_base.py:113] Maximum concurrency for 2048 tokens per request: 769.99x
-INFO 02-18 02:33:59 llm_engine.py:429] init engine (profile, create kv cache, warmup model) took 1.52 seconds
-Processed prompts: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00,  4.92it/s, est. speed input: 31.99 toks/s, output: 78.73 toks/s]
-Prompt: 'Hello, my name is', Generated text: ' John, I am the daughter of Bill and Jocelyn, I am married'
-Prompt: 'The president of the United States is', Generated text: " States President. I don't like him.\nThis is my favorite comment so"
-Prompt: 'The capital of France is', Generated text: " Texas and everyone I've spoken to in the city knows the state's name,"
-Prompt: 'The future of AI is', Generated text: ' people trying to turn a good computer into a machine, not a computer being human'
+INFO 02-18 08:49:58 __init__.py:28] Available plugins for group vllm.platform_plugins:
+INFO 02-18 08:49:58 __init__.py:30] name=ascend, value=vllm_ascend:register
+INFO 02-18 08:49:58 __init__.py:32] all available plugins for group vllm.platform_plugins will be loaded.
+INFO 02-18 08:49:58 __init__.py:34] set environment variable VLLM_PLUGINS to control which plugins to load.
+INFO 02-18 08:49:58 __init__.py:42] plugin ascend loaded.
+INFO 02-18 08:49:58 __init__.py:174] Platform plugin ascend is activated
+INFO 02-18 08:50:12 config.py:526] This model supports multiple tasks: {'embed', 'classify', 'generate', 'score', 'reward'}. Defaulting to 'generate'.
+INFO 02-18 08:50:12 llm_engine.py:232] Initializing a V0 LLM engine (v0.7.1) with config: model='./Qwen2.5-0.5B-Instruct', speculative_config=None, tokenizer='./Qwen2.5-0.5B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=npu, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=./Qwen2.5-0.5B-Instruct, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=False, chunked_prefill_enabled=False, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":256}, use_cached_outputs=False,
+Loading safetensors checkpoint shards:   0% Completed | 0/1 [00:00<?, ?it/s]
+Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:00<00:00,  5.86it/s]
+Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:00<00:00,  5.85it/s]
+INFO 02-18 08:50:24 executor_base.py:108] # CPU blocks: 35064, # CPU blocks: 2730
+INFO 02-18 08:50:24 executor_base.py:113] Maximum concurrency for 32768 tokens per request: 136.97x
+INFO 02-18 08:50:25 llm_engine.py:429] init engine (profile, create kv cache, warmup model) took 3.87 seconds
+Processed prompts: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00,  8.46it/s, est. speed input: 46.55 toks/s, output: 135.41 toks/s]
+Prompt: 'Hello, my name is', Generated text: " Shinji, a teenage boy from New York City. I'm a computer science"
+Prompt: 'The president of the United States is', Generated text: ' a very important person. When he or she is elected, many people think that'
+Prompt: 'The capital of France is', Generated text: ' Paris. The oldest part of the city is Saint-Germain-des-Pr'
+Prompt: 'The future of AI is', Generated text: ' not bright\n\nThere is no doubt that the evolution of AI will have a huge'
 ```
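If the `Platform plugin ascend is activated` line does not appear, a quick sanity check (a sketch using only the package and module names shown above) is to confirm both packages are installed and importable:

```bash
pip show vllm vllm-ascend | grep -E "Name|Version"
# The plugin module registered above is `vllm_ascend`; this should exit without error.
python -c "import vllm_ascend"
```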