
Commit d93d2d7

[XPU] Make pp group initialized for pipeline-parallelism (#11648)
Signed-off-by: yisheng <[email protected]>
1 parent d0169e1 commit d93d2d7


vllm/worker/xpu_worker.py

Lines changed: 6 additions & 0 deletions
@@ -11,6 +11,7 @@
 from vllm.config import VllmConfig
 from vllm.distributed import (ensure_model_parallel_initialized,
                               init_distributed_environment)
+from vllm.distributed.parallel_state import get_pp_group
 from vllm.logger import init_logger
 from vllm.model_executor import set_random_seed
 from vllm.platforms import current_platform
@@ -176,3 +177,8 @@ def init_worker_distributed_environment(self) -> None:
             parallel_config.pipeline_parallel_size)
         # global all_reduce needed for overall oneccl warm up
         torch.distributed.all_reduce(torch.zeros(1).xpu())
+
+        if parallel_config.pipeline_parallel_size > 1:
+            # Add pp group init to avoid
+            # p2p communication as the first call
+            get_pp_group().all_reduce(torch.zeros(1).xpu())
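
Why the warm-up helps: the diff's comment says the pp group init is added "to avoid p2p communication as the first call". oneCCL warms up a group's communicator on the first operation issued on it; by performing a trivial all_reduce on the pipeline-parallel group during worker initialization (mirroring the existing global all_reduce warm-up on the line above), the group's first operation is a collective rather than the point-to-point send/recv that pipeline parallelism would otherwise issue during the first forward pass. A minimal sketch of this warm-up pattern, assuming vLLM's distributed environment is already initialized on an XPU device; the helper name warm_up_pp_group is illustrative, not part of the commit:

    import torch

    from vllm.distributed.parallel_state import get_pp_group

    def warm_up_pp_group(pipeline_parallel_size: int) -> None:
        # Illustrative helper: issue a one-element collective on the
        # pipeline-parallel group so its communicator is established
        # before any point-to-point send/recv is attempted.
        if pipeline_parallel_size > 1:
            get_pp_group().all_reduce(torch.zeros(1).xpu())

In the commit itself this logic is inlined at the end of the worker's init_worker_distributed_environment method, immediately after the global torch.distributed.all_reduce warm-up.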
