### Your current environment
```text
Collecting environment information...
PyTorch version: 2.2.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.29.2
Libc version: glibc-2.31
Python version: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-193.14.2.el8_2.x86_64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-80GB
GPU 1: NVIDIA A800-SXM4-80GB
GPU 2: NVIDIA A800-SXM4-80GB
GPU 3: NVIDIA A800-SXM4-80GB
Nvidia driver version: 535.104.12
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8378A CPU @ 3.00GHz
Stepping: 6
Frequency boost: enabled
CPU MHz: 3500.000
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 6000.00
Virtualization: VT-x
L1d cache: 3 MiB
L1i cache: 2 MiB
L2 cache: 80 MiB
L3 cache: 96 MiB
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.19.3
[pip3] torch==2.2.1
[pip3] triton==2.2.0
[pip3] vllm-nccl-cu12==2.18.1.0.2.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.19.3 pypi_0 pypi
[conda] torch 2.2.1 pypi_0 pypi
[conda] triton 2.2.0 pypi_0 pypi
[conda] vllm-nccl-cu12 2.18.1.0.2.0 pypi_0 pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.4.1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
        GPU0    GPU1    GPU2    GPU3    NIC0    NIC1    NIC2    NIC3    NIC4    NIC5    NIC6    NIC7    NIC8    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0    X       NV8     NV8     NV8     SYS     SYS     SYS     SYS     SYS     SYS     SYS     PXB     NODE    32-63,96-127    1               N/A
GPU1    NV8     X       NV8     NV8     SYS     SYS     SYS     SYS     SYS     SYS     SYS     PXB     NODE    32-63,96-127    1               N/A
GPU2    NV8     NV8     X       NV8     SYS     SYS     SYS     SYS     SYS     SYS     SYS     NODE    PXB     32-63,96-127    1               N/A
GPU3    NV8     NV8     NV8     X       SYS     SYS     SYS     SYS     SYS     SYS     SYS     NODE    PXB     32-63,96-127    1               N/A
NIC0    SYS     SYS     SYS     SYS     X       PIX     PIX     NODE    NODE    NODE    NODE    SYS     SYS
NIC1    SYS     SYS     SYS     SYS     PIX     X       PIX     NODE    NODE    NODE    NODE    SYS     SYS
NIC2    SYS     SYS     SYS     SYS     PIX     PIX     X       NODE    NODE    NODE    NODE    SYS     SYS
NIC3    SYS     SYS     SYS     SYS     NODE    NODE    NODE    X       PIX     NODE    NODE    SYS     SYS
NIC4    SYS     SYS     SYS     SYS     NODE    NODE    NODE    PIX     X       NODE    NODE    SYS     SYS
NIC5    SYS     SYS     SYS     SYS     NODE    NODE    NODE    NODE    NODE    X       PIX     SYS     SYS
NIC6    SYS     SYS     SYS     SYS     NODE    NODE    NODE    NODE    NODE    PIX     X       SYS     SYS
NIC7    PXB     PXB     NODE    NODE    SYS     SYS     SYS     SYS     SYS     SYS     SYS     X       NODE
NIC8    NODE    NODE    PXB     PXB     SYS     SYS     SYS     SYS     SYS     SYS     SYS     NODE    X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
NIC1: mlx5_1
NIC2: mlx5_2
NIC3: mlx5_3
NIC4: mlx5_4
NIC5: mlx5_5
NIC6: mlx5_6
NIC7: mlx5_7
NIC8: mlx5_8
```
### 🐛 Describe the bug
I'm launching the server with `python -m vllm.entrypoints.openai.api_server --port 7801 --host 0.0.0.0 --model /mnt/model/llama3-70B-instruct --served-model-name vllm_llama3_70B_instruct --tensor-parallel-size 4 --trust-remote-code`. After deploying Llama 3 this way, we ran concurrent performance tests against the model. It responded normally at first, but about 10 minutes into the test vLLM reported the following error (I installed vLLM 0.4.1 from source):
```text
ERROR 04-23 16:19:04 async_llm_engine.py:499] Engine iteration timed out. This should never happen!
ERROR 04-23 16:19:04 async_llm_engine.py:43] Engine background task failed
ERROR 04-23 16:19:04 async_llm_engine.py:43] Traceback (most recent call last):
ERROR 04-23 16:19:04 async_llm_engine.py:43] File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 470, in engine_step
ERROR 04-23 16:19:04 async_llm_engine.py:43] request_outputs = await self.engine.step_async()
ERROR 04-23 16:19:04 async_llm_engine.py:43] File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 213, in step_async
ERROR 04-23 16:19:04 async_llm_engine.py:43] output = await self.model_executor.execute_model_async(
ERROR 04-23 16:19:04 async_llm_engine.py:43] File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/vllm/executor/ray_gpu_executor.py", line 424, in execute_model_async
ERROR 04-23 16:19:04 async_llm_engine.py:43] all_outputs = await self._run_workers_async(
ERROR 04-23 16:19:04 async_llm_engine.py:43] File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/vllm/executor/ray_gpu_executor.py", line 414, in _run_workers_async
ERROR 04-23 16:19:04 async_llm_engine.py:43] all_outputs = await asyncio.gather(*coros)
ERROR 04-23 16:19:04 async_llm_engine.py:43] asyncio.exceptions.CancelledError
ERROR 04-23 16:19:04 async_llm_engine.py:43]
ERROR 04-23 16:19:04 async_llm_engine.py:43] During handling of the above exception, another exception occurred:
ERROR 04-23 16:19:04 async_llm_engine.py:43]
ERROR 04-23 16:19:04 async_llm_engine.py:43] Traceback (most recent call last):
ERROR 04-23 16:19:04 async_llm_engine.py:43] File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
ERROR 04-23 16:19:04 async_llm_engine.py:43] return fut.result()
ERROR 04-23 16:19:04 async_llm_engine.py:43] asyncio.exceptions.CancelledError
ERROR 04-23 16:19:04 async_llm_engine.py:43]
ERROR 04-23 16:19:04 async_llm_engine.py:43] The above exception was the direct cause of the following exception:
ERROR 04-23 16:19:04 async_llm_engine.py:43]
ERROR 04-23 16:19:04 async_llm_engine.py:43] Traceback (most recent call last):
ERROR 04-23 16:19:04 async_llm_engine.py:43] File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 38, in _raise_exception_on_finish
ERROR 04-23 16:19:04 async_llm_engine.py:43] task.result()
ERROR 04-23 16:19:04 async_llm_engine.py:43] File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 496, in run_engine_loop
ERROR 04-23 16:19:04 async_llm_engine.py:43] has_requests_in_progress = await asyncio.wait_for(
ERROR 04-23 16:19:04 async_llm_engine.py:43] File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
ERROR 04-23 16:19:04 async_llm_engine.py:43] raise exceptions.TimeoutError() from exc
ERROR 04-23 16:19:04 async_llm_engine.py:43] asyncio.exceptions.TimeoutError
ERROR:asyncio:Exception in callback functools.partial(<function _raise_exception_on_finish at 0x7f4124ad08b0>, error_callback=<bound method AsyncLLMEngine._error_callback of <vllm.engine.async_llm_engine.AsyncLLMEngine object at 0x7f412c39bac0>>)
handle: <Handle functools.partial(<function _raise_exception_on_finish at 0x7f4124ad08b0>, error_callback=<bound method AsyncLLMEngine._error_callback of <vllm.engine.async_llm_engine.AsyncLLMEngine object at 0x7f412c39bac0>>)>
Traceback (most recent call last):
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 470, in engine_step
request_outputs = await self.engine.step_async()
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 213, in step_async
output = await self.model_executor.execute_model_async(
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/vllm/executor/ray_gpu_executor.py", line 424, in execute_model_async
all_outputs = await self._run_workers_async(
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/vllm/executor/ray_gpu_executor.py", line 414, in _run_workers_async
all_outputs = await asyncio.gather(*coros)
asyncio.exceptions.CancelledError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
return fut.result()
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 38, in _raise_exception_on_finish
task.result()
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 496, in run_engine_loop
has_requests_in_progress = await asyncio.wait_for(
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
raise exceptions.TimeoutError() from exc
asyncio.exceptions.TimeoutError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "uvloop/cbhandles.pyx", line 63, in uvloop.loop.Handle._run
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 45, in _raise_exception_on_finish
raise AsyncEngineDeadError(
vllm.engine.async_llm_engine.AsyncEngineDeadError: Task finished unexpectedly. This should never happen! Please open an issue on Github. See stack trace above for the actual cause.
INFO 04-23 16:19:04 async_llm_engine.py:154] Aborted request cmpl-cb9ae6d5b74b48a28f23d9f4c323a104.
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 470, in engine_step
request_outputs = await self.engine.step_async()
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 213, in step_async
output = await self.model_executor.execute_model_async(
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/vllm/executor/ray_gpu_executor.py", line 424, in execute_model_async
all_outputs = await self._run_workers_async(
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/vllm/executor/ray_gpu_executor.py", line 414, in _run_workers_async
all_outputs = await asyncio.gather(*coros)
asyncio.exceptions.CancelledError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
return fut.result()
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 419, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in call
return await self.app(scope, receive, send)
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in call
await super().call(scope, receive, send)
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/starlette/applications.py", line 123, in call
await self.middleware_stack(scope, receive, send)
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/starlette/middleware/errors.py", line 186, in call
raise exc
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/starlette/middleware/errors.py", line 164, in call
await self.app(scope, receive, _send)
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/starlette/middleware/cors.py", line 83, in call
await self.app(scope, receive, send)
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 62, in call
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/starlette/routing.py", line 758, in call
await self.middleware_stack(scope, receive, send)
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/starlette/routing.py", line 778, in app
await route.handle(scope, receive, send)
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/starlette/routing.py", line 299, in handle
await self.app(scope, receive, send)
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/starlette/routing.py", line 79, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/starlette/routing.py", line 74, in app
response = await func(request)
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/fastapi/routing.py", line 299, in app
raise e
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/fastapi/routing.py", line 294, in app
raw_response = await run_endpoint_function(
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
return await dependant.call(**values)
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 89, in create_chat_completion
generator = await openai_serving_chat.create_chat_completion(
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/vllm/entrypoints/openai/serving_chat.py", line 95, in create_chat_completion
return await self.chat_completion_full_generator(
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/vllm/entrypoints/openai/serving_chat.py", line 258, in chat_completion_full_generator
async for res in result_generator:
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 661, in generate
raise e
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 655, in generate
async for request_output in stream:
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 77, in anext
raise result
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 38, in _raise_exception_on_finish
task.result()
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 496, in run_engine_loop
has_requests_in_progress = await asyncio.wait_for(
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
raise exceptions.TimeoutError() from exc
asyncio.exceptions.TimeoutError
INFO 04-23 16:19:04 async_llm_engine.py:154] Aborted request cmpl-ca47d0961c59407ab90792f513566cbd.
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 470, in engine_step
request_outputs = await self.engine.step_async()
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 213, in step_async
output = await self.model_executor.execute_model_async(
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/vllm/executor/ray_gpu_executor.py", line 424, in execute_model_async
all_outputs = await self._run_workers_async(
File "/usr/local/miniconda3/envs/vllm_llama3/lib/python3.10/site-packages/vllm/executor/ray_gpu_executor.py", line 414, in _run_workers_async
all_outputs = await asyncio.gather(*coros)
asyncio.exceptions.CancelledError
```
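For reference, the concurrent load we were applying looked roughly like the sketch below. This is a minimal illustration against the OpenAI-compatible `/v1/chat/completions` endpoint started above; the prompt, concurrency level, and request count are placeholder values rather than the exact parameters of our test:

```python
# Minimal sketch of the concurrent load pattern (illustrative values only).
# Assumes the api_server launched above is reachable on localhost:7801.
import concurrent.futures

import requests

URL = "http://localhost:7801/v1/chat/completions"
PAYLOAD = {
    "model": "vllm_llama3_70B_instruct",
    "messages": [{"role": "user", "content": "Hello"}],  # placeholder prompt
    "max_tokens": 512,
}


def send_request(i: int) -> int:
    # Each worker issues one blocking chat-completion request.
    resp = requests.post(URL, json=PAYLOAD, timeout=600)
    return resp.status_code


if __name__ == "__main__":
    # Keep ~32 requests in flight continuously; the engine hang appeared
    # after roughly 10 minutes of sustained concurrency.
    with concurrent.futures.ThreadPoolExecutor(max_workers=32) as pool:
        for status in pool.map(send_request, range(10_000)):
            print(status)
```

The failure does not seem tied to any particular prompt; it appears once the server has been under sustained concurrent load for a while.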
How should I solve this problem?