Conversation

@DarkLight1337
Member

@DarkLight1337 DarkLight1337 commented Apr 16, 2025

  • Add a link from troubleshooting guide to a comprehensive list of options that can avoid OOM
  • Add sections about CUDA Graph and Multi-modal input limits to avoid OOM
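
A minimal sketch (editor's illustration, not code from this PR) of how the options these new sections describe can be combined; the model name and every value below are placeholders to adjust for your own hardware:

```python
from vllm import LLM

# Hedged example of the OOM-avoidance knobs the new docs discuss.
# The model name and all numbers are illustrative placeholders.
llm = LLM(
    model="Qwen/Qwen2.5-VL-7B-Instruct",  # placeholder multi-modal model
    gpu_memory_utilization=0.85,          # cap the fraction of GPU memory vLLM pre-allocates
    max_model_len=8192,                   # shorter context -> smaller KV cache
    enforce_eager=True,                   # skip CUDA Graph capture (saves memory, slower decode)
    limit_mm_per_prompt={"image": 1},     # bound multi-modal items accepted per prompt
)
```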

FIX #15664
FIX #16551
FIX #16570

@DarkLight1337 DarkLight1337 added the ready ONLY add when PR is ready to merge/full CI is needed label Apr 16, 2025
@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which runs a small, essential subset of CI tests to quickly catch errors. You can run the other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

Signed-off-by: DarkLight1337 <[email protected]>
Member

@hmellor hmellor left a comment

LGTM, the only issue is the DCO check.

@DarkLight1337 DarkLight1337 merged commit facbe2a into vllm-project:main Apr 16, 2025
23 checks passed
@DarkLight1337 DarkLight1337 deleted the oom-troubleshooting branch April 16, 2025 15:18
@darkness8i8

@DarkLight1337 When I tried the latest vLLM install with Llama 3.1 8B, I now get the following error:

in <cell line: 0>()
7
8
----> 9 llm = LLM(model=BASE_MODEL_NAME)
10

10 frames
/usr/local/lib/python3.11/dist-packages/vllm/utils.py in inner(*args, **kwargs)
1097 )
1098
-> 1099 return fn(*args, **kwargs)
1100
1101 return inner # type: ignore

/usr/local/lib/python3.11/dist-packages/vllm/entrypoints/llm.py in __init__(self, model, tokenizer, tokenizer_mode, skip_tokenizer_init, trust_remote_code, allowed_local_media_path, tensor_parallel_size, dtype, quantization, revision, tokenizer_revision, seed, gpu_memory_utilization, swap_space, cpu_offload_gb, enforce_eager, max_seq_len_to_capture, disable_custom_all_reduce, disable_async_output_proc, hf_token, hf_overrides, mm_processor_kwargs, task, override_pooler_config, compilation_config, **kwargs)
246
247 # Create the Engine (autoselects V0 vs V1)
--> 248 self.llm_engine = LLMEngine.from_engine_args(
249 engine_args=engine_args, usage_context=UsageContext.LLM_CLASS)
250 self.engine_class = type(self.llm_engine)

/usr/local/lib/python3.11/dist-packages/vllm/engine/llm_engine.py in from_engine_args(cls, engine_args, usage_context, stat_loggers)
520 engine_cls = V1LLMEngine
521
--> 522 return engine_cls.from_vllm_config(
523 vllm_config=vllm_config,
524 usage_context=usage_context,

/usr/local/lib/python3.11/dist-packages/vllm/v1/engine/llm_engine.py in from_vllm_config(cls, vllm_config, usage_context, stat_loggers, disable_log_stats)
113 "Set VLLM_USE_V1=0 and file and issue on Github.")
114
--> 115 return cls(vllm_config=vllm_config,
116 executor_class=Executor.get_class(vllm_config),
117 log_stats=(not disable_log_stats),

/usr/local/lib/python3.11/dist-packages/vllm/v1/engine/llm_engine.py in __init__(self, vllm_config, executor_class, log_stats, usage_context, stat_loggers, mm_registry, use_cached_outputs, multiprocess_mode)
88
89 # EngineCore (gets EngineCoreRequests and gives EngineCoreOutputs)
---> 90 self.engine_core = EngineCoreClient.make_client(
91 multiprocess_mode=multiprocess_mode,
92 asyncio_mode=False,

/usr/local/lib/python3.11/dist-packages/vllm/v1/engine/core_client.py in make_client(multiprocess_mode, asyncio_mode, vllm_config, executor_class, log_stats)
72
73 if multiprocess_mode and not asyncio_mode:
---> 74 return SyncMPClient(vllm_config, executor_class, log_stats)
75
76 return InprocClient(vllm_config, executor_class, log_stats)

/usr/local/lib/python3.11/dist-packages/vllm/v1/engine/core_client.py in __init__(self, vllm_config, executor_class, log_stats)
470 def __init__(self, vllm_config: VllmConfig, executor_class: type[Executor],
471 log_stats: bool):
--> 472 super().__init__(
473 asyncio_mode=False,
474 vllm_config=vllm_config,

/usr/local/lib/python3.11/dist-packages/vllm/v1/engine/core_client.py in __init__(self, asyncio_mode, vllm_config, executor_class, log_stats)
402
403 # Wait for engine core process(es) to start.
--> 404 self._wait_for_engine_startup()
405
406 self.utility_results: dict[int, AnyFuture] = {}

/usr/local/lib/python3.11/dist-packages/vllm/v1/engine/core_client.py in _wait_for_engine_startup(self)
408 def _wait_for_engine_startup(self):
409 # Get a sync handle to the socket which can be sync or async.
--> 410 sync_input_socket = zmq.Socket.shadow(self.input_socket)
411
412 # Wait for engine core process(es) to send ready messages.

/usr/local/lib/python3.11/dist-packages/zmq/sugar/socket.py in shadow(cls, address)
166 from zmq.utils.interop import cast_int_addr
167
--> 168 address = cast_int_addr(address)
169 return cls(shadow=address)
170

/usr/local/lib/python3.11/dist-packages/zmq/utils/interop.py in cast_int_addr(n)
27 return int(ffi.cast("size_t", n))
28
---> 29 raise ValueError("Cannot cast %r to int" % n)

ValueError: Cannot cast <zmq.Socket(zmq.ROUTER) at 0x783b0f864360> to int

@njhill
Member

njhill commented Apr 16, 2025

@darkness8i8 you need to upgrade your pyzmq version to >= 25.0 (preferably the latest, 26.4).
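
For anyone hitting the same traceback, a quick way to confirm which pyzmq version is active in the environment (illustrative snippet, not from the original comment):

```python
# Print the installed pyzmq version; anything older than 25.0 can trigger the
# "Cannot cast ... to int" error above when vLLM shadows the ZMQ socket.
import zmq

print(zmq.pyzmq_version())  # e.g. "24.0.1" would explain the failure
# Upgrading (e.g. `%pip install -U "pyzmq>=25.0"` in a notebook) and restarting
# the runtime should then let the engine start.
```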

@darkness8i8

darkness8i8 commented Apr 16, 2025

@njhill sorry, I'm a noob lol, but shouldn't that be a dependency that vLLM installs if it needs it? All I did was %pip install vllm; I did not pass a --no-deps flag or anything.

@njhill
Member

njhill commented Apr 16, 2025

@darkness8i8 yes, it should have been, and it is now; the version constraint was just missed in the last release.

It also wouldn't be a problem if you didn't already have pyzmq in your env, since pip would then install the latest version. So you must have had a preexisting older version installed (which did not get upgraded automatically as it should have been).

lionelvillard pushed a commit to lionelvillard/vllm that referenced this pull request Apr 17, 2025
yangw-dev pushed a commit to yangw-dev/vllm that referenced this pull request Apr 21, 2025
jikunshang pushed a commit to jikunshang/vllm that referenced this pull request Apr 29, 2025
lk-chen pushed a commit to lk-chen/vllm that referenced this pull request Apr 29, 2025
adobrzyn pushed a commit to HabanaAI/vllm-fork that referenced this pull request Apr 30, 2025
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Agata Dobrzyniewicz <[email protected]>
RichardoMrMu pushed a commit to RichardoMrMu/vllm that referenced this pull request May 12, 2025

Development

Successfully merging this pull request may close these issues.

[Bug]: qwen2.5-vl-72b oom in 4 A100 in 0.8.3
[Bug]: Severe OOM in 0.8.3 (ok in 0.7.2)
[Bug]: VLLM 0.8.2 OOM error (No error in 0.7.3 version)
