[Minor] Change warning->warning_once in preprocess #26455
Conversation
Signed-off-by: Zhuohan Li <[email protected]>
Code Review

This pull request aims to reduce log spam by switching to logger.warning_once. While the intent is good and consistent with other parts of the code, the underlying warning_once implementation is flawed and will cause the application to crash with a TypeError. I've left detailed comments explaining the issue and suggesting a reversion to logger.warning to prevent these crashes. This is a critical issue that should be addressed before merging.
```diff
 def get_bos_token_id(self) -> Optional[int]:
     if self.tokenizer is None:
-        logger.warning(
+        logger.warning_once(
```
While this change aligns with other parts of the file, logger.warning_once will unfortunately cause a runtime TypeError. The underlying implementation in vllm/logger.py uses @lru_cache on a function that takes the logger instance as its first argument. Since logger instances are not hashable, this will cause a crash when this warning is triggered.

This change, intended to reduce log spam, introduces a crash. The *_once logging functionality appears to be broken throughout the codebase and needs to be fixed centrally in vllm/logger.py.

To avoid introducing a crash, I recommend reverting this to logger.warning until the underlying issue with warning_once is resolved.
Suggested change:
```diff
-        logger.warning_once(
+        logger.warning(
```
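To make the reported failure mode concrete, here is a minimal, self-contained sketch. It is not vLLM code: `UnhashableLogger` and `_warn_once` are hypothetical stand-ins, and the sketch only assumes the comment's premise that an `@lru_cache`-decorated helper receives an unhashable logger instance as its first argument.

```python
from functools import lru_cache


class UnhashableLogger:
    """Hypothetical stand-in for a logger whose class disables hashing."""

    __hash__ = None  # makes every instance unhashable

    def warning(self, msg: str) -> None:
        print("WARNING:", msg)


@lru_cache
def _warn_once(logger, msg: str) -> None:
    # lru_cache must hash *every* argument to build its cache key,
    # including the logger instance itself.
    logger.warning(msg)


log = UnhashableLogger()
try:
    _warn_once(log, "tokenizer is None; returning None for BOS token id")
except TypeError as exc:
    print(f"crashed as described: {exc}")  # unhashable type: 'UnhashableLogger'
```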
```diff
 def get_eos_token_id(self) -> Optional[int]:
     if self.tokenizer is None:
-        logger.warning(
+        logger.warning_once(
```
Same issue as in get_bos_token_id above: the @lru_cache-based warning_once will raise a runtime TypeError when this warning is triggered. I recommend reverting to logger.warning until warning_once is fixed centrally in vllm/logger.py.
Suggested change:
```diff
-        logger.warning_once(
+        logger.warning(
```
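For reference, here is one possible shape of the central fix the comments call for: cache only the (hashable) message string rather than the logger instance. This is an illustrative sketch, not vLLM's actual vllm/logger.py implementation; the function name and module-level set are assumptions.

```python
import logging

# Module-level registry of messages that have already been emitted.
_seen_messages: set[str] = set()


def warning_once(logger: logging.Logger, msg: str) -> None:
    """Emit each distinct warning message at most once per process."""
    if msg not in _seen_messages:
        _seen_messages.add(msg)
        logger.warning(msg)


logging.basicConfig(level=logging.WARNING)
demo_logger = logging.getLogger("demo")
warning_once(demo_logger, "tokenizer is None")  # logged
warning_once(demo_logger, "tokenizer is None")  # suppressed
```

Because the cache key never includes the logger object, hashability of the logger is irrelevant, at the cost of deduplicating across all loggers that share a message string.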
…to loader

* 'loader' of https://github.com/dsxsteven/vllm_splitPR: (778 commits)
  [torchao] Add support for ModuleFqnToConfig using regex (vllm-project#26001)
  Add: Support for multiple hidden layers in Eagle3 (vllm-project#26164)
  Enable `RMSNorm` substitution for Transformers backend (vllm-project#26353)
  [Model] Gemma3: Fix GGUF loading and quantization (vllm-project#26189)
  Bump Flashinfer to v0.4.0 (vllm-project#26326)
  Update Dockerfile and install runai-model-streamer[gcs] package (vllm-project#26464)
  [Core] Relax the LoRA max rank (vllm-project#26461)
  [CI/Build] Fix model nightly tests (vllm-project#26466)
  [Hybrid]: Decouple Kernel Block Size from KV Page Size (vllm-project#24486)
  [Core][KVConnector] Propagate all tokens on resumed preemptions (vllm-project#24926)
  [MM][Doc] Add documentation for configurable mm profiling (vllm-project#26200)
  [Hardware][AMD] Enable FlexAttention backend on ROCm (vllm-project#26439)
  [Bugfix] Incorrect another MM data format in vllm bench throughput (vllm-project#26462)
  [Bugfix] Catch and log invalid token ids in detokenizer #2 (vllm-project#26445)
  [Minor] Change warning->warning_once in preprocess (vllm-project#26455)
  [Bugfix] Set the minimum python version for gpt-oss (vllm-project#26392)
  [Misc] Redact ray runtime env before logging (vllm-project#26302)
  Separate MLAAttention class from Attention (vllm-project#25103)
  [Attention] Register FLASHMLA_SPARSE (vllm-project#26441)
  [Kernels] Modular kernel refactor (vllm-project#24812)
  ...
Signed-off-by: Zhuohan Li <[email protected]>
Signed-off-by: xuebwang-amd <[email protected]>
Purpose
Reduce log spamming by changing warning -> warning_once, similar to the rest of the file.
Essential Elements of an Effective PR Description Checklist
supported_models.md and examples for a new model.