[V1][Core] Fix memory issue with logits & sampling #14508
Merged
Changes from all commits (23 commits)
ywang96 08e1311 update
ywang96 e93aa14 update
ywang96 08f85bc Merge branch 'vllm-project:main' into fix-memory
ywang96 1ea2fa2 add note
ywang96 53b99c3 Merge branch 'vllm-project:main' into fix-memory
ywang96 23ab4ce Merge branch 'main' into fix-memory
ywang96 77946d0 Merge branch 'vllm-project:main' into fix-memory
ywang96 18b6354 remove spec decode
ywang96 a93cb87 Merge branch 'vllm-project:main' into fix-memory
ywang96 58b3a39 Merge branch 'vllm-project:main' into fix-memory
ywang96 1b04868 Merge branch 'vllm-project:main' into fix-memory
ywang96 1f688cb bypass
ywang96 19e66dc add fixme
ywang96 1e017e4 add try catch
ywang96 c6d39a5 Merge branch 'main' into fix-memory
ywang96 2bc44fb add bad_words
ywang96 8062d68 Merge branch 'vllm-project:main' into fix-memory
ywang96 6256bcf fix
ywang96 e52d22b Merge branch 'vllm-project:main' into fix-memory
ywang96 82c20b9 Fix capture sizes
ywang96 609d2e8 add assert
ywang96 ba08848 fix
ywang96 6d4f1f9 Merge branch 'vllm-project:main' into fix-memory
Could you please elaborate on this?
See the discussion here: https://vllm-dev.slack.com/archives/C087WBWC5AQ/p1741398800083509?thread_ts=1741386694.452939&cid=C087WBWC5AQ - TL;DR: empty_cache cannot be called when sleep mode is turned on.
Hmm... Why do we need empty_cache?
The difference here is that we never warmed up the sampler (in either V0 or V1), so the memory fragmentation issue was always there, just less pronounced in V0 (since the default batch size is 256). Now we're adding the sampler warmup in V1, but when we call sleep(), the memory buffer for the logits can't be cleared from the PyTorch caching allocator (the bug mentioned in this comment), so memory usage ends up a lot higher.
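The workaround this points toward can be sketched as a guard plus a defensive try/except; the function name and flag below are illustrative stand-ins, not vLLM's actual API.

```python
def maybe_empty_cache(sleep_mode_enabled: bool, empty_cache_fn) -> bool:
    """Release cached allocator blocks only when it is safe to do so.

    Illustrative sketch: when sleep mode is on, skip empty_cache entirely
    (the cached logits buffer must stay resident); otherwise attempt the
    call, tolerating allocator errors instead of crashing the engine.
    Returns whether the cache was actually released.
    """
    if sleep_mode_enabled:
        return False
    try:
        empty_cache_fn()  # stand-in for e.g. torch.cuda.empty_cache
        return True
    except RuntimeError:
        return False

calls = []
assert maybe_empty_cache(False, lambda: calls.append("freed")) is True
assert calls == ["freed"]
assert maybe_empty_cache(True, lambda: calls.append("freed")) is False
assert calls == ["freed"]  # not invoked while sleeping
```

The try/except mirrors the defensive pattern suggested by the "add try catch" and "bypass" commits above: skipping the release is preferable to failing mid-serving.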
@ywang96 Thanks for the explanation. Just to double-check: we don't want to call empty_cache anyway, because we intentionally keep the (max_num_reqs x vocab_size)-sized tensor reserved in the PyTorch allocator, right?
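The reservation idea can be illustrated with a small sketch (NumPy stands in for PyTorch here; the sizes and helper name are invented for illustration): warm up by allocating the largest logits buffer once, then serve smaller batches as views into it, so the allocator never sees a stream of differently sized requests that would fragment its cache.

```python
import numpy as np

# Hypothetical sizes for illustration only (not vLLM's defaults).
MAX_NUM_REQS = 8
VOCAB_SIZE = 16

# Warm-up: reserve the largest (max_num_reqs x vocab_size) logits buffer
# once, up front.
logits_buffer = np.empty((MAX_NUM_REQS, VOCAB_SIZE), dtype=np.float32)

def logits_for_batch(num_reqs: int) -> np.ndarray:
    # A view into the reserved buffer: no new allocation is requested.
    assert num_reqs <= MAX_NUM_REQS
    return logits_buffer[:num_reqs]

small = logits_for_batch(3)
assert small.shape == (3, VOCAB_SIZE)
assert small.base is logits_buffer  # a view, not a fresh allocation
```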
That is correct, though I do think there should be a better and cleaner fix for this to work with sleep mode in the long term. We should probably free the memory when sleep is called, then warm up the sampler again within wakeup, but this is currently blocked since we can't free the memory anyway.
Hmm... How is the logits tensor different from other intermediate activation tensors?
I don't understand why this specific tensor becomes a problem.
Because dummy_run doesn't include or activate the sampler's tensors; this is why we made dummy_sampler_run in the first place.
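That split can be sketched as two separate warm-up passes (simplified NumPy stand-ins; only the function names come from the discussion, the bodies are invented for illustration):

```python
import numpy as np

def dummy_run(max_num_tokens: int, hidden_size: int) -> np.ndarray:
    # Model-forward warm-up: exercises activation buffers, but never
    # materializes the (num_reqs, vocab_size) logits tensor.
    hidden = np.zeros((max_num_tokens, hidden_size), dtype=np.float32)
    return hidden

def dummy_sampler_run(max_num_reqs: int, vocab_size: int) -> np.ndarray:
    # Sampler warm-up: materializes the full-size logits tensor so its
    # allocation already exists before any real request arrives.
    logits = np.zeros((max_num_reqs, vocab_size), dtype=np.float32)
    return logits.argmax(axis=-1)  # greedy "sampling" as a stand-in

assert dummy_run(32, 8).shape == (32, 8)
assert dummy_sampler_run(8, 16).shape == (8,)
```

The point of the second pass is purely allocation: without it, the first real batch triggers the largest logits allocation at serving time, which is exactly the fragmentation scenario described above.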