Conversation

@tdoublep
Member

Fixes #6408 ([Bug]: samplers/test_logprobs.py fail on H100)

This PR changes the precision in tests/samplers/test_logprobs.py from half to float.

This is needed because the test compares the actual logprob values against the equivalent outputs from HF. There is precedent for doing this in other tests (see, e.g., here or here).

This change ensures that the test does not fail on an H100 GPU.
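
For intuition, here is a hedged, self-contained sketch (illustrative, not the actual test) of why an fp16 pipeline can fail a value-level comparison against an fp32 reference:

```python
import torch

# Illustrative only: quantizing logits to fp16 can shift log-probs by
# more than a tight test tolerance, which is why the test compares
# against the HF reference in full precision.
torch.manual_seed(0)
logits = torch.randn(8, 32000)                             # fake vocab-sized logits
ref = torch.log_softmax(logits, dim=-1)                    # fp32 reference (stands in for HF)
approx = torch.log_softmax(logits.half().float(), dim=-1)  # simulate an fp16 pipeline
print(torch.allclose(ref, approx, atol=1e-3))              # likely False: fp16 error exceeds 1e-3
```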

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, they only trigger the fastcheck CI, which consists of a small but essential subset of tests that quickly catch errors, with the flexibility to run extra individual tests on top (you can do this by unblocking test steps in the Buildkite run).

A full CI run is still required to merge this PR, so once it is ready to go, please make sure to run it. If you need all test signals in between PR commits, you can trigger a full CI run as well.

To run full CI, you can do one of these:

  • Comment /ready on the PR
  • Add ready label to the PR
  • Enable auto-merge

🚀

@tdoublep
Member Author

/ready

@github-actions github-actions bot added the "ready" label (ONLY add when PR is ready to merge / full CI is needed) Jul 13, 2024
@simon-mo simon-mo enabled auto-merge (squash) July 13, 2024 04:47
@tdoublep
Member Author

tdoublep commented Jul 13, 2024

The CI failure looks real; I can reproduce it locally on an L4 GPU:

```
triton.runtime.autotuner.OutOfResources: out of resource: shared memory, Required: 132096, Hardware limit: 101376. Reducing block sizes or `num_stages` may help.
```

Haven't seen that before...
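
For context, a quick check on the numbers in the error message (assuming the tile's shared-memory footprint scales roughly linearly with block size; illustrative arithmetic, not kernel code):

```python
# Numbers taken from the Triton OutOfResources error above.
required, limit = 132096, 101376   # bytes of shared memory
print(required > limit)            # True: the fp32 tile does not fit on the L4
print(required // 2 <= limit)      # True: halving the block size would fit
```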

auto-merge was automatically disabled July 13, 2024 17:33

Head branch was pushed to by a user without write access

@tdoublep
Member Author

tdoublep commented Jul 13, 2024

I was able to solve the error by halving the number of blocks in the prefix prefill kernel when torch.float32 is used. This should not affect normal runtime behaviour when using torch.float16, but allows us to run CI tests in torch.float32 on L4 GPUs.
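
A minimal sketch of the idea (hypothetical helper; names are illustrative, not vLLM's actual kernel code): since fp32 elements are twice as wide as fp16, halving the Triton block size for fp32 roughly halves the shared-memory footprint while leaving the fp16 fast path untouched.

```python
import torch

def pick_block_size(dtype: torch.dtype, base_block: int = 128) -> int:
    # fp32 tiles need ~2x the shared memory of fp16 tiles of the same
    # block size, so halve the block for fp32 to stay under the hardware
    # limit (e.g. 101376 bytes on an L4) without touching the fp16 path.
    if dtype is torch.float32:
        return base_block // 2
    return base_block

# Usage sketch: BLOCK would then be passed to the prefix-prefill kernel.
BLOCK = pick_block_size(torch.float32)  # -> 64
```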

Signed-off-by: Thomas Parnell <[email protected]>
@tdoublep
Member Author

@simon-mo auto-merge got disabled on this one, but it should be good to go now.

@mgoin mgoin merged commit 4ef95b0 into vllm-project:main Jul 15, 2024
xjpang pushed a commit to xjpang/vllm that referenced this pull request Jul 24, 2024
Alvant pushed a commit to compressa-ai/vllm that referenced this pull request Oct 26, 2024
LeiWang1999 pushed a commit to LeiWang1999/vllm-bitblas that referenced this pull request Mar 26, 2025