Conversation

@mgoin mgoin commented Nov 24, 2025

Purpose

After triton_kernels landed as a default dependency in #28788, gpt-oss users on SM110 and SM120 started hitting issues (see #29317 and https://vllm-dev.slack.com/archives/C0990U53QBV/p1764009918189409).
This PR disables the triton_kernels path on those GPUs for now and falls back to Marlin.
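The fix described above amounts to gating the MXFP4 backend choice on the GPU's compute capability. A minimal sketch of that kind of check, assuming the function names and the capability-major values for SM110/SM120 are illustrative rather than vLLM's actual code:

```python
# Hypothetical sketch of the gating described in this PR -- the function
# names are illustrative, not vLLM's actual API.

def use_triton_kernels_for_mxfp4(capability_major: int) -> bool:
    """Only SM90 (capability major 9) and SM100 (capability major 10)
    take the triton_kernels path; everything else falls back."""
    return capability_major in (9, 10)

def select_mxfp4_backend(capability_major: int) -> str:
    # In this sketch, SM110/SM120 report capability major 11/12 and
    # therefore get routed to Marlin instead of triton_kernels.
    if use_triton_kernels_for_mxfp4(capability_major):
        return "triton_kernels"
    return "marlin"
```

The point of gating on an allowlist (9 and 10) rather than a denylist is that future, untested architectures also fall back to the known-good Marlin path by default.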

Test Plan

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

@mgoin mgoin changed the title Only use triton_kernels for MXFP4 on SM90 and SM100 [Bugfix] Only use triton_kernels for MXFP4 on SM90 and SM100 Nov 24, 2025
@mgoin mgoin added bug Something isn't working ready ONLY add when PR is ready to merge/full CI is needed gpt-oss Related to GPT-OSS models nvidia labels Nov 24, 2025

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request correctly restricts the use of Triton kernels for MXFP4 to SM90 and SM100 architectures, addressing issues on newer GPUs. The logic is sound, but I've pointed out a small redundancy in the conditional check that can be simplified for better code clarity and maintainability.

Signed-off-by: mgoin <[email protected]>
@varun-sundar-rabindranath

LGTM! Thanks for the fix @mgoin


@yewentao256 yewentao256 left a comment

LGTM, thanks for the work!

@github-project-automation github-project-automation bot moved this to In review in NVIDIA Nov 24, 2025
@github-project-automation github-project-automation bot moved this from To Triage to Ready in gpt-oss Issues & Enhancements Nov 24, 2025
@yewentao256 yewentao256 merged commit c17610e into vllm-project:main Nov 24, 2025
52 checks passed
@yewentao256 yewentao256 deleted the triton-kernels-skip-sm110-sm120 branch November 24, 2025 23:22
@github-project-automation github-project-automation bot moved this from In review to Done in NVIDIA Nov 24, 2025
bringlein pushed a commit to bringlein/vllm that referenced this pull request Nov 26, 2025
devpatelio pushed a commit to SumanthRH/vllm that referenced this pull request Nov 29, 2025

Projects

Status: Done
