Conversation

@littledgg
Contributor

Motivation

Under CUDA Graph capture, CUB’s workspace size query for radix sort in MoE dispatch may return 0 even when num_items > 0. This makes the allocated workspace size 0, and the subsequent self-check in run() fails with:

[Error][CubKeyValueSorter::run]
Error. The allocated workspace is too small to run this problem.
Expected workspace size of at least 1 but got problem size 0

This PR adds a defensive guard so that the first (size-query) call never produces a 0-size workspace in this scenario, preventing the error.

Modifications

In custom_ops/gpu_ops/moe/moe_dispatch.cu, we call sorter_.getWorkspaceSize(moe_topk * num_rows) (line 77) before allocating the CUB radix sort workspace. We observed that, when CUDA Graph is enabled, temp_storage_bytes may remain 0 after the first size query, although a subsequent size query (performed inside run() as a sanity check) may report 1.
This behavior is consistent with using CUB’s DeviceRadixSort::SortPairs size query with all data pointers set to nullptr. While this often works, it’s not guaranteed across CUDA/CUB versions or under graph capture; some paths may not write temp_storage_bytes in that case.
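The two-phase pattern in question can be sketched as follows. Here `sort_pairs_size_query` is a hypothetical stand-in for `cub::DeviceRadixSort::SortPairs` (the real API is templated and takes a CUDA stream); it exists only to illustrate the nullptr size-query convention, and its size formula is made up:

```cpp
#include <cstddef>

// Hypothetical stand-in for cub::DeviceRadixSort::SortPairs. A conforming
// implementation, when d_temp_storage is nullptr, writes the required
// workspace size into temp_storage_bytes and returns without sorting.
// The size formula below is made up for illustration.
int sort_pairs_size_query(void* d_temp_storage, size_t& temp_storage_bytes,
                          const int* d_keys_in, int* d_keys_out,
                          size_t num_items) {
  if (d_temp_storage == nullptr) {
    temp_storage_bytes = num_items * sizeof(int) + 256;
    return 0;
  }
  // ... the real sort would run here ...
  return 0;
}

// Phase 1 of the pattern: query with all data pointers null, as
// moe_dispatch.cu does before allocating the workspace.
size_t query_workspace_size(size_t num_items) {
  size_t temp_storage_bytes = 0;
  sort_pairs_size_query(nullptr, temp_storage_bytes, nullptr, nullptr,
                        num_items);
  return temp_storage_bytes;
}
```

Phase 2 would allocate `temp_storage_bytes` bytes and call the same entry point again with real device pointers; the bug report concerns the case where phase 1 returns with `temp_storage_bytes` untouched.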

Neither enqueuing getWorkspaceSize onto the stream nor passing actual non-null device pointers resolves this issue. At this point, it’s essentially confirmed that cub::DeviceRadixSort::SortPairs is not writing required_storage to 0; rather, it is not updating required_storage at all.

So we initialize required_storage to 1 so that, when SortPairs exhibits the anomalous behavior we observed (which, based on current evidence, occurs only in the case where SortPairs should have returned 1), the code path can still proceed safely. When SortPairs behaves normally, the initial value does not matter, because SortPairs always overwrites required_storage with the correct size.
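The guard can be sketched as below. The mock `size_query_under_capture` imitates the observed anomaly (it is not the actual CUB call), and `get_workspace_size_guarded` is an illustrative name, not the FastDeploy method:

```cpp
#include <cstddef>

// Mock of the observed anomaly: under CUDA Graph capture the size query
// returns without updating required_storage at all.
void size_query_under_capture(size_t num_items, size_t& required_storage) {
  (void)num_items;  // deliberately leaves required_storage untouched
}

size_t get_workspace_size_guarded(size_t num_items) {
  // Defensive guard: initialize to 1 (the minimum size the query should
  // have reported in the anomalous case) rather than 0, so the later
  // self-check in run() never sees a 0-byte workspace.
  size_t required_storage = 1;
  size_query_under_capture(num_items, required_storage);
  // Normal path: the query overwrites required_storage, so the initial
  // value is irrelevant. Anomalous path: required_storage stays 1.
  return required_storage;
}
```

In the real fix the same initialization sits at the top of `CubKeyValueSorter::getWorkspaceSize()`, ahead of the actual `cub::DeviceRadixSort::SortPairs` size query.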

Usage or Command

Accuracy Tests

Checklist

  • Add at least one tag in the PR title.
    • Tag list: [FDConfig], [APIServer], [Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • Format your code; run pre-commit before committing.
  • Add unit tests. Please write the reason in this PR if no unit tests.
  • Provide accuracy results.
  • If the current PR is submitting to the release branch, make sure the PR has been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

@paddle-bot

paddle-bot bot commented Nov 17, 2025

Thanks for your contribution!

@paddle-bot paddle-bot bot added the contributor External developers label Nov 17, 2025
@littledgg littledgg marked this pull request as ready for review November 19, 2025 09:24
Copilot AI review requested due to automatic review settings November 19, 2025 09:24
Copilot finished reviewing on behalf of littledgg November 19, 2025 09:27
Contributor

Copilot AI left a comment

Pull Request Overview

This PR fixes a bug where CUB's workspace size query returns 0 under CUDA Graph capture in MoE dispatch operations, causing downstream allocation failures. The fix implements a defensive initialization strategy to ensure a minimum workspace size.

  • Initialize required_storage to 1 instead of 0 in CubKeyValueSorter::getWorkspaceSize() to handle cases where CUB doesn't write to the variable under CUDA Graph capture
  • Add documentation explaining the undefined behavior risks of relying on adjacent memory regions
  • Reorder include statements alphabetically for consistency

Reviewed Changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.

File | Description
custom_ops/gpu_ops/moe/fused_moe_imp_op.h | Core fix: initialize the workspace size query variable to 1 to handle the CUDA Graph capture edge case; alphabetize includes
custom_ops/gpu_ops/moe/moe_dispatch.cu | Add a documentation comment explaining potential undefined behavior with workspace overflow


size_t getWorkspaceSize(const size_t num_key_value_pairs,
                        bool descending = false) {
  num_key_value_pairs_ = num_key_value_pairs;

Copilot AI Nov 19, 2025

[nitpick] Consider adding a brief inline comment explaining why required_storage is initialized to 1 instead of 0. This would help future maintainers understand this is a workaround for CUB's behavior under CUDA Graph capture, where cub::DeviceRadixSort::SortPairs may not write to required_storage at all. For example:

// Initialize to 1 as workaround: under CUDA Graph capture, CUB may not write 
// to required_storage, and 1 is the minimum expected size in that scenario.
size_t required_storage = 1;

This makes the defensive guard's purpose immediately clear at the point of initialization.

Suggested change
num_key_value_pairs_ = num_key_value_pairs;
num_key_value_pairs_ = num_key_value_pairs;
// Initialize to 1 as workaround: under CUDA Graph capture, CUB may not write
// to required_storage, and 1 is the minimum expected size in that scenario.

