Conversation

DomBrown (Collaborator) commented Jul 4, 2025

Description

Adds autotune support for the FP4 block-scale MoE kernel.
The test changes aim to avoid unnecessary duplication of test cases in CI: everything is tested with autotune enabled by default, plus a single non-autotune case to verify that default/fallback config selection still works (see the sketch below).
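A minimal sketch of that test layout, assuming autotune is the context manager the PR's test uses (the import path, input builder, op wrapper, and reference helper below are hypothetical placeholders, not the PR's actual code):

```python
import pytest
import torch

# Assumed import path for the autotune context manager; the PR's test
# calls it as `with autotune(use_autotune):`.
from tensorrt_llm._torch.autotuner import autotune


class TestMoeFp4:
    """Shape of the test layout described above (helper names are hypothetical)."""

    def _run_moe(self, use_autotune: bool, seq_len: int = 128):
        inputs = make_fp4_moe_inputs(seq_len)  # hypothetical input builder
        with autotune(use_autotune):
            out = run_fp4_blockscale_moe(*inputs)  # hypothetical op wrapper
        torch.testing.assert_close(out, reference_moe(*inputs))  # hypothetical reference

    @pytest.mark.parametrize("seq_len", [1, 128, 1024])  # hypothetical sweep
    def test_autotune(self, seq_len):
        # Default path: every parameter combination runs with autotune on.
        self._run_moe(use_autotune=True, seq_len=seq_len)

    def test_no_autotune(self):
        # Single extra case: default/fallback config selection must still work.
        self._run_moe(use_autotune=False)
```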

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--disable-fail-fast --skip-test --stage-list "A10-1, xxx" --gpu-type "A30, H100_PCIe" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-[Post-Merge]-1, xxx"]

Launch build/test pipelines. All previously running jobs will be killed.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests. Will also run L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-[Post-Merge]-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-[Post-Merge]-1, xxx".

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md.
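For example, to rerun the pipeline on specific GPU types with fail-fast disabled, using only the flags documented above:

/bot run --disable-fail-fast --gpu-type "A30, H100_PCIe"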

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping without careful user validation can break the top of tree.
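For example:

/bot skip --comment "Docs-only change, no functional code paths affected"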

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action also kills all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since reusing a stale pipeline without careful user validation can break the top of tree.

@DomBrown DomBrown self-assigned this Jul 4, 2025
@DomBrown DomBrown force-pushed the dev/autotune_fp4_blockscale_moe branch from 49a62ab to aa849a3 Compare July 7, 2025 16:32
@DomBrown DomBrown requested review from nekorobov, hyukn and Copilot July 7, 2025 16:34
@DomBrown DomBrown marked this pull request as ready for review July 7, 2025 16:34
@DomBrown DomBrown requested a review from a team as a code owner July 7, 2025 16:34
@DomBrown DomBrown requested a review from lucaslie July 7, 2025 16:34
Copilot AI (Contributor) left a comment
Pull Request Overview

Adds support for autotuning the FP4 block-scale Mixture-of-Experts kernel in the PyTorch workflow, along with corresponding tests.

  • Introduce TestMoeFp4 to cover FP4 MoE with and without autotune.
  • Implement FP4BlockScaleMoERunner in Python for custom-op registration and conform to the TunableRunner interface.
  • Refactor the C++ kernel entrypoint to a class-based runner and expose it for the autotuner.
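As a rough illustration of that shape (not the PR's actual code: the import path, constructor arguments, and the C++ binding calls below are assumptions), a TunableRunner-style wrapper pairs a tactic-enumeration hook with a forward that accepts a tactic index:

```python
from typing import List

import torch

# Assumed location of the autotuner interface; the overview above only says
# the runner "conforms to the TunableRunner interface".
from tensorrt_llm._torch.autotuner import TunableRunner


class FP4BlockScaleMoERunner(TunableRunner):
    """Python-side wrapper registered as a custom op (sketch)."""

    def __init__(self, num_experts: int, top_k: int):
        self.num_experts = num_experts
        self.top_k = top_k
        self._kernel = load_fp4_moe_kernel_runner()  # placeholder for the C++ class binding

    def get_valid_tactics(self, inputs: List[torch.Tensor]) -> List[int]:
        # Ask the kernel side which configs are valid for these shapes;
        # the autotuner benchmarks each returned index.
        return self._kernel.valid_configs(inputs)  # placeholder query

    def forward(self, inputs: List[torch.Tensor], tactic: int = -1) -> torch.Tensor:
        # tactic == -1 requests the default/fallback config, the path the
        # PR's non-autotune test exercises.
        return self._kernel.run(inputs, tactic)  # placeholder dispatch
```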

Reviewed Changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated no comments.

Files changed:
  • tests/unittest/_torch/thop/test_moe.py: Add TestMoeFp4 with test_autotune and test_no_autotune
  • tensorrt_llm/_torch/custom_ops/trtllm_gen_custom_ops.py: Add FP4BlockScaleMoERunner, data class, and custom-op hook
  • cpp/tensorrt_llm/thop/fp4BlockScaleMoe.cpp: Rename function, wrap in FP4BlockScaleMoERunner class
Comments suppressed due to low confidence (3)

tests/unittest/_torch/thop/test_moe.py:972

  • Ensure that the autotune context manager is imported in this test file; otherwise, this line will raise a NameError at runtime.
        with autotune(use_autotune):

cpp/tensorrt_llm/thop/fp4BlockScaleMoe.cpp:28

  • This references the FP8 namespace (trtllmGenFp8BlockScaleMoe) instead of the FP4 one; it should use the FP4 kernel runner to avoid linking the wrong implementation.
using MoeRunnerType = tensorrt_llm::kernels::trtllmGenFp8BlockScaleMoe::MoE::Runner;

tensorrt_llm/_torch/custom_ops/trtllm_gen_custom_ops.py:79

  • The cache key omits n_group and topk_group, which also influence valid tactics; include these fields in __hash__ and __eq__ to prevent incorrect cache hits.
        ))
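To illustrate the fix this comment asks for (field names other than n_group and topk_group are assumptions, as is the class name): declaring every tactic-affecting field on a frozen dataclass gives generated __hash__ and __eq__ that cover all of them:

```python
from dataclasses import dataclass


# frozen=True (with the default eq=True) makes the dataclass hashable, with
# __hash__ and __eq__ generated over every declared field, so no
# tactic-affecting field can be silently left out of the cache key.
@dataclass(frozen=True)
class FP4BlockScaleMoECacheKey:  # hypothetical name for the key dataclass
    num_experts: int
    top_k: int
    n_group: int      # fields the review asks to include:
    topk_group: int   # both influence which tactics are valid
```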

DomBrown (Collaborator, Author) commented Jul 7, 2025

/bot run

tensorrt-cicd (Collaborator):

PR_Github #11171 [ run ] triggered by Bot

tensorrt-cicd (Collaborator):

PR_Github #11171 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #8263 completed with status: 'FAILURE'

Commits:

  • …ommits (Signed-off-by: Dom Brown <[email protected]>)
  • Add new C++ wrapper runner and use that instead (Signed-off-by: Dom Brown <[email protected]>)
  • Using new python runner (Signed-off-by: Dom Brown <[email protected]>)
  • Adds autotune (Signed-off-by: Dom Brown <[email protected]>)
  • Ensure cache key reuse (Signed-off-by: Dom Brown <[email protected]>)
  • Structure tests such that all are autotune by default, run one case of non-autotune to test fallback tactic selection (Signed-off-by: Dom Brown <[email protected]>)
@DomBrown DomBrown force-pushed the dev/autotune_fp4_blockscale_moe branch from aa849a3 to dfb14d2 Compare July 7, 2025 19:35
DomBrown (Collaborator, Author) commented Jul 7, 2025

/bot run

tensorrt-cicd (Collaborator):

PR_Github #11178 [ run ] triggered by Bot

tensorrt-cicd (Collaborator):

PR_Github #11178 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #8269 completed with status: 'SUCCESS'
Pipeline passed with automatically retried tests. Check the rerun report for details.

@DomBrown DomBrown merged commit 3e3b176 into NVIDIA:main Jul 9, 2025
3 checks passed
@DomBrown DomBrown deleted the dev/autotune_fp4_blockscale_moe branch July 9, 2025 07:22
zhou-yuxin pushed a commit to zhou-yuxin/TensorRT-LLM that referenced this pull request Jul 15, 2025
…torch workflow kernel autotuner (NVIDIA#5764)

Signed-off-by: Dom Brown <[email protected]>
Signed-off-by: Yuxin <[email protected]>