[TRTLLM-5881] feat: Integrate TRT-LLM Gen FP4 block scale MoE with Pytorch workflow kernel autotuner #5764
Conversation
Force-pushed from 49a62ab to aa849a3.
Pull Request Overview
Adds support for autotuning the FP4 block-scale Mixture-of-Experts kernel in the PyTorch workflow, along with corresponding tests.
- Introduce `TestMoeFp4` to cover FP4 MoE with and without autotune.
- Implement `FP4BlockScaleMoERunner` in Python for custom-op registration, conforming to the `TunableRunner` interface.
- Refactor the C++ kernel entrypoint to a class-based runner and expose it to the autotuner.
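The runner-plus-cache-key pattern described above can be sketched as follows. This is a hypothetical, heavily simplified illustration: the names `SimpleMoERunner`, `MoECacheKey`, and `get_valid_tactics` stand in for the real `FP4BlockScaleMoERunner` / `TunableRunner` machinery, which wraps the C++ kernel runner rather than plain Python arithmetic.

```python
# Hypothetical sketch of a tunable-runner pattern: a hashable key that
# identifies a kernel configuration, plus a runner that enumerates valid
# tactics and executes the chosen one. Not the actual TRT-LLM API.
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class MoECacheKey:
    # Every field that changes which tactics are valid must be part of
    # the key, or the autotuner may reuse a tactic for the wrong config.
    num_experts: int
    top_k: int
    hidden_size: int

class SimpleMoERunner:
    """Illustrative runner: enumerates tactics and runs the chosen one."""

    def __init__(self, key: MoECacheKey):
        self.key = key

    def get_valid_tactics(self) -> List[int]:
        # A real runner would query the kernel library for configurations
        # valid under self.key; here we return dummy tactic ids.
        return [0, 1, 2]

    def forward(self, x, tactic: int = -1):
        # tactic == -1 means "use the default/fallback configuration",
        # which here is the identity transform.
        return [v * (tactic + 2) for v in x]

runner = SimpleMoERunner(MoECacheKey(num_experts=8, top_k=2, hidden_size=128))
print(runner.get_valid_tactics())        # [0, 1, 2]
print(runner.forward([1, 2], tactic=0))  # [2, 4]
print(runner.forward([1, 2]))            # [1, 2] (fallback)
```

The autotuner can then time each tactic from `get_valid_tactics` and cache the winner under the runner's key.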
Reviewed Changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| tests/unittest/_torch/thop/test_moe.py | Add `TestMoeFp4` with `test_autotune` and `test_no_autotune` |
| tensorrt_llm/_torch/custom_ops/trtllm_gen_custom_ops.py | Add `FP4BlockScaleMoERunner`, data class, and custom-op hook |
| cpp/tensorrt_llm/thop/fp4BlockScaleMoe.cpp | Rename function, wrap in `FP4BlockScaleMoERunner` class |
Comments suppressed due to low confidence (3)
tests/unittest/_torch/thop/test_moe.py:972
- Ensure that the `autotune` context manager is imported in this test file; otherwise, this line will raise a `NameError` at runtime.
    with autotune(use_autotune):
cpp/tensorrt_llm/thop/fp4BlockScaleMoe.cpp:28
- This references the FP8 namespace (`trtllmGenFp8BlockScaleMoe`) instead of the FP4 one; it should use the FP4 kernel runner to avoid linking the wrong implementation.
    using MoeRunnerType = tensorrt_llm::kernels::trtllmGenFp8BlockScaleMoe::MoE::Runner;
tensorrt_llm/_torch/custom_ops/trtllm_gen_custom_ops.py:79
- The cache key omits `n_group` and `topk_group`, which also influence valid tactics; include these fields in `__hash__` and `__eq__` to prevent incorrect cache hits.
    ))
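The failure mode this review comment describes can be demonstrated in isolation. The sketch below is illustrative (the field names follow the comment above, but the classes are hypothetical, not the PR's actual data class): when `__eq__`/`__hash__` omit fields that affect tactic validity, two distinct configurations collide to one cache entry.

```python
# Demonstration of the cache-key bug: BadKey excludes n_group/topk_group
# from __eq__/__hash__ (dataclasses derive __hash__ from the compared
# fields when frozen=True), so different configs collide in the cache.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class BadKey:
    num_experts: int
    top_k: int
    n_group: int = field(compare=False)    # excluded from __eq__/__hash__
    topk_group: int = field(compare=False)

@dataclass(frozen=True)
class GoodKey:
    num_experts: int
    top_k: int
    n_group: int      # included: distinct configs get distinct keys
    topk_group: int

# Two configurations that may need different tactics...
a, b = BadKey(8, 2, 1, 1), BadKey(8, 2, 4, 2)
print(a == b, hash(a) == hash(b))  # True True -> wrong cache hit

c, d = GoodKey(8, 2, 1, 1), GoodKey(8, 2, 4, 2)
print(c == d)                      # False -> separate cache entries
```

A `frozen=True` dataclass with all tuning-relevant fields compared is a simple way to get a correct `__hash__`/`__eq__` pair for free.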
/bot run
PR_Github #11171 [ run ] triggered by Bot
PR_Github #11171 [ run ] completed with state
…ommits (each commit Signed-off-by: Dom Brown <[email protected]>):
- Add new C++ wrapper runner and use that instead
- Using new python runner
- Adds autotune
- Ensure cache key reuse
- Structure tests such that all are autotune by default; run one case of non-autotune to test fallback tactic selection
Force-pushed from aa849a3 to dfb14d2.
/bot run
PR_Github #11178 [ run ] triggered by Bot
PR_Github #11178 [ run ] completed with state
…torch workflow kernel autotuner (NVIDIA#5764) Signed-off-by: Dom Brown <[email protected]> Signed-off-by: Yuxin <[email protected]>
Description
Adds autotune support for the FP4 block-scale MoE.
The changes to the test are an attempt to avoid unnecessary duplication of test cases in CI. To that end, everything is tested with autotune by default, and one non-autotune case is added to ensure that default/fallback config selection still works.
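The test structure described above can be sketched as follows. This is a hypothetical, simplified illustration: the `autotune` context manager here is a dummy placeholder for the real TRT-LLM one, and `run_moe` stands in for the actual kernel test body.

```python
# Sketch of the test layout: the full sweep runs with autotune enabled,
# plus one extra case with autotune disabled to exercise the fallback
# tactic-selection path. `autotune` is a stand-in, not the real API.
from contextlib import contextmanager

@contextmanager
def autotune(enable: bool = True):
    # Placeholder: the real context manager toggles tactic profiling.
    yield enable

def run_moe(use_autotune: bool) -> str:
    with autotune(use_autotune) as enabled:
        return "tuned" if enabled else "fallback"

# Full parameter sweep uses autotune; one extra case covers the fallback.
results = [run_moe(True) for _ in range(3)] + [run_moe(False)]
print(results)  # ['tuned', 'tuned', 'tuned', 'fallback']
```

Running the sweep once (tuned) plus a single untuned case keeps CI time roughly flat while still covering both code paths.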
Test Coverage
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.

run [--disable-fail-fast --skip-test --stage-list "A10-1, xxx" --gpu-type "A30, H100_PCIe" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-[Post-Merge]-1, xxx"]

Launch build/test pipelines. All previously running jobs will be killed.

- --disable-fail-fast (OPTIONAL): Disable fail fast on build/tests/infra failures.
- --skip-test (OPTIONAL): Skip all test stages, but still run build stages, package stages, and sanity check stages. Note: does NOT update GitHub check status.
- --stage-list "A10-1, xxx" (OPTIONAL): Only run the specified test stages. Examples: "A10-1, xxx". Note: does NOT update GitHub check status.
- --gpu-type "A30, H100_PCIe" (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: does NOT update GitHub check status.
- --only-multi-gpu-test (OPTIONAL): Only run the multi-GPU tests. Note: does NOT update GitHub check status.
- --disable-multi-gpu-test (OPTIONAL): Disable the multi-GPU tests. Note: does NOT update GitHub check status.
- --add-multi-gpu-test (OPTIONAL): Force run the multi-GPU tests. Will also run the L0 pre-merge pipeline.
- --post-merge (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- --extra-stage "H100_PCIe-[Post-Merge]-1, xxx" (OPTIONAL): Run the ordinary L0 pre-merge pipeline and the specified test stages. Examples: --extra-stage "H100_PCIe-[Post-Merge]-1, xxx".

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md.

kill

Kill all running builds associated with the pull request.

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: this is dangerous, since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: this is dangerous, since lack of user care and validation can cause the top of tree to break.
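For reference, typical invocations are posted as plain PR comments. The stage and GPU names below are illustrative examples taken from the option descriptions above, not commands tied to this PR:

```text
/bot run
/bot run --disable-fail-fast --gpu-type "H100_PCIe"
/bot skip --comment "Docs-only change; no test impact"
/bot reuse-pipeline
```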