[https://nvbugs/5489015][fix] Support communicator split in MNNVL allreduce and fix the binding issues. #7387
base: main
Conversation
Signed-off-by: Shiyu Li <[email protected]>
📝 Walkthrough

Expands the McastGPUBuffer constructor to add splitColor and a device index, replaces global MPI use with per-group communicators (split by splitColor), updates tensor-getter signatures/types, and changes the Python/torch bindings and MNNVL AllReduce initialization to use topology-aware ranks.
Sequence Diagram(s)

```mermaid
sequenceDiagram
autonumber
participant Py as Python caller
participant Ops as _torch.distributed.ops
participant MPI as MPI Session
participant Group as Group MpiComm
participant Buf as McastGPUBuffer
note over Ops,MPI: Initialize topology-aware MNNVL AllReduce
Py->>Ops: init_allreduce(mapping, dtype)
Ops->>MPI: split(splitColor, groupRank)
MPI-->>Ops: Group (mGroupComm)
Ops->>Group: Barrier()
Ops->>Buf: new(buf_size, tp_size, tp_rank, split_color, device_idx, mn_nvlink)
Buf-->>Ops: instance
    Ops-->>Py: AllReduce (uses Buf)
```

```mermaid
sequenceDiagram
autonumber
participant Buf as McastGPUBuffer
participant Dev as McastDeviceMemory
participant Sess as MPI Session
participant G as Group MpiComm
note over Buf,Dev: Construction delegates to device memory which creates group comm
Buf->>Dev: ctor(bufSize, groupSize, groupRank, splitColor, deviceIdx, mnNvlink)
Dev->>Sess: session().split(splitColor, groupRank)
Sess-->>Dev: mGroupComm (G)
Dev->>G: allgather / bcast for allocations
    Dev-->>Buf: ready
```
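To make the split-by-color step in the diagrams concrete, here is a minimal runnable sketch using mpi4py. The C++ runtime does the equivalent via session().split(splitColor, groupRank); mpi4py is used here only to illustrate the semantics, and the four-ranks-per-group layout is an invented example.

```python
# Illustration only: ranks that pass the same color to Split() land in the same
# sub-communicator, ordered by key. This mirrors the split(splitColor, groupRank)
# step in the diagrams above but is not the TensorRT-LLM MpiComm API.
from mpi4py import MPI

world = MPI.COMM_WORLD
world_rank = world.Get_rank()

# Hypothetical grouping: every block of 4 consecutive world ranks forms one group.
split_color = world_rank // 4
group_rank = world_rank % 4

group_comm = world.Split(color=split_color, key=group_rank)

# Collect the world ranks that ended up in this group; they are not guaranteed to
# be contiguous in general, which is why the group communicator is queried rather
# than assuming a rank layout.
world_ranks_in_group = group_comm.allgather(world_rank)
print(f"world rank {world_rank}: color={split_color}, "
      f"group rank {group_comm.Get_rank()}/{group_comm.Get_size()}, "
      f"members={world_ranks_in_group}")
```

Run under mpirun with, say, 8 ranks to see two groups of four.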
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
/bot run --add-multi-gpu-test
PR_Github #17047 [ run ] triggered by Bot
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
tensorrt_llm/_torch/models/modeling_deepseekv3.py (1)
564-566: Fix Python 3.8 typing: replace PEP 585 generics.
This file targets py3.8+, but uses list[int]; that's 3.9+. Use typing.List[int] or add `from __future__ import annotations`.
Apply:
- all_rank_num_tokens: Optional[list[int]] = None,
+ all_rank_num_tokens: Optional[List[int]] = None,

And similarly at Line 999. Optionally add at the top of the file:
+from __future__ import annotations
Also applies to: 999-1001
cpp/tensorrt_llm/runtime/mcastDeviceMemory.cpp (2)
24-27: Missing standard headers for used symbols.
memcpy and std::set are used below, but <cstring> and <set> aren't included in this TU.
 #include <cstdint>
+#include <cstring>
+#include <set>
 #include <cuda_runtime_api.h>
158-169: Pointer arithmetic bug in allgather/memcpy (writes past buffer).
Using exphndl + mGroupRank * sizeof(CUmemFabricHandle) scales twice: exphndl is a CUmemFabricHandle*, so pointer arithmetic already advances by whole elements, and multiplying by sizeof again pushes the write far past the buffer. This corrupts memory and breaks allgather.
Apply:
- memcpy(exphndl + mGroupRank * sizeof(CUmemFabricHandle), &myhndl, sizeof(CUmemFabricHandle));
- mGroupComm.allgather(
-     exphndl + mGroupRank * sizeof(CUmemFabricHandle), exphndl, sizeof(CUmemFabricHandle), mpi::MpiType::kCHAR);
+ memcpy(&exphndl[mGroupRank], &myhndl, sizeof(CUmemFabricHandle));
+ mGroupComm.allgather(&exphndl[mGroupRank], exphndl, sizeof(CUmemFabricHandle), mpi::MpiType::kCHAR);
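The diff above is the fix; purely to spell out why the original expression overruns the buffer, here is the offset arithmetic, with the handle size as a placeholder since sizeof(CUmemFabricHandle) is implementation-defined:

```python
# Offset arithmetic behind the bug: when exphndl is a CUmemFabricHandle*,
# pointer arithmetic already advances by whole elements, so multiplying the
# rank by sizeof(CUmemFabricHandle) scales the offset a second time.
HANDLE_SIZE = 64   # placeholder for sizeof(CUmemFabricHandle) in bytes
group_rank = 3

intended_byte_offset = group_rank * HANDLE_SIZE              # &exphndl[group_rank]
buggy_byte_offset = group_rank * HANDLE_SIZE * HANDLE_SIZE   # exphndl + rank * sizeof(...)

print(intended_byte_offset)  # 192   -> inside the allgather buffer
print(buggy_byte_offset)     # 12288 -> far past the end of the buffer
```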
🧹 Nitpick comments (9)
cpp/tensorrt_llm/runtime/mcastDeviceMemory.h (1)
32-36: Doc fix: mnNvlink path description is inverted.
The comment says "uses IPC if mnNvlink is true, otherwise fabric," but the code uses fabric (CU_MEM_HANDLE_TYPE_FABRIC) when mnNvlink is true and NVLS IPC otherwise.

-//! This class uses IPC-based allocation if mnNvlink is true, otherwise it uses fabric allocation.
+//! When mnNvlink is true (multi-node), uses fabric allocation; otherwise, uses NVLS IPC for intra-node.

cpp/tensorrt_llm/runtime/mcastDeviceMemory.cpp (1)
66-72: Log label nit: clarify mnNvlink meaning.
The log prints "isMultiNode" but uses mIsMNNvlink. Rename the label to avoid confusion.

- "[McastDeviceMemory] World Rank: %u, Group Rank: %u, Group size: %u, GroupSplitColor: %u, isMultiNode: %d, "
+ "[McastDeviceMemory] World Rank: %u, Group Rank: %u, Group size: %u, GroupSplitColor: %u, mn_nvlink: %d, "

tensorrt_llm/_torch/distributed/ops.py (5)
10-10: Preserve module namespace in imports.
Import the module and use the qualified name, per repo guidelines.
Apply:
-from tensorrt_llm.bindings.internal.runtime import McastGPUBuffer
+from tensorrt_llm.bindings.internal import runtime as trtllm_runtime
@@
-    mcast_buffer = McastGPUBuffer(
+    mcast_buffer = trtllm_runtime.McastGPUBuffer(

Also applies to: 78-86
65-65: Remove unused env flag.
force_mn is unused; drop it to avoid confusion.
Apply:
- force_mn = os.environ.get("TRTLLM_FORCE_MNNVL_AR", "0") == "1"
If this is kept for future usage, add a TODO explaining intended behavior. If removed, also drop the now-unused os import.
100-103: Use the input tensor's device for buffer_flags.
This avoids divergence if local_rank ≠ the current CUDA device.
Apply:
- buffer_flags = torch.tensor([0, 2, 0, 0],
-     dtype=torch.uint32,
-     device=torch.device("cuda",
-         mapping.local_rank))
+ buffer_flags = torch.tensor([0, 2, 0, 0],
+     dtype=torch.uint32,
+     device=input.device)
307-311: Fix assert message formatting.
The message currently shows literal braces; use an f-string.
Apply:
- ), "MNNVL all reduce only supports dtype {MNNVLAllReduce.get_supported_dtypes()} and without cp." + ), f"MNNVL AllReduce only supports dtypes {MNNVLAllReduce.get_supported_dtypes()} and requires no CP."
1-1: Add NVIDIA copyright header.
This file lacks the standard header required by the repo's guidelines.
Apply:
+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+# http://www.apache.org/licenses/LICENSE-2.0
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.

cpp/tensorrt_llm/runtime/mcastGPUBuffer.h (2)
37-40: Doc param mismatch ("device" vs "deviceIdx").
Update the comment to reflect the new parameter name/type.
Apply:
- //! \param device The CUDA device for buffer allocation.
+ //! \param deviceIdx The CUDA device index for buffer allocation.
16-16: Add include guard per repo guideline.
Header guards are required (#pragma once is present, but guards are mandated).
Apply:
-#pragma once
+#pragma once
+#ifndef TRTLLM_MCASTGPUBUFFER_H
+#define TRTLLM_MCASTGPUBUFFER_H
@@
-} // namespace tensorrt_llm::runtime
+} // namespace tensorrt_llm::runtime
+
+#endif // TRTLLM_MCASTGPUBUFFER_H

Also applies to: 98-99
📜 Review details
📒 Files selected for processing (7)
- cpp/tensorrt_llm/nanobind/runtime/bindings.cpp (1 hunks)
- cpp/tensorrt_llm/pybind/runtime/bindings.cpp (1 hunks)
- cpp/tensorrt_llm/runtime/mcastDeviceMemory.cpp (7 hunks)
- cpp/tensorrt_llm/runtime/mcastDeviceMemory.h (3 hunks)
- cpp/tensorrt_llm/runtime/mcastGPUBuffer.h (4 hunks)
- tensorrt_llm/_torch/distributed/ops.py (5 hunks)
- tensorrt_llm/_torch/models/modeling_deepseekv3.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (5)
**/*.{cpp,cc,cxx,cu,h,hpp,hh,hxx,cuh}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.{cpp,cc,cxx,cu,h,hpp,hh,hxx,cuh}
: In C++ and CUDA files, closing braces of namespaces must include a trailing comment naming the namespace (e.g., } // namespace foo)
Prefer const or constexpr variables over #define for constants; variables not modified after initialization must be declared const
Avoid magic literals: except 0, nullptr, true, false, use named constants (e.g., constexpr) instead of inline numeric or string literals
Use Allman brace style; always brace bodies of if/else/switch/while/do/for; put the semicolon of empty loops on a new line
C++ filenames should be camelCase starting lowercase (e.g., thisIsAFilename.cpp) and case-insensitive unique within a build target
Type names are UpperCamelCase; local variables, methods, and namespaces are lowerCamelCase
Global non-magic-number variables: prefix g for non-static globals and s for static or anonymous-namespace globals (e.g., gFoo, sBar)
Locally visible static variables should be lowerCamelCase starting with 's' (e.g., static std::once_flag sFlag)
Member variables use mPrefix (e.g., mCount); public members may omit but using m is encouraged for clarity
Constants (enums, globals, static constants, and function-scope magic-number constants) use uppercase SNAKE_CASE with k prefix (e.g., kMAX_SIZE)
Avoid macros; if necessary, use UPPER_SNAKE_CASE for macro names
Run clang-format (LLVM style) before submitting; maximum line length is 120; use clang-format off/on only for justified exceptions
Use C++ comments (//); C-style /* */ only for special inline cases; prefer Doxygen comments: //! and //!<; full-sentence comments are capitalized and punctuated; document public APIs with Doxygen
Disable code with #if/#endif (possibly via a DEBUG macro); do not comment out code; avoid dead code blocks
Do not throw exceptions across library boundaries
Use the least-forceful cast; avoid C-style and functional casts (except void casts); do not remove const/volatile; void to T* via static_cast; reinterpret_cast only a...
Files:
cpp/tensorrt_llm/runtime/mcastDeviceMemory.h
cpp/tensorrt_llm/runtime/mcastGPUBuffer.h
cpp/tensorrt_llm/pybind/runtime/bindings.cpp
cpp/tensorrt_llm/nanobind/runtime/bindings.cpp
cpp/tensorrt_llm/runtime/mcastDeviceMemory.cpp
**/*.{hpp,h,hxx,hh}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Header include guards are required with macro name TRTLLM_<FILE_NAME_IN_CAPS> (no directories, no leading/trailing underscores)
Files:
cpp/tensorrt_llm/runtime/mcastDeviceMemory.h
cpp/tensorrt_llm/runtime/mcastGPUBuffer.h
**/*.{cpp,cc,cxx,cu,h,hpp,hh,hxx,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.{cpp,cc,cxx,cu,h,hpp,hh,hxx,cuh,py}
: Use spaces only; no tabs; indent with 4 spaces
Prepend NVIDIA copyright header (current year) to all source files (.cpp, .h, .cu, .py, etc.)
Files:
cpp/tensorrt_llm/runtime/mcastDeviceMemory.h
cpp/tensorrt_llm/runtime/mcastGPUBuffer.h
tensorrt_llm/_torch/models/modeling_deepseekv3.py
cpp/tensorrt_llm/pybind/runtime/bindings.cpp
tensorrt_llm/_torch/distributed/ops.py
cpp/tensorrt_llm/nanobind/runtime/bindings.cpp
cpp/tensorrt_llm/runtime/mcastDeviceMemory.cpp
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py
: Python code must target Python 3.8+
Indent Python with 4 spaces; no tabs
Preserve module namespaces when importing: from package.subpackage import foo; then call foo.SomeClass() instead of importing the class directly
Python naming: files snake_case; classes PascalCase; functions/methods snake_case; locals snake_case (prefix k_ when starting with a number); globals UPPER_SNAKE_CASE with G_ prefix; constants UPPER_SNAKE_CASE
Avoid shadowing outer-scope variables; initialize all externally visible members in __init__
Prefer docstrings for interfaces used outside a file; limit comments to function-internal or file-local interfaces
Use Google-style docstrings for classes and functions; document attributes/variables inline so Sphinx can render them
Avoid reflection when simpler alternatives exist; prefer explicit parameters and return dicts over locals()/dynamic tricks
In try/except, catch the narrowest exceptions possible; keep try bodies minimal and use else for the main logic when doing duck-typing checks
Files:
tensorrt_llm/_torch/models/modeling_deepseekv3.py
tensorrt_llm/_torch/distributed/ops.py
**/*.{cpp,cc,cxx,cu}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.{cpp,cc,cxx,cu}
: Prefer smart pointers; use std::unique_ptr for sole ownership, std::shared_ptr for shared ownership; avoid deprecated smart pointers
Do not use assignment in subexpressions (e.g., if (x = y)); avoid chained assignments (x = y = z)
Switch statements: provide cases for all enum values and omit default to catch new values; prohibit fall-through except between empty labels; terminate each case with break or throw; do not end a case with return; place break inside braces when using a compound statement
Files:
cpp/tensorrt_llm/pybind/runtime/bindings.cpp
cpp/tensorrt_llm/nanobind/runtime/bindings.cpp
cpp/tensorrt_llm/runtime/mcastDeviceMemory.cpp
🧠 Learnings (3)
📚 Learning: 2025-08-14T06:36:40.701Z
Learnt from: timlee0212
PR: NVIDIA/TensorRT-LLM#6886
File: tensorrt_llm/_torch/models/modeling_deepseekv3.py:0-0
Timestamp: 2025-08-14T06:36:40.701Z
Learning: In DeepSeek V3 model (tensorrt_llm/_torch/models/modeling_deepseekv3.py), the disagreement between AllReduce.__init__ guard and _compute_mlp_tp_size logic for MNNVL usage is expected by design. The AllReduce component and MLP TP-size computation intentionally use different criteria for MNNVL availability decisions.
Applied to files:
tensorrt_llm/_torch/models/modeling_deepseekv3.py
tensorrt_llm/_torch/distributed/ops.py
📚 Learning: 2025-08-19T12:45:11.997Z
Learnt from: amitz-nv
PR: NVIDIA/TensorRT-LLM#7033
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:0-0
Timestamp: 2025-08-19T12:45:11.997Z
Learning: In tensorrt_llm/_torch/pyexecutor/model_engine.py, DoRA (Delta Orthogonal Rank Adaptation) functionality was removed from the PyTorch flow to eliminate issues with inverted DoRA detection logic. The original is_dora condition was checking if scaling_vec_pointer == 0, which was potentially incorrect.
Applied to files:
tensorrt_llm/_torch/models/modeling_deepseekv3.py
📚 Learning: 2025-08-29T08:49:23.221Z
Learnt from: CR
PR: NVIDIA/TensorRT-LLM#0
File: CODING_GUIDELINES.md:0-0
Timestamp: 2025-08-29T08:49:23.221Z
Learning: Applies to **/*.cuh : CUDA headers (.cuh) should also follow the project include-guard convention TRTLLM_<FILE_NAME_IN_CAPS>
Applied to files:
cpp/tensorrt_llm/runtime/mcastDeviceMemory.cpp
🧬 Code graph analysis (3)
cpp/tensorrt_llm/runtime/mcastDeviceMemory.h (1)
  cpp/tensorrt_llm/runtime/mcastDeviceMemory.cpp (2)
    McastDeviceMemory (40-105)
    McastDeviceMemory (107-130)
cpp/tensorrt_llm/runtime/mcastGPUBuffer.h (2)
  cpp/tensorrt_llm/runtime/torchUtils.h (3)
    Device (72-138)
    DeviceType (117-127)
    ScalarType (82-97)
  cpp/tensorrt_llm/nanobind/common/customCasters.h (1)
    at (241-260)
tensorrt_llm/_torch/distributed/ops.py (3)
  cpp/tensorrt_llm/runtime/mcastGPUBuffer.h (1)
    tensorrt_llm (21-98)
  cpp/tensorrt_llm/runtime/mcastDeviceMemory.h (1)
    tensorrt_llm (27-122)
  tensorrt_llm/_utils.py (3)
    mpi_comm (475-476)
    dtype (935-936)
    dtype (943-953)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (11)
tensorrt_llm/_torch/models/modeling_deepseekv3.py (1)
748-755: Inter-node TP gating logic LGTM; matches PR intent.
The gcd-with-gpus_per_node clamp prevents inter-node TP for the MLP as intended for DSv3. Please ensure Mapping.gpus_per_node > 0 at runtime.
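As a rough illustration of what that clamp does (a simplified stand-in, not the actual _compute_mlp_tp_size implementation, which has additional cases):

```python
import math

def clamp_mlp_tp(tp: int, gpus_per_node: int) -> int:
    """Simplified illustration: if tp would span nodes, fall back to the largest
    size that divides both tp and gpus_per_node so the MLP TP group stays
    intra-node. Not the actual implementation."""
    if gpus_per_node and tp > gpus_per_node:
        return math.gcd(tp, gpus_per_node)
    return tp

print(clamp_mlp_tp(16, 8))  # 8 -> two nodes of 8 GPUs, MLP TP stays on one node
print(clamp_mlp_tp(12, 8))  # 4 -> gcd picks a size that divides both
print(clamp_mlp_tp(4, 8))   # 4 -> already intra-node, unchanged
```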
If there’s any path where gpus_per_node could be unset/0, guard it:
- if tp > self.mapping.gpus_per_node:
+ if self.mapping.gpus_per_node and tp > self.mapping.gpus_per_node:

cpp/tensorrt_llm/runtime/mcastDeviceMemory.h (3)
20-20: Include dependency is correct.
Adding mpiUtils.h here is necessary for MpiComm in the header.
99-100: Storing the group communicator in the class is appropriate.
Member initialization aligns with the constructor in the .cpp file.
46-48: Constructor signature change verified: all call sites in mcastGPUBuffer.h and the Python ops (ops.py) use the new (bufSize, groupSize, groupRank, splitColor, deviceIdx, mnNvlink) order.

cpp/tensorrt_llm/pybind/runtime/bindings.cpp (1)
436-440: Confirm the Python call site is updated with six arguments.
The Python binding in tensorrt_llm/_torch/distributed/ops.py must be updated to pass all six parameters (buf_size, group_size, group_rank, split_color, device_idx, mn_nvlink). Ensure the call around line 82 includes split_color, device_idx, and mn_nvlink.

cpp/tensorrt_llm/nanobind/runtime/bindings.cpp (1)
342-346: Nanobind ctor signature LGTM; resolves the PyTorch-type binding issue.
Removing at::Device from the API avoids nanobind limitations and aligns with pybind.
cpp/tensorrt_llm/runtime/mcastDeviceMemory.cpp (1)
212-219: World-rank set derivation LGTM; matches the per-group communicator design.
Using getWorldRanks(mGroupComm) avoids assuming contiguous ranks; good for a split communicator.
tensorrt_llm/_torch/distributed/ops.py (2)
83-85: Verify the device index source.
Using mapping.local_rank as deviceIdx assumes local_rank == CUDA device index across launchers/MIG. If that assumption can break, prefer torch.cuda.current_device() (or a Mapping device index) to avoid a mismatch.
Want me to scan the repo for a canonical “device index” source and generate a minimal patch to use it here and in buffer_flags below?
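A minimal sketch of the current-device alternative (illustrative only; whether Mapping should own this value is a separate question):

```python
import torch

# Sketch only: take the device index from the CUDA device that is actually
# current in this process instead of assuming local_rank maps 1:1 onto it
# (launchers, CUDA_VISIBLE_DEVICES remapping, or MIG can break that assumption).
device_idx = torch.cuda.current_device()
print(torch.device("cuda", device_idx))
```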
465-476: MNNVL enable/disable flow looks good.
Reasonable gating, try/except, and clear debug logs with a clean fallback.
cpp/tensorrt_llm/runtime/mcastGPUBuffer.h (2)
54-68: Tensor view creation path LGTM.
Bounds checks, element size computation, and the for_blob builder with target_device are correct and match the new constructor semantics.
Also applies to: 75-89
54-55: Confirm Python binding method names.
Python calls get_uc_buffer/get_mc_buffer; C++ exposes getUCBuffer/getMCBuffer. Ensure the binding exports the snake_case aliases, or adjust the Python to the CamelCase names.
I can generate a quick grep script to locate the binding definitions and verify the exported method names if helpful.
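Along those lines, a throwaway check could look like the following; the file paths come from the files list above, and the .def("...") pattern is an assumption about how the binding definitions are written:

```python
# Quick, throwaway check for which buffer-getter names the bindings export.
import pathlib
import re

BINDING_FILES = [
    "cpp/tensorrt_llm/pybind/runtime/bindings.cpp",
    "cpp/tensorrt_llm/nanobind/runtime/bindings.cpp",
]

for path in BINDING_FILES:
    text = pathlib.Path(path).read_text()
    # Match .def("name", ...) occurrences that mention a buffer getter.
    for match in re.finditer(r'\.def\(\s*"([A-Za-z_]*buffer[A-Za-z_]*)"', text, re.IGNORECASE):
        print(f"{path}: exports {match.group(1)!r}")
```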
PR_Github #17047 [ run ] completed with state
/bot run --add-multi-gpu-test
PR_Github #17282 [ run ] triggered by Bot
PR_Github #17282 [ run ] completed with state
Signed-off-by: Shiyu Li <[email protected]>
/bot run --add-multi-gpu-test
PR_Github #17288 [ run ] triggered by Bot
Force-pushed from e1e212e to 31999d0
PR_Github #17288 [ run ] completed with state
Maybe we can first address the comments in the other PR and merge that one first; then this one becomes a pure cherry-pick.
Description
This PR is a follow-up to #6886, which was a temporary fix for the hanging issue. This PR adds support for communicator split in the MNNVL allreduce, fixes the binding issues, and updates the _compute_mlp_tp_size function to avoid NCCL being used across nodes.

Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...
Provide a user friendly way for developers to interact with a Jenkins server.
Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.
run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]
Launch build/test pipelines. All previously running jobs will be killed.
--reuse-test (optional)pipeline-id
(OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test
(OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast
(OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test
(OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx"
(OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe"
(OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp"
(OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test
(OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test
(OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test
(OPTIONAL) : Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.

--post-merge
(OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"
(OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log
(OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug
(OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.
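For example, the run used earlier in this thread, `/bot run --add-multi-gpu-test`, launches the ordinary L0 pre-merge pipeline plus the multi-GPU stages; flags can be combined, e.g. `/bot run --stage-list "A10-PyTorch-1, xxx" --disable-fail-fast`.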
kill
Kill all running builds associated with pull request.
skip
skip --comment COMMENT
Skip testing for latest commit on pull request.
--comment "Reason for skipping build/test"
is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.reuse-pipeline
reuse-pipeline
Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.