
Conversation

@tongyuantongyu (Member) commented Aug 7, 2025

Summary by CodeRabbit

  • Refactor
    • Enhanced compatibility with various CUDA binding modules by updating import logic to support alternative import paths across multiple components.
    • Refined type hints and error checks for improved internal consistency.

No changes to user-facing features or functionality. These updates improve internal stability and maintainability.

Description

Fix a crash caused by the deprecations that landed in cuda-python==13.0:

<frozen importlib._bootstrap_external>:1297: FutureWarning: The cuda.cuda module is deprecated and will be removed in a future release, please switch to use the cuda.bindings.driver module instead.
<frozen importlib._bootstrap_external>:1297: FutureWarning: The cuda.cudart module is deprecated and will be removed in a future release, please switch to use the cuda.bindings.runtime module instead.
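
The fix applies a fallback import across the affected modules, preferring the new cuda.bindings layout. A minimal sketch of the pattern (the exact modules imported vary per file; the aliases keep existing call sites unchanged):

try:
    # cuda-python releases with the cuda.bindings layout
    from cuda.bindings import driver as cuda
    from cuda.bindings import runtime as cudart
except ImportError:
    # Fall back to the deprecated top-level modules on older releases
    from cuda import cuda, cudart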

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from the specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option is always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline, ensuring that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail-fast behavior on build/test/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub check status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force-run the multi-GPU tests in addition to the L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
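
For example, an illustrative invocation combining several of the options above (the stage and GPU names are the placeholders used in this help text, not real stage names):

/bot run --disable-fail-fast --stage-list "A10-PyTorch-1" --gpu-type "A30, H100_PCIe"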

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping tests without care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since reusing a pipeline without care and validation can break the top of tree.

@tongyuantongyu tongyuantongyu requested a review from a team as a code owner August 7, 2025 09:38
coderabbitai bot (Contributor) commented Aug 7, 2025

📝 Walkthrough

The changes update CUDA-related import statements across several modules to first attempt importing from cuda.bindings submodules (driver and runtime), falling back to previous import paths if unavailable. One function signature and internal type usage are updated to reflect the new import. No other code logic or public API is altered.

Changes

Cohort / File(s) Change Summary
CUDA Import Fallback Logic
tensorrt_llm/_ipc_utils.py, tensorrt_llm/_mnnvl_utils.py, tensorrt_llm/_torch/pyexecutor/py_executor.py, tensorrt_llm/auto_parallel/cluster_info.py, tensorrt_llm/runtime/generation.py, tensorrt_llm/runtime/multimodal_model_runner.py, tests/microbenchmarks/all_reduce.py, tests/microbenchmarks/build_time_benchmark.py, tests/unittest/trt/functional/test_allreduce_norm.py, tests/unittest/trt/functional/test_allreduce_prepost_residual_norm.py, tests/unittest/trt/functional/test_nccl.py, tests/unittest/trt/functional/test_pp_reduce_scatter.py
Updated imports to first try cuda.bindings.driver/cuda.bindings.runtime, with fallback to previous cuda/cudart imports.
CUDA Import Fallback Logic for NVRTC
cpp/kernels/fmha_v2/fmha_test.py, tests/unittest/utils/util.py
Updated imports to first try cuda.bindings.driver and cuda.bindings.nvrtc, falling back to cuda and nvrtc imports.
CUDA Driver Import Fallback in Tests
tests/integration/defs/sysinfo/get_sysinfo.py, tests/unittest/_torch/multi_gpu/test_lowprecision_allreduce.py
Changed import of CUDA driver to try cuda.bindings.driver first, fallback to cuda.
Function Signature Update
tensorrt_llm/_ipc_utils.py
Changed _raise_if_error type hint and error comparison from cudaError_t to cudart.cudaError_t.
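
For the _raise_if_error row above, a minimal sketch of the updated helper (the body is an assumption for illustration; only the cudart.cudaError_t hint and the error comparison are from the summary):

def _raise_if_error(error: cudart.cudaError_t) -> None:
    # Compare against the runtime error enum from the (possibly aliased)
    # cudart module imported via the fallback pattern shown earlier.
    if error != cudart.cudaError_t.cudaSuccess:
        raise RuntimeError(f"CUDA runtime API error: {error}")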

Sequence Diagram(s)

sequenceDiagram
    participant Module
    participant cuda.bindings
    participant cuda

    Module->>cuda.bindings: import driver/runtime/nvrtc
    alt Import succeeds
        Module->>Module: Use cuda.bindings.driver/runtime/nvrtc
    else Import fails
        Module->>cuda: import cuda/cudart/nvrtc
        Module->>Module: Use cuda/cudart/nvrtc
    end

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~7 minutes

Suggested reviewers

  • chzblych
  • litaotju
  • pcastonguay
  • HuiGao-NV

📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2ea5b3a and f328e64.

📒 Files selected for processing (9)
  • cpp/kernels/fmha_v2/fmha_test.py (1 hunks)
  • tests/integration/defs/sysinfo/get_sysinfo.py (1 hunks)
  • tests/microbenchmarks/build_time_benchmark.py (1 hunks)
  • tests/unittest/_torch/multi_gpu/test_lowprecision_allreduce.py (1 hunks)
  • tests/unittest/trt/functional/test_allreduce_norm.py (1 hunks)
  • tests/unittest/trt/functional/test_allreduce_prepost_residual_norm.py (1 hunks)
  • tests/unittest/trt/functional/test_nccl.py (1 hunks)
  • tests/unittest/trt/functional/test_pp_reduce_scatter.py (1 hunks)
  • tests/unittest/utils/util.py (1 hunks)
✅ Files skipped from review due to trivial changes (2)
  • tests/microbenchmarks/build_time_benchmark.py
  • cpp/kernels/fmha_v2/fmha_test.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.py: Python code should conform to Python 3.8+.
Indent Python code with 4 spaces. Do not use tabs.
Always maintain the namespace when importing in Python, even if only one class or function from a module is used.
Python filenames should use snake_case (e.g., some_file.py).
Python classes should use PascalCase (e.g., class SomeClass).
Python functions and methods should use snake_case (e.g., def my_awesome_function():).
Python local variables should use snake_case. Prefix k for variable names that start with a number (e.g., k_99th_percentile).
Python global variables should use upper snake_case and prefix G (e.g., G_MY_GLOBAL).
Python constants should use upper snake_case (e.g., MY_CONSTANT).
Avoid shadowing variables declared in an outer scope in Python.
Initialize all externally visible members of a Python class in the constructor.
For interfaces that may be used outside a Python file, prefer docstrings over comments.
Comments in Python should be reserved for code within a function, or interfaces that are local to a file.
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx.
Attributes and variables in Python can be documented inline; attribute docstrings will be rendered under the class docstring.
Avoid using reflection in Python when functionality can be easily achieved without it.
When using try-except blocks in Python, limit the except to the smallest set of errors possible.
When using try-except blocks to handle multiple possible variable types in Python, keep the body of the try as small as possible, using the else block to implement the logic.

Files:

  • tests/unittest/trt/functional/test_pp_reduce_scatter.py
  • tests/unittest/utils/util.py
  • tests/unittest/trt/functional/test_allreduce_prepost_residual_norm.py
  • tests/unittest/trt/functional/test_nccl.py
  • tests/unittest/_torch/multi_gpu/test_lowprecision_allreduce.py
  • tests/unittest/trt/functional/test_allreduce_norm.py
  • tests/integration/defs/sysinfo/get_sysinfo.py
**/*.{cpp,h,hpp,cc,cxx,cu,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code should contain an NVIDIA copyright header that includes the current year. This includes .cpp, .h, .cu, .py, and any other source files which are compiled or interpreted.

Files:

  • tests/unittest/trt/functional/test_pp_reduce_scatter.py
  • tests/unittest/utils/util.py
  • tests/unittest/trt/functional/test_allreduce_prepost_residual_norm.py
  • tests/unittest/trt/functional/test_nccl.py
  • tests/unittest/_torch/multi_gpu/test_lowprecision_allreduce.py
  • tests/unittest/trt/functional/test_allreduce_norm.py
  • tests/integration/defs/sysinfo/get_sysinfo.py
🧠 Learnings (3)
📚 Learning: in tensorrt-llm, test files (files under tests/ directories) do not require nvidia copyright headers...
Learnt from: galagam
PR: NVIDIA/TensorRT-LLM#6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.

Applied to files:

  • tests/unittest/utils/util.py
  • tests/unittest/trt/functional/test_allreduce_prepost_residual_norm.py
  • tests/unittest/trt/functional/test_nccl.py
  • tests/unittest/trt/functional/test_allreduce_norm.py
📚 Learning: in tensorrt-llm testing, it's common to have both cli flow tests (test_cli_flow.py) and pytorch api ...
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • tests/unittest/utils/util.py
📚 Learning: in tensorrt-llm, examples directory can have different dependency versions than the root requirement...
Learnt from: yibinl-nvidia
PR: NVIDIA/TensorRT-LLM#6506
File: examples/models/core/mixtral/requirements.txt:3-3
Timestamp: 2025-08-01T15:14:45.673Z
Learning: In TensorRT-LLM, examples directory can have different dependency versions than the root requirements.txt file. Version conflicts between root and examples dependencies are acceptable because examples are designed to be standalone and self-contained.

Applied to files:

  • tests/unittest/utils/util.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (7)
tests/unittest/trt/functional/test_pp_reduce_scatter.py (1)

24-27: LGTM! Correct migration to new CUDA bindings.

The try-except import pattern correctly handles the migration from deprecated cuda.cudart to cuda.bindings.runtime, with proper fallback for backward compatibility. The alias ensures no changes are needed in the rest of the codebase.

tests/unittest/trt/functional/test_allreduce_norm.py (1)

24-27: LGTM! Consistent migration pattern applied.

The import change correctly implements the migration from deprecated cuda.cudart to cuda.bindings.runtime with appropriate fallback mechanism.

tests/unittest/trt/functional/test_nccl.py (1)

24-27: LGTM! Proper migration implementation.

The try-except import pattern correctly handles the transition to the new CUDA bindings module structure while maintaining backward compatibility.

tests/unittest/trt/functional/test_allreduce_prepost_residual_norm.py (1)

24-27: LGTM! Consistent CUDA bindings migration.

The import modification correctly implements the migration pattern from deprecated cuda.cudart to cuda.bindings.runtime with proper fallback handling.

tests/integration/defs/sysinfo/get_sysinfo.py (1)

28-31: LGTM! Correct CUDA driver API migration.

The try-except import pattern properly handles the migration from deprecated cuda.cuda to cuda.bindings.driver, with appropriate fallback for backward compatibility. The as cuda alias ensures existing code continues to work without modification.

tests/unittest/utils/util.py (1)

13-17: LGTM! Proper migration to new CUDA binding imports.

The try-except import pattern correctly addresses the deprecation warnings by first attempting to import from the new cuda.bindings submodules and falling back to the legacy imports if unavailable. The alias driver as cuda maintains compatibility with existing code.
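
For the NVRTC-using files in this cohort, the same pattern plausibly extends to both modules (a sketch only; the per-file import order may differ):

try:
    from cuda.bindings import driver as cuda
    from cuda.bindings import nvrtc
except ImportError:
    from cuda import cuda, nvrtc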

tests/unittest/_torch/multi_gpu/test_lowprecision_allreduce.py (1)

39-42: LGTM! Consistent CUDA binding import migration.

The try-except pattern correctly migrates from the deprecated cuda.cuda module to cuda.bindings.driver while maintaining backward compatibility. The local scope of the import within the function is appropriate and the alias preserves existing API usage.



@tongyuantongyu (Member Author)

/bot run

coderabbitai bot (Contributor) left a comment


Actionable comments posted: 1

🔭 Outside diff range comments (1)
tensorrt_llm/runtime/multimodal_model_runner.py (1)

1-5: Add mandatory NVIDIA copyright header

Production source files must include the current-year NVIDIA copyright banner.
Please prepend the required header before any imports to stay compliant.

+# Copyright (c) 2025, NVIDIA CORPORATION.  All rights reserved.
+#
+# NVIDIA CORPORATION and its licensors retain all intellectual property
+# and proprietary rights in and to this software, related documentation
+# and any modifications thereto.  Any use, reproduction, disclosure or
+# distribution of this software and related documentation without an
+# express license agreement from NVIDIA CORPORATION is strictly prohibited.
+
 import json
 import os
 import sys
🧹 Nitpick comments (4)
tensorrt_llm/runtime/multimodal_model_runner.py (1)

17-20: Narrow the exception scope when falling back to legacy cuda.cudart

Catching a broad ImportError may mask unrelated import problems inside the
cuda.bindings.runtime package. Restrict the handler to
ModuleNotFoundError so only the absence of the new module triggers the
fallback.

-try:
-    from cuda.bindings import runtime as cudart
-except ImportError:
-    from cuda import cudart
+try:
+    from cuda.bindings import runtime as cudart
+except ModuleNotFoundError:  # fall back when bindings are unavailable
+    from cuda import cudart
tensorrt_llm/_mnnvl_utils.py (1)

25-28: Narrow the exception to ModuleNotFoundError and keep the original traceback for other import-time errors

Catching the broader ImportError masks failures that occur inside cuda.bindings.driver (e.g., missing symbols), silently falling back to the legacy path and making debugging painful.
Switch to ModuleNotFoundError so only the absence of the module triggers the fallback, and optionally log the fallback for traceability.

-try:
-    from cuda.bindings import driver as cuda
-except ImportError:
-    from cuda import cuda
+try:
+    from cuda.bindings import driver as cuda  # New package (cuda-python ≥13)
+except ModuleNotFoundError:
+    # Fallback for older cuda-python versions (<13)
+    from cuda import cuda
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)

15-18: Use ModuleNotFoundError for a precise fallback and add a short comment

Same argument as above – keep unexpected import failures visible. An inline comment also explains the dual-path logic to future maintainers.

-try:
-    from cuda.bindings import runtime as cudart
-except ImportError:
-    from cuda import cudart
+try:
+    from cuda.bindings import runtime as cudart  # Preferred import (cuda-python ≥13)
+except ModuleNotFoundError:
+    from cuda import cudart  # Legacy path (cuda-python <13)
tensorrt_llm/auto_parallel/cluster_info.py (1)

9-12: Tighten the fallback exception scope

Mirror the pattern used elsewhere so only a missing module triggers the fallback.

-try:
-    from cuda.bindings import runtime as cudart
-except ImportError:
-    from cuda import cudart
+try:
+    from cuda.bindings import runtime as cudart
+except ModuleNotFoundError:
+    from cuda import cudart
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c23e8e7 and 2ea5b3a.

📒 Files selected for processing (7)
  • tensorrt_llm/_ipc_utils.py (1 hunks)
  • tensorrt_llm/_mnnvl_utils.py (1 hunks)
  • tensorrt_llm/_torch/pyexecutor/py_executor.py (1 hunks)
  • tensorrt_llm/auto_parallel/cluster_info.py (1 hunks)
  • tensorrt_llm/runtime/generation.py (1 hunks)
  • tensorrt_llm/runtime/multimodal_model_runner.py (1 hunks)
  • tests/microbenchmarks/all_reduce.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py and **/*.{cpp,h,hpp,cc,cxx,cu,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md): the same Python coding-guideline and NVIDIA copyright-header instructions reproduced in the review details earlier in this thread.

Files:

  • tensorrt_llm/_mnnvl_utils.py
  • tensorrt_llm/runtime/multimodal_model_runner.py
  • tests/microbenchmarks/all_reduce.py
  • tensorrt_llm/auto_parallel/cluster_info.py
  • tensorrt_llm/_ipc_utils.py
  • tensorrt_llm/_torch/pyexecutor/py_executor.py
  • tensorrt_llm/runtime/generation.py
🧠 Learnings (5)
📚 Learning: in TensorRT-LLM, test files under tests/ directories do not require NVIDIA copyright headers (galagam, NVIDIA/TensorRT-LLM#6487, 2025-08-06; same learning as quoted above).

Applied to files:

  • tensorrt_llm/_mnnvl_utils.py
  • tensorrt_llm/runtime/multimodal_model_runner.py
  • tests/microbenchmarks/all_reduce.py
  • tensorrt_llm/auto_parallel/cluster_info.py
  • tensorrt_llm/_ipc_utils.py
  • tensorrt_llm/_torch/pyexecutor/py_executor.py
  • tensorrt_llm/runtime/generation.py
📚 Learning: in TensorRT-LLM, the examples directory can have different dependency versions than the root requirements.txt (yibinl-nvidia, NVIDIA/TensorRT-LLM#6506, 2025-08-01; same learning as quoted above).

Applied to files:

  • tensorrt_llm/_mnnvl_utils.py
  • tensorrt_llm/runtime/multimodal_model_runner.py
  • tests/microbenchmarks/all_reduce.py
  • tensorrt_llm/auto_parallel/cluster_info.py
  • tensorrt_llm/_ipc_utils.py
  • tensorrt_llm/_torch/pyexecutor/py_executor.py
  • tensorrt_llm/runtime/generation.py
📚 Learning: in tensorrt-llm's multimodal processing pipeline, shared tensor recovery using `from_shared_tensor()...
Learnt from: yechank-nvidia
PR: NVIDIA/TensorRT-LLM#6254
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:1201-1204
Timestamp: 2025-07-22T09:22:14.726Z
Learning: In TensorRT-LLM's multimodal processing pipeline, shared tensor recovery using `from_shared_tensor()` is only needed during the context phase. Generation requests reuse the already-recovered tensor data and only need to call `strip_for_generation()` to remove unnecessary multimodal data while preserving the recovered tensors. This avoids redundant tensor recovery operations during generation.

Applied to files:

  • tensorrt_llm/runtime/multimodal_model_runner.py
📚 Learning: in TensorRT-LLM testing, CLI flow tests and PyTorch API tests commonly coexist for the same model (moraxu, NVIDIA/TensorRT-LLM#6303, 2025-07-28; same learning as quoted above).

Applied to files:

  • tests/microbenchmarks/all_reduce.py
  • tensorrt_llm/runtime/generation.py
📚 Learning: applies to **/*.{cpp,h,hpp,cc,cxx,cu,py} : all tensorrt-llm open source software code should contain...
Learnt from: CR
PR: NVIDIA/TensorRT-LLM#0
File: CODING_GUIDELINES.md:0-0
Timestamp: 2025-08-06T21:22:55.018Z
Learning: Applies to **/*.{cpp,h,hpp,cc,cxx,cu,py} : All TensorRT-LLM Open Source Software code should contain an NVIDIA copyright header that includes the current year. This includes .cpp, .h, .cu, .py, and any other source files which are compiled or interpreted.

Applied to files:

  • tensorrt_llm/_ipc_utils.py
🔇 Additional comments (3)
tensorrt_llm/runtime/generation.py (1)

32-35: LGTM! Well-implemented migration pattern.

This change correctly implements the fallback pattern to address the CUDA binding package deprecations. The try-except block attempts the new recommended import path first and gracefully falls back to the legacy import, ensuring backward compatibility while supporting the new package structure.

tests/microbenchmarks/all_reduce.py (1)

21-24: LGTM! Proper fallback import mechanism for CUDA runtime bindings.

The try-except block correctly implements the migration from deprecated cuda.cudart to the new cuda.bindings.runtime module, while maintaining backward compatibility. This addresses the deprecation warnings mentioned in the PR objectives.

tensorrt_llm/_ipc_utils.py (1)

20-24: LGTM! Proper fallback import mechanism for CUDA bindings.

The try-except block correctly implements the migration from deprecated cuda.cuda and cuda.cudart modules to the new cuda.bindings.driver and cuda.bindings.runtime modules, while maintaining backward compatibility with the same interface names.

@tensorrt-cicd
Collaborator

PR_Github #14445 [ run ] triggered by Bot

@dcampora dcampora enabled auto-merge (squash) August 7, 2025 10:13
@tensorrt-cicd
Collaborator

PR_Github #14445 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10918 completed with status: 'FAILURE'

Signed-off-by: Yuan Tong <[email protected]>
@tongyuantongyu
Member Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #14479 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #14479 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10937 completed with status: 'SUCCESS'

@dcampora dcampora merged commit db8dc97 into NVIDIA:main Aug 7, 2025
5 checks passed
Shunkangz pushed a commit to hcyezhang/TensorRT-LLM that referenced this pull request Aug 8, 2025
chzblych pushed a commit to chzblych/TensorRT-LLM that referenced this pull request Aug 12, 2025
chzblych pushed a commit to chzblych/TensorRT-LLM that referenced this pull request Aug 12, 2025
chzblych added a commit to chzblych/TensorRT-LLM that referenced this pull request Aug 12, 2025
chzblych added a commit that referenced this pull request Aug 12, 2025
…6808)

Signed-off-by: Yiqing Yan <[email protected]>
Signed-off-by: Yanchao Lu <[email protected]>
Co-authored-by: Yiqing Yan <[email protected]>
@tongyuantongyu tongyuantongyu deleted the ytong/cuda_binding_new_name branch August 21, 2025 03:30
brb-nv pushed a commit to brb-nv/TensorRT-LLM that referenced this pull request Aug 22, 2025
…from main (NVIDIA#6808)

Signed-off-by: Yiqing Yan <[email protected]>
Signed-off-by: Yanchao Lu <[email protected]>
Co-authored-by: Yiqing Yan <[email protected]>