Conversation

brb-nv (Collaborator) commented Aug 15, 2025

Description

This MR unwaives the tests for the Gemma3 27B model; they pass locally on H100.
It also waives the Gemma3 1B test on L40S only; the test should pass on other SKUs.
Gemma3 27B would also fail on L40S, but it would go OOM during weight loading and shouldn't be run there anyway.

Update after CI:
Tests pass in both the pre-merge and post-merge CI runs for this MR.

Test Coverage

$ pytest tests/integration/defs/accuracy/test_llm_api_pytorch.py::TestGemma3_27BInstruct::test_auto_dtype -s -v
$ pytest tests/integration/defs/test_e2e.py::test_ptp_quickstart_multimodal[gemma-3-27b-it-gemma/gemma-3-27b-it-image-True] -s -v
$ pytest tests/integration/defs/test_e2e.py::test_ptp_quickstart_multimodal[gemma-3-27b-it-gemma/gemma-3-27b-it-image-False] -s -v

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option is always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
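For example, an illustrative pre-merge run that disables fail-fast and adds one extra stage (the stage name is taken from this PR's own comments, not a recommendation) would be:

/bot run --disable-fail-fast --extra-stage "H100_PCIe-PyTorch-Post-Merge-1"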

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping without care and validation can break the top of tree.
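For example (the reason string here is illustrative):

/bot skip --comment "Docs-only change; no code paths affected"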

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since reusing a pipeline without care and validation can break the top of tree.

Summary by CodeRabbit

  • Tests

    • Removed an outdated waiver and moved several Gemma auto-dtype waivers to different CI tags/environments.
    • Added explicit kv-cache fraction and increased max sequence length flags for select Gemma multimodal test runs.
  • Chores

    • Routine maintenance of test configurations and waivers; no user-facing functionality changes.

coderabbitai bot (Contributor) commented Aug 15, 2025

📝 Walkthrough

Updated test waivers and test-suite settings: waiver entries/paths in tests/integration/test_lists/waives.txt were updated (one SKIP removed, several moved to full:L40S/...), KV-cache parameters were added in an accuracy test, and the multimodal quickstart test CLI now receives --kv_cache_fraction=0.5 and an increased max-seq-len flag for Gemma3-27B.

Changes

Cohort / File(s): Summary

  • Waiver updates (tests/integration/test_lists/waives.txt): Edited SKIP/waiver entries: removed one full:B200/... SKIP and moved/updated several Gemma3 auto-dtype SKIPs into full:L40S/... with adjusted nvbugs IDs. No test logic changed.
  • KvCache / LLM args in accuracy test (tests/integration/defs/accuracy/test_llm_api_pytorch.py): In TestGemma3_27BInstruct.test_auto_dtype, KvCacheConfig is instantiated with free_gpu_memory_fraction=0.5, and LLM is initialized with additional args max_batch_size=128 and max_seq_len=4096 (see the sketch after this list).
  • Multimodal quickstart CLI flags (tests/integration/defs/test_e2e.py): In test_ptp_quickstart_multimodal for model_name == "gemma-3-27b-it", appended --kv_cache_fraction=0.5 and --max_seq_len=1024 to the quickstart CLI invocation (alongside existing flags).
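To make the accuracy-test change concrete, here is a minimal Python sketch under stated assumptions: it uses the tensorrt_llm LLM API as described in the review comments below, and the model path and final task call are placeholders rather than the verbatim diff:

from tensorrt_llm import LLM
from tensorrt_llm.llmapi import KvCacheConfig

# Cap the KV cache at half of free GPU memory so the 27B weights fit alongside it.
kv_cache_config = KvCacheConfig(free_gpu_memory_fraction=0.5)

with LLM("google/gemma-3-27b-it",    # placeholder for the test's MODEL_PATH
         kv_cache_config=kv_cache_config,
         attn_backend="FLASHINFER",  # backend the test already uses for Gemma3
         max_batch_size=128,
         max_seq_len=4096) as llm:
    ...  # run the accuracy tasks against llm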

Sequence Diagram(s)

sequenceDiagram
    participant Runner as Test Runner
    participant Test as test_ptp_quickstart_multimodal
    participant CLI as Quickstart CLI
    participant Model as Gemma runtime

    Runner->>Test: run multimodal test for gemma-3-27b-it
    Test->>CLI: build command (image_format, attention_backend, disable_kv_cache_reuse, --kv_cache_fraction=0.5, --max_seq_len=1024)
    CLI->>Model: start with provided args
    Model-->>CLI: initialize kv-cache using free_gpu_memory_fraction=0.5 / apply seq len
    CLI-->>Test: exit status/results
    Test-->>Runner: report result

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~15 minutes

Suggested reviewers

  • venkywonka
  • EmmaQiaoCh
  • liji-nv


brb-nv (Collaborator, Author) commented Aug 15, 2025

/bot run --extra-stage "H100_PCIe-PyTorch-Post-Merge-1"

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
tests/integration/test_lists/waives.txt (1)

234-234: Optional: include a short rationale alongside the bug link

Adding a brief reason (e.g., “OOM on L40S”) helps triage without opening nvbugs, similar to other entries that include inline reasons.

For example:

-full:L40S/accuracy/test_llm_api_pytorch.py::TestGemma3_1BInstruct::test_auto_dtype SKIP (https://nvbugs/5375620)
+full:L40S/accuracy/test_llm_api_pytorch.py::TestGemma3_1BInstruct::test_auto_dtype SKIP (https://nvbugs/5375620 - OOM on L40S)
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 6037fe3 and f76b2c4.

📒 Files selected for processing (1)
  • tests/integration/test_lists/waives.txt (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • tests/integration/test_lists/waives.txt
🔇 Additional comments (1)
tests/integration/test_lists/waives.txt (1)

234-234: LGTM: 1B waiver correctly scoped to L40S with updated nvbugs reference

Scoping Gemma3_1BInstruct auto-dtype to L40S only aligns with the PR goal. The nvbugs link is present and consistent with the file’s conventions.

tensorrt-cicd (Collaborator)

PR_Github #15455 [ run ] triggered by Bot

tensorrt-cicd (Collaborator)

PR_Github #15455 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11644 completed with status: 'FAILURE'

brb-nv force-pushed the user/brb/unwaive-gemma3-tests-main branch 2 times, most recently from 20dd67c to 1dc8136, on August 15, 2025 at 18:04
brb-nv (Collaborator, Author) commented Aug 15, 2025

/bot run --extra-stage "H100_PCIe-PyTorch-Post-Merge-1"

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🔭 Outside diff range comments (1)
tests/integration/defs/test_e2e.py (1)

2276-2284: Confirm kv_cache_fraction wiring OK; waivers missing for Gemma3-27B on L40S — please add or confirm

Short summary:

  • quickstart_multimodal correctly consumes --kv_cache_fraction and passes it to setup_llm; setup_llm (quickstart_advanced) maps args.kv_cache_fraction → KvCacheConfig.free_gpu_memory_fraction.
  • I could not find a waiver that skips gemma-3-27b-it on L40S in tests/integration/test_lists/waives.txt — Gemma3-27B tests are present elsewhere, so add a waiver if you intend to skip it on L40S.

Locations checked:

  • examples/llm-api/quickstart_multimodal.py — sets args.disable_kv_cache_reuse = True and defaults args.kv_cache_fraction = 0.6, then calls setup_llm(args, ...) (consumes the arg).
  • examples/llm-api/quickstart_advanced.py — defines --kv_cache_fraction and in setup_llm builds KvCacheConfig(free_gpu_memory_fraction=args.kv_cache_fraction, ...).
  • tests/integration/defs/test_e2e.py — includes the gemma-3-27b-it param and the special Gemma3 VLM flags (image_format=pil, attention_backend=FLASHINFER, --disable_kv_cache_reuse, --kv_cache_fraction=0.5).
  • tests/integration/test_lists/waives.txt — no entry matching gemma-3-27b-it or TestGemma3_27BInstruct::test_auto_dtype (ran repo-wide search; no matches).
  • tests/integration/defs/test-db/l0_h100.yml — lists accuracy/test_llm_api_pytorch.py::TestGemma3_27BInstruct::test_auto_dtype (so the test exists in the database).

Action requested:

  • If Gemma3-27B should be skipped on L40S, add the appropriate SKIP entry to tests/integration/test_lists/waives.txt (mirroring the existing format, e.g. full:L40S/...::TestGemma3_27BInstruct::test_auto_dtype SKIP (reason)). If skipping is not intended, no change required.
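As a minimal sketch of the kv_cache_fraction wiring described in the bullets above (names follow the review's description of quickstart_advanced.py; treat it as illustrative, not the verbatim source):

import argparse
from tensorrt_llm.llmapi import KvCacheConfig

parser = argparse.ArgumentParser()
parser.add_argument("--kv_cache_fraction", type=float, default=0.6)
args = parser.parse_args(["--kv_cache_fraction=0.5"])  # value the e2e test appends

# setup_llm-style mapping: CLI fraction -> KV-cache free-memory fraction
kv_cache_config = KvCacheConfig(free_gpu_memory_fraction=args.kv_cache_fraction)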
🧹 Nitpick comments (3)
tests/integration/defs/accuracy/test_llm_api_pytorch.py (1)

798-803: Avoid magic number: extract 0.5 into a named constant

To keep this aligned with other tests and avoid scattering literals, consider a module-level constant and refer to it here.

Apply this minimal change in-place:

-            free_gpu_memory_fraction=0.5,
+            free_gpu_memory_fraction=KV_FREE_MEM_FRACTION,

And add near the top of this file (outside the selected range):

KV_FREE_MEM_FRACTION = 0.5  # fraction of device memory reserved for KV cache in tests
tests/integration/defs/test_e2e.py (2)

2280-2284: Prefer using the module constant instead of a literal

You already define _MEM_FRACTION_50 = 0.5 at Line 44. Use it here to avoid hardcoding.

-        cmd.append("--kv_cache_fraction=0.5")
+        cmd.append(f"--kv_cache_fraction={_MEM_FRACTION_50}")

1-2: Header year range likely needs 2025

This file has been modified this year; consider updating the SPDX copyright line to include 2025 for consistency with other updated files.

Example:

-# SPDX-FileCopyrightText: Copyright (c) 2022-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-FileCopyrightText: Copyright (c) 2022-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
📜 Review details

📥 Commits

Reviewing files that changed from the base of the PR and between 20dd67c and 1dc8136.

📒 Files selected for processing (3)
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py (1 hunks)
  • tests/integration/defs/test_e2e.py (1 hunks)
  • tests/integration/test_lists/waives.txt (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/integration/test_lists/waives.txt
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+
Python indentation: 4 spaces, no tabs
Maintain module namespace in imports (from package.subpackage import foo; then use foo.SomeClass())
Python file names use snake_case
Python class names use PascalCase
Python functions/methods and local variables use snake_case; variables starting with a number get k_ prefix (e.g., k_99th_percentile)
Global variables use G_ prefixed UPPER_SNAKE_CASE (e.g., G_MY_GLOBAL)
Constants use UPPER_SNAKE_CASE in Python
Avoid shadowing variables from outer scopes in Python
Initialize all externally visible members of a Python class in __init__
Prefer docstrings for interfaces used outside a file; comments for local code
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes/variables inline with short docstrings
Avoid reflection when simple alternatives exist (e.g., prefer explicit parameters over dict(**locals()))
In try/except, catch the narrowest exceptions possible
For duck-typing with try/except, keep try body minimal and put logic in else

Files:

  • tests/integration/defs/test_e2e.py
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • tests/integration/defs/test_e2e.py
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (2)
tests/integration/defs/accuracy/test_llm_api_pytorch.py (1)

798-803: LGTM: adding free_gpu_memory_fraction=0.5 is the right mitigation for Gemma3-27B OOMs

Matches the intent to constrain KV allocation while keeping reuse disabled and FlashInfer attention. This aligns with the e2e quickstart change for consistency across paths.

tests/integration/defs/test_e2e.py (1)

2276-2284: LGTM: pass --kv_cache_fraction=0.5 for gemma-3-27b-it quickstart

This mirrors the LLM API test’s KV fraction and the reuse-disable setting, reducing memory pressure while preserving correctness with the FLASHINFER backend.
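For reference, a hedged reconstruction of the resulting quickstart invocation: only flags named in this thread are shown, the exact spellings of the image-format and attention-backend flags are assumptions from the walkthrough, and the model-path argument is a placeholder:

$ python examples/llm-api/quickstart_multimodal.py \
    --model_dir <path/to/gemma-3-27b-it> \
    --image_format=pil \
    --attention_backend=FLASHINFER \
    --disable_kv_cache_reuse \
    --kv_cache_fraction=0.5 \
    --max_seq_len=1024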

tensorrt-cicd (Collaborator)

PR_Github #15470 [ run ] triggered by Bot

Signed-off-by: Balaram Buddharaju <[email protected]>
brb-nv force-pushed the user/brb/unwaive-gemma3-tests-main branch from 1dc8136 to cb0f020 on August 15, 2025 at 18:53
brb-nv (Collaborator, Author) commented Aug 15, 2025

/bot run --extra-stage "H100_PCIe-PyTorch-Post-Merge-1"

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🔭 Outside diff range comments (1)
tests/integration/defs/accuracy/test_llm_api_pytorch.py (1)

796-803: Add missing L40S waiver entry

The L40S waiver for TestGemma3_27BInstruct.test_auto_dtype was not found in tests/integration/test_lists/waives.txt. Please add the following line:

+ full:L40S/accuracy/test_llm_api_pytorch.py::TestGemma3_27BInstruct::test_auto_dtype

Additionally, verify that the E2E quickstart test for gemma-3-27b-it in tests/integration/defs/test_e2e.py includes both --kv_cache_fraction and --max_seq_len flags.

Files to update:

  • tests/integration/test_lists/waives.txt: add the waiver entry above
  • tests/integration/defs/test_e2e.py: ensure gemma-3-27b-it CLI invocation contains --kv_cache_fraction and --max_seq_len
🧹 Nitpick comments (1)
tests/integration/defs/accuracy/test_llm_api_pytorch.py (1)

805-811: Attn backend choice looks right; drop redundant cuda_graph_config=None and sanity-check max_batch_size=128

  • Using attn_backend="FLASHINFER" here makes sense for Gemma3 custom masks.
  • Passing cuda_graph_config=None is redundant (default is None). Consider removing for clarity.
  • max_batch_size=128 increases reserved memory. If not strictly needed for these accuracy tasks, consider lowering (e.g., 64) or add a brief comment justifying 128, to avoid pushing memory on borderline SKUs in other environments.

Apply this minimal cleanup:

 with LLM(self.MODEL_PATH,
          kv_cache_config=kv_cache_config,
          attn_backend="FLASHINFER",
-         cuda_graph_config=None,
          max_batch_size=128,
          max_seq_len=4096) as llm:
📜 Review details

📥 Commits

Reviewing files that changed from the base of the PR and between 1dc8136 and cb0f020.

📒 Files selected for processing (3)
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py (1 hunks)
  • tests/integration/defs/test_e2e.py (1 hunks)
  • tests/integration/test_lists/waives.txt (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • tests/integration/defs/test_e2e.py
  • tests/integration/test_lists/waives.txt

tensorrt-cicd (Collaborator)

PR_Github #15475 [ run ] triggered by Bot

tensorrt-cicd (Collaborator)

PR_Github #15470 [ run ] completed with state ABORTED

tensorrt-cicd (Collaborator)

PR_Github #15475 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #11656 completed with status: 'FAILURE'

brb-nv (Collaborator, Author) commented Aug 15, 2025

/bot run --extra-stage "H100_PCIe-PyTorch-Post-Merge-1"

tensorrt-cicd (Collaborator)

PR_Github #15478 [ run ] triggered by Bot

tensorrt-cicd (Collaborator)

PR_Github #15478 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11658 completed with status: 'SUCCESS'
Pipeline passed with automatically retried tests. Check the rerun report for details.

brb-nv merged commit 9505727 into NVIDIA:main on Aug 15, 2025
6 checks passed
dominicshanshan pushed several commits referencing this pull request to dominicshanshan/TensorRT-LLM on Aug 17-18, 2025 (Signed-off-by: Balaram Buddharaju <[email protected]>; Signed-off-by: Wangshanshan <[email protected]>).