
Conversation

tomeras91
Collaborator

@tomeras91 commented Jul 30, 2025

Summary by CodeRabbit

  • Tests
    • Re-enabled a previously skipped test for improved coverage.
    • Updated test prompts and reduced the maximum token count for sampling.
    • Enhanced test assertions with clearer and more descriptive error messages for easier debugging.

Description

This PR improves the stability of the Nemotron-H unittest that compares generation with and without CUDA graphs and the overlap scheduler.

The test generates greedily from a few prompts and compares the generated text and logits/logprobs when generating with and without CUDA graphs and the overlap scheduler.

Flakiness in the test was caused by generation steps where the top-2 most probable tokens had (nearly) equal probabilities, making the outcome of argmax() undefined and leading to cases where different tokens were selected. Once a single token in a generated sequence differs, all subsequent tokens differ as well (since they are generated conditioned on the previous, different token), and comparing logits becomes meaningless.
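This failure mode can be illustrated with a minimal sketch. The logit values below are made up for illustration and are not taken from the actual model:

```python
# Hypothetical logits for one greedy decoding step; the top-2 candidates
# (token ids 0 and 1) are almost exactly tied.
logits = [2.0, 2.0, -1.0]

def greedy_pick(step_logits):
    """argmax with first-index tie-breaking (as numpy/torch argmax behave)."""
    return max(range(len(step_logits)), key=lambda i: step_logits[i])

# With an exact tie, the first index wins...
token_without_cg = greedy_pick(logits)

# ...but a tiny numerical perturbation, e.g. from a different kernel path
# when CUDA graphs are enabled, is enough to flip the selected token.
perturbed = [logits[0], logits[1] + 1e-6, logits[2]]
token_with_cg = greedy_pick(perturbed)

print(token_without_cg, token_with_cg)  # prints: 0 1
```

From that step onward the two runs decode from different prefixes, which is why a single flipped token invalidates the rest of the comparison.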

Test stability was improved by choosing prompts for which the difference in probabilities between the top-2 most probable tokens is significant, so greedy sampling always yields the same outcome. In addition, test observability was improved by enhancing the assertion messages emitted on failure.
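The observability improvement amounts to failing with the offending prompt index in the message. A minimal sketch of the idea follows; the helper and variable names are hypothetical, not the ones in the test file:

```python
def assert_same_generations(texts_without_cg, texts_with_cg):
    """Compare generated texts pairwise and name the failing prompt.

    Hypothetical helper illustrating the improved diagnostics; the real
    test compares texts, logits, and logprobs in a similar per-prompt loop.
    """
    for i, (ref, out) in enumerate(zip(texts_without_cg, texts_with_cg)):
        assert ref == out, (
            f"Generated text mismatch for prompt {i}: "
            f"without CUDA graph={ref!r}, with CUDA graph={out!r}"
        )
```

A failure then reads like "Generated text mismatch for prompt 2: ...", pointing directly at the prompt to investigate instead of a bare equality failure.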

Test Coverage

test_modeling_nemotron_h.py::test_nemotron_h_cuda_graph_overlap_scheduler now has better stability.

…add better visibility to failures

Signed-off-by: Tomer Asida <[email protected]>
Contributor

coderabbitai bot commented Jul 30, 2025

📝 Walkthrough

The test function test_nemotron_h_cuda_graph_overlap_scheduler in the tests/unittest/_torch/modeling/test_modeling_nemotron_h.py file was updated by removing the skip decorator, modifying prompts, adjusting a parameter, and enhancing assertion diagnostics for improved test output clarity.

Changes

Cohort / File(s): Nemotron-H CUDA Graph Scheduler Test Enhancements
tests/unittest/_torch/modeling/test_modeling_nemotron_h.py
Change Summary: Removed @pytest.mark.skip to enable test execution, updated the prompt set, reduced max_tokens from 12 to 10, and improved assertion statements with detailed error messages including prompt indices.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~7 minutes

Suggested reviewers

  • HuiGao-NV
  • pcastonguay
  • Shixiaowei02


coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
tests/unittest/_torch/modeling/test_modeling_nemotron_h.py (1)

289-289: Fix line length violation.

The line exceeds the 120-character limit as flagged by static analysis.

-        # similar to other unittests comparing with / without CG, compare logits of first generation step (2nd generated token)
+        # similar to other unittests comparing with / without CG, compare logits of
+        # first generation step (2nd generated token)
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e7ae5e2 and 3b2dcaf.

📒 Files selected for processing (1)
  • tests/unittest/_torch/modeling/test_modeling_nemotron_h.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+.
Indent Python code with 4 spaces. Do not use tabs.
Always maintain the namespace when importing in Python, even if only one class or function from a module is used.
Python filenames should use snake_case (e.g., some_file.py).
Python classes should use PascalCase (e.g., class SomeClass).
Python functions and methods should use snake_case (e.g., def my_awesome_function():).
Python local variables should use snake_case. Prefix k for variable names that start with a number (e.g., k_99th_percentile).
Python global variables should use upper snake_case and prefix G (e.g., G_MY_GLOBAL).
Python constants should use upper snake_case (e.g., MY_CONSTANT).
Avoid shadowing variables declared in an outer scope in Python.
Initialize all externally visible members of a Python class in the constructor.
For interfaces that may be used outside a file, prefer docstrings over comments in Python.
Comments in Python should be reserved for code within a function, or interfaces that are local to a file.
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx.
Attributes and variables in Python can be documented inline; attribute docstrings will be rendered under the docstring for the class.
Avoid using reflection in Python when functionality can be easily achieved without reflection.
When using try-except blocks in Python, limit the except to the smallest set of errors possible.
When using try-except blocks to handle multiple possible variable types in Python, keep the body of the try as small as possible, using the else block to implement the logic.

Files:

  • tests/unittest/_torch/modeling/test_modeling_nemotron_h.py
**/*.{cpp,h,cu,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code should contain an NVIDIA copyright header that includes the current year. This includes .cpp, .h, .cu, .py, and any other source files which are compiled or interpreted.

Files:

  • tests/unittest/_torch/modeling/test_modeling_nemotron_h.py
🧠 Learnings (1)
tests/unittest/_torch/modeling/test_modeling_nemotron_h.py (1)

Learnt from: moraxu
PR: #6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

🪛 Ruff (0.12.2)
tests/unittest/_torch/modeling/test_modeling_nemotron_h.py

289-289: Line too long (127 > 120)

(E501)

🔇 Additional comments (5)
tests/unittest/_torch/modeling/test_modeling_nemotron_h.py (5)

242-246: Improved prompt selection for deterministic greedy sampling.

The new prompts are well-chosen to ensure significant probability differences between top tokens, addressing the flakiness issue described in the PR objectives. These prompts should produce more deterministic argmax() results during greedy sampling.


248-248: Reduced token generation for focused testing.

The reduction from 12 to 10 tokens makes sense for a stability-focused test, reducing the likelihood of encountering problematic probability distributions in longer sequences.


275-287: Enhanced test diagnostics with explicit indexing.

The addition of explicit indexing and detailed assertion messages significantly improves test observability. The error messages now clearly identify which prompt failed and the nature of the mismatch.


290-307: Improved assertion diagnostics for logits and logprobs comparison.

The addition of descriptive msg lambda functions provides excellent context for test failures, making it much easier to diagnose issues when they occur. The messages clearly identify the prompt index and comparison type.


310-317: Enhanced overlap scheduler assertion with clear diagnostics.

The improved error message for overlap scheduler comparison maintains consistency with other assertions and provides clear context for any failures.

@tomeras91 requested a review from Copilot July 30, 2025 13:50

Copilot AI left a comment

Pull Request Overview

This PR fixes a flaky CUDA graph/overlap scheduler test for Nemotron-H by addressing non-deterministic token selection in greedy sampling. The fix involves replacing prompts that could lead to tied top-2 token probabilities with more deterministic ones, reducing the maximum token count, and improving test failure diagnostics.

  • Updated test prompts to use more deterministic patterns that avoid probability ties
  • Reduced maximum token generation from 12 to 10 tokens
  • Enhanced assertion messages with detailed failure context including prompt indices
Comments suppressed due to low confidence (1)

tests/unittest/_torch/modeling/test_modeling_nemotron_h.py:1

  • The pytest import is being removed but may still be needed for other tests in this file. Ensure that no other tests in this file use pytest decorators or fixtures.
import torch

@tomeras91
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #13569 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #13569 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10171 completed with status: 'FAILURE'

@tomeras91
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #13579 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #13579 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10179 completed with status: 'FAILURE'

@tomeras91
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #13632 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #13632 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10228 completed with status: 'FAILURE'

@tomeras91
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #13663 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #13663 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10257 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

@tomeras91 tomeras91 merged commit 6d5da9f into NVIDIA:main Jul 31, 2025
3 checks passed
@tomeras91 tomeras91 deleted the fix-nemotron-h-flaky-cg-test branch August 3, 2025 09:02
lancelly pushed a commit to lancelly/TensorRT-LLM that referenced this pull request Aug 6, 2025
…ap scheduler test (NVIDIA#6485)

Signed-off-by: Tomer Asida <[email protected]>
Signed-off-by: Lanyu Liao <[email protected]>
jain-ria pushed a commit to jain-ria/TensorRT-LLM that referenced this pull request Aug 7, 2025