
Conversation

tomeras91
Collaborator

@tomeras91 tomeras91 commented Aug 18, 2025

Summary by CodeRabbit

  • Tests
    • Re-enabled a previously skipped CUDA-graph overlap scheduling test.
    • Added precise assertions comparing behavior with and without overlap under CUDA graphs, including checks on generated logits and decode log probabilities.
    • Tightened numerical tolerances to increase sensitivity of comparisons.
    • Updated assertion messages to clearly reflect CUDA-graph scenarios.
    • Overall, improves test coverage and confidence in CUDA-graph overlap behavior without affecting user-facing functionality.

Description

Make the test less flaky by not comparing all logits for all generation steps. Instead, compare all logits for the first generation step only, and only the logits of the generated tokens for all generation steps (similar to what this test already checks for CUDA graphs).
Still use a small tolerance, since the overlap scheduler should not affect logits.
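
As a minimal sketch of this comparison pattern (the tensor names and shapes are illustrative placeholders, not the actual test code; only torch.testing.assert_close-style checks and the tight tolerance come from this PR):

    import torch

    ATOL = RTOL = 0.05  # tight tolerance: the overlap scheduler should not change logits

    # Placeholders standing in for the two runs' outputs
    # ([num_steps, vocab_size] logits and [num_steps] decode logprobs).
    logits_no_overlap = torch.randn(4, 32)
    logits_with_overlap = logits_no_overlap.clone()
    decode_logprobs_no_overlap = torch.log_softmax(logits_no_overlap, dim=-1).amax(dim=-1)
    decode_logprobs_with_overlap = decode_logprobs_no_overlap.clone()

    # First generation step (2nd generated token): compare the full vocab logits.
    torch.testing.assert_close(
        logits_no_overlap[0], logits_with_overlap[0], atol=ATOL, rtol=RTOL,
        msg="First-step logits differ between overlap and no-overlap runs")

    # All generation steps: compare only the logprobs of the generated tokens.
    torch.testing.assert_close(
        decode_logprobs_no_overlap, decode_logprobs_with_overlap, atol=ATOL, rtol=RTOL,
        msg="Decode logprobs differ between overlap and no-overlap runs")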

Test Coverage

tests/unittest/_torch/modeling/test_modeling_nemotron_h.py::test_nemotron_h_cuda_graph_overlap_scheduler now passes and should no longer be flaky

…erated logprob for all tokens in overlap scheduler assertion as well

Signed-off-by: Tomer Asida <[email protected]>
Signed-off-by: Tomer Asida <[email protected]>
@tomeras91 tomeras91 requested review from a team as code owners August 18, 2025 12:30
Contributor

coderabbitai bot commented Aug 18, 2025

📝 Walkthrough

Walkthrough

Unskips and updates the Nemotron-H CUDA graph (CG) overlap scheduler unit test to compare CG runs without overlap against CG runs with overlap. Tightens tolerances to 0.05 for logits and decode logprobs, updates assertion messages, and aligns the logprob checks to use extract_decode_logprobs on the CG variants.

Changes

Cohort / File(s): Tests: Nemotron-H CUDA graph overlap scheduler (tests/unittest/_torch/modeling/test_modeling_nemotron_h.py)
Change Summary: Remove the skip on test_nemotron_h_cuda_graph_overlap_scheduler; refocus comparisons on CG no-overlap vs CG with-overlap; add assertions for generation logits (2nd token) and decode logprobs; tighten atol/rtol from 0.2 to 0.05; update assertion messages; use extract_decode_logprobs (sketched below) for the CG variants.
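
extract_decode_logprobs is a helper defined in the test file; its real signature isn't shown in this PR. A hypothetical sketch of the idea it implements, gathering the logprob of each generated token from the per-step logits:

    import torch

    def extract_decode_logprobs_sketch(step_logits: torch.Tensor,
                                       token_ids: torch.Tensor) -> torch.Tensor:
        # step_logits: [num_steps, vocab_size]; token_ids: [num_steps] (long dtype).
        # Returns the logprob of each generated token, one value per decode step.
        logprobs = torch.log_softmax(step_logits, dim=-1)
        return logprobs.gather(-1, token_ids.unsqueeze(-1)).squeeze(-1)

    # Example: extract_decode_logprobs_sketch(torch.randn(3, 8), torch.tensor([1, 5, 2]))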

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Suggested reviewers

  • Shixiaowei02
  • pcastonguay
  • omera-nv
  • HuiGao-NV
  • netanel-haber


@tomeras91 tomeras91 requested a review from Copilot August 18, 2025 12:31
Contributor

@Copilot Copilot AI left a comment


Pull Request Overview

This PR fixes a flaky test for Nemotron-H CUDA graph with overlap scheduler by reducing the scope of logits comparison. Instead of comparing all logits for all generation steps, it now compares all logits only for the first generation step and logprobs for all generated tokens, making the test more stable while maintaining adequate validation.

Key changes:

  • Removed the pytest.mark.skip decorator to re-enable the test (see the sketch after this list)
  • Modified logits comparison to focus on first generation step only
  • Added separate logprobs comparison for all generated tokens
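
For reference, re-enabling the test amounts to a decorator removal; a hypothetical before/after (the original skip reason string is assumed, not quoted from the repo):

    # Before:
    #   @pytest.mark.skip(reason="flaky logits comparison")
    #   def test_nemotron_h_cuda_graph_overlap_scheduler():
    #       ...
    #
    # After (decorator removed, so pytest collects and runs the test again):
    def test_nemotron_h_cuda_graph_overlap_scheduler():
        ...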


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
tests/unittest/_torch/modeling/test_modeling_nemotron_h.py (2)

319-319: Fix Ruff E501 long line in comment (exceeds 120 chars)

Break the comment into shorter lines to satisfy the linter.

Apply this diff:

-        # Similar comparison for with / without overlap scheduler, compare logits of first generation step (2nd generated token)
+        # Compare with vs without overlap scheduler
+        # Compare logits of first generation step (2nd generated token)

1-1: Missing NVIDIA copyright header

Per repo guidelines, prepend the current-year NVIDIA copyright header.

Add at the top of the file (before imports):

# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between 55f4f2d and 988234f.

📒 Files selected for processing (1)
  • tests/unittest/_torch/modeling/test_modeling_nemotron_h.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+
Python indentation: 4 spaces, no tabs
Maintain module namespace in imports (from package.subpackage import foo; then use foo.SomeClass())
Python file names use snake_case
Python class names use PascalCase
Python functions/methods and local variables use snake_case; variables starting with a number get k_ prefix (e.g., k_99th_percentile)
Global variables use G_ prefixed UPPER_SNAKE_CASE (e.g., G_MY_GLOBAL)
Constants use UPPER_SNAKE_CASE in Python
Avoid shadowing variables from outer scopes in Python
Initialize all externally visible members of a Python class in init
Prefer docstrings for interfaces used outside a file; comments for local code
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes/variables inline with short docstrings
Avoid reflection when simple alternatives exist (e.g., prefer explicit parameters over dict(**locals()))
In try/except, catch the narrowest exceptions possible
For duck-typing with try/except, keep try body minimal and put logic in else

Files:

  • tests/unittest/_torch/modeling/test_modeling_nemotron_h.py
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • tests/unittest/_torch/modeling/test_modeling_nemotron_h.py
🪛 Ruff (0.12.2)
tests/unittest/_torch/modeling/test_modeling_nemotron_h.py

319-319: Line too long (128 > 120)

(E501)

🔇 Additional comments (2)
tests/unittest/_torch/modeling/test_modeling_nemotron_h.py (2)

319-328: Solid focus on first-step logits with tight tolerance—aligns with the stated flakiness fix

Comparing all logits only for the first generation step and tightening atol/rtol to 0.05 for the overlap scheduler case is appropriate and mirrors the CUDA-graphs pattern. This should materially reduce flakiness while still catching regressions (scheduler should not affect logits).


330-338: Good choice to compare only generated-token logprobs for subsequent steps

Using extract_decode_logprobs to match only the selected tokens across the decode steps is the right granularity and should further minimize spurious diffs. The tighter tolerances are reasonable here as well.
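
An illustrative demonstration (not from the test) of why per-token logprob checks are less flaky than full-vocab checks: with a large vocabulary, a full comparison gives numeric noise tens of thousands of chances per step to exceed the tolerance, while the generated-token check compares one value per step:

    import torch

    torch.manual_seed(0)
    logits = torch.randn(16, 50_000)                  # 16 decode steps, large vocab
    noisy = logits + 1e-3 * torch.randn_like(logits)  # small numeric perturbation

    # Full-vocab comparison: the max deviation is taken over 800k elements.
    print((logits - noisy).abs().max())

    # Generated-token-only comparison: one logprob per step (16 values total).
    ids = logits.argmax(dim=-1, keepdim=True)
    lp = torch.log_softmax(logits, dim=-1).gather(-1, ids).squeeze(-1)
    lp_noisy = torch.log_softmax(noisy, dim=-1).gather(-1, ids).squeeze(-1)
    print((lp - lp_noisy).abs().max())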

@tomeras91
Collaborator Author

/bot run

1 similar comment
@tomeras91
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #15615 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15615 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11754 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

@tomeras91 tomeras91 changed the title [https://nvbugs/1234567][fix] Fix Nemotron-H flaky CUDA graph / overlap scheduler test [https://nvbugs/5458874][fix] Fix Nemotron-H flaky CUDA graph / overlap scheduler test Aug 19, 2025
Collaborator

@danielafrimi danielafrimi left a comment


LGTM

@tomeras91 tomeras91 merged commit f0bfb49 into NVIDIA:main Aug 19, 2025
6 checks passed
@tomeras91 tomeras91 deleted the fix-nemotron-h-flaky-cg-test2 branch August 19, 2025 12:45