
Conversation

xinhe-nv
Collaborator

xinhe-nv commented Jun 27, 2025

Add chunked prefill tests for Llama4. Test report: https://prod.blsm.nvidia.com/swqa-tensorrt-qa-test/job/LLM_FUNCTION_CLUSTER_TEST/912/testReport/B200.accuracy.test_llm_api_pytorch/TestLlama4MaverickInstruct/

Summary by CodeRabbit

  • Tests
    • Added new tests for chunked prefill functionality with different attention backends.
    • Updated test lists to include the new chunked prefill tests for multiple model classes.
    • Reorganized and relocated some test entries between CLI flow and PyTorch API test lists.
    • Added new tests for specific CUDA graph configurations.

xinhe-nv force-pushed the user/xinhe/add-cases branch 2 times, most recently from 0eedd1c to e1449bb on July 1, 2025 04:22
xinhe-nv force-pushed the user/xinhe/add-cases branch from e1449bb to 26010dc on July 1, 2025 09:44
xinhe-nv marked this pull request as ready for review July 1, 2025 09:45
xinhe-nv
Collaborator Author

xinhe-nv commented Jul 1, 2025

@schetlur-nv Llama4 with chunked prefill hits OOM; see the test log. Is this a known issue? The feature appears to have been completed in https://jirasw.nvidia.com/browse/TRTLLM-5484.

xinhe-nv requested a review from mikeiovine July 3, 2025 09:54
xinhe-nv closed this Jul 4, 2025
xinhe-nv deleted the user/xinhe/add-cases branch July 4, 2025 08:47
xinhe-nv restored the user/xinhe/add-cases branch July 4, 2025 09:08
xinhe-nv reopened this Jul 7, 2025
xinhe-nv force-pushed the user/xinhe/add-cases branch from 26010dc to ef7047d on July 22, 2025 02:55
coderabbitai bot
Contributor

coderabbitai bot commented Jul 22, 2025

📝 Walkthrough

"""

Walkthrough

A new parameterized test method, test_chunked_prefill, was added to the PyTorch API accuracy integration tests for Llama4MaverickInstruct. Test list files were updated to include this test for multiple model classes and attention backends, to reorganize existing tests between test classes, and to add new Scout tests with CUDA graph configurations. No changes were made to exported code entities outside of test declarations.
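
For illustration, here is a minimal sketch of what such a parameterized test might look like. The attn_backend and disable_overlap_scheduler knobs and the FLASHINFER/TRTLLM backend names appear in this PR; the model path, the enable_chunked_prefill flag, and the MMLU evaluator import are assumptions standing in for the suite's actual harness:

    import pytest
    from tensorrt_llm import LLM
    from defs.accuracy.accuracy_core import MMLU  # assumed evaluator import

    class TestLlama4MaverickInstruct:
        MODEL_PATH = "Llama-4-Maverick-17B-128E-Instruct"  # placeholder, not from this PR

        @pytest.mark.parametrize("attn_backend", ["FLASHINFER", "TRTLLM"])
        def test_chunked_prefill(self, attn_backend):
            # Both knobs below are visible in this PR's review diff;
            # the surrounding harness is illustrative only.
            pytorch_config = dict(attn_backend=attn_backend,
                                  disable_overlap_scheduler=True)
            with LLM(self.MODEL_PATH,
                     enable_chunked_prefill=True,  # assumed LLM-API flag
                     **pytorch_config) as llm:
                task = MMLU(self.MODEL_PATH)  # hypothetical accuracy check
                task.evaluate(llm)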

Changes

File(s) and change summary:
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py: Added test_chunked_prefill method to TestLlama4MaverickInstruct, parameterized over attention backends.
  • tests/integration/test_lists/qa/examples_test_list.txt: Moved two tests between test classes; added new test_chunked_prefill entries for TestLlama3_1_8BInstruct and TestLlama4MaverickInstruct.
  • tests/integration/test_lists/qa/llm_sanity_test.txt: Added multiple test_chunked_prefill entries for various classes and backends; appended new Scout tests with CUDA graph configurations.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~6 minutes


Suggested reviewers

  • achartier
  • yilin-void

Poem

In the warren of tests, new chunks hop in,
Prefill and backend, let the checks begin!
FlashInfer and TRTLLM, a bunny’s delight,
Models and lists, all snuggled tight.
With every new entry, the garden grows bright—
🐇 Code review by moon and by sunlight!
"""


📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c06792a and a8717f2.

📒 Files selected for processing (3)
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py (1 hunks)
  • tests/integration/test_lists/qa/examples_test_list.txt (3 hunks)
  • tests/integration/test_lists/qa/llm_sanity_test.txt (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • tests/integration/test_lists/qa/llm_sanity_test.txt
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
  • tests/integration/test_lists/qa/examples_test_list.txt
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check


coderabbitai bot
Contributor

coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (2)
tests/integration/defs/accuracy/test_llm_api_pytorch.py (1)

414-415: Document why overlap scheduler is disabled.

The disable_overlap_scheduler=True configuration should be documented to explain why this is necessary for chunked prefill testing.

        pytorch_config = dict(attn_backend=attn_backend,
-                             disable_overlap_scheduler=True)
+                             disable_overlap_scheduler=True)  # Required for chunked prefill stability
tests/integration/test_lists/qa/examples_test_list.txt (1)

457-458: Missing TIMEOUT annotation & asymmetry with preceding block.

For parity with the Llama-3 chunked-prefill entries (lines 440-441), add explicit timeouts, e.g.:

-accuracy/test_llm_api_pytorch.py::TestLlama4MaverickInstruct::test_chunked_prefill[attn_backend=FLASHINFER]
-accuracy/test_llm_api_pytorch.py::TestLlama4MaverickInstruct::test_chunked_prefill[attn_backend=TRTLLM]
+accuracy/test_llm_api_pytorch.py::TestLlama4MaverickInstruct::test_chunked_prefill[attn_backend=FLASHINFER] TIMEOUT (60)
+accuracy/test_llm_api_pytorch.py::TestLlama4MaverickInstruct::test_chunked_prefill[attn_backend=TRTLLM] TIMEOUT (90)

Without a timeout the runner may default to an unsuitable global value or silently use the previous test’s timeout.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fddb7f1 and ef7047d.

📒 Files selected for processing (4)
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py (1 hunks)
  • tests/integration/test_lists/qa/examples_test_list.txt (3 hunks)
  • tests/integration/test_lists/qa/llm_sanity_test.txt (1 hunks)
  • tests/integration/test_lists/waives.txt (1 hunks)
🔇 Additional comments (3)
tests/integration/test_lists/qa/llm_sanity_test.txt (1)

21-21: Ensure Llama4 chunked_prefill tests are included where intended.

We’ve located all chunked_prefill entries across the QA test lists:

  • tests/integration/test_lists/qa/llm_sanity_test.txt
    • accuracy/test_llm_api_pytorch.py::TestLlama3_1_8BInstruct::test_chunked_prefill[attn_backend=FLASHINFER] TIMEOUT (60)
  • tests/integration/test_lists/qa/examples_test_list.txt
    • accuracy/test_llm_api_pytorch.py::TestLlama3_1_8BInstruct::test_chunked_prefill[attn_backend=FLASHINFER] TIMEOUT (60)
    • accuracy/test_llm_api_pytorch.py::TestLlama3_1_8BInstruct::test_chunked_prefill[attn_backend=TRTLLM] TIMEOUT (90)
    • accuracy/test_llm_api_pytorch.py::TestLlama4MaverickInstruct::test_chunked_prefill[attn_backend=FLASHINFER]
    • accuracy/test_llm_api_pytorch.py::TestLlama4MaverickInstruct::test_chunked_prefill[attn_backend=TRTLLM]

It appears only examples_test_list.txt includes the new Llama4 tests, while llm_sanity_test.txt remains limited to Llama3.1. Please confirm whether:

  • llm_sanity_test.txt should also register TestLlama4MaverickInstruct::test_chunked_prefill entries,
  • or Llama4 tests belong solely in examples_test_list.txt, per the PR's scope.
tests/integration/test_lists/waives.txt (1)

435-437: Looks good – waiver entries correctly protect the new chunked-prefill tests.

The two test_chunked_prefill parametrisations are now guarded behind bug 5345391, preventing the OOM from breaking CI. No further action required here.
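
For reference, waiver entries of this kind typically take a form like the following (the exact format is an assumption based on common waives.txt conventions; the bug ID is the one cited in this thread):

    accuracy/test_llm_api_pytorch.py::TestLlama4MaverickInstruct::test_chunked_prefill[attn_backend=FLASHINFER] SKIP (https://nvbugs/5345391)
    accuracy/test_llm_api_pytorch.py::TestLlama4MaverickInstruct::test_chunked_prefill[attn_backend=TRTLLM] SKIP (https://nvbugs/5345391)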

tests/integration/test_lists/qa/examples_test_list.txt (1)

440-441: Nice addition, but keep timeout style consistent.

The Llama-3 chunked-prefill tests have explicit TIMEOUT (60|90) tags – perfect.
No change required other than making sure the surrounding grouping comment explains these are new chunked-prefill sanity tests.
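
For example, a grouping comment along these lines would make the intent explicit (assuming the list format accepts # comment lines):

    # New chunked-prefill sanity tests
    accuracy/test_llm_api_pytorch.py::TestLlama3_1_8BInstruct::test_chunked_prefill[attn_backend=FLASHINFER] TIMEOUT (60)
    accuracy/test_llm_api_pytorch.py::TestLlama3_1_8BInstruct::test_chunked_prefill[attn_backend=TRTLLM] TIMEOUT (90)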

xinhe-nv force-pushed the user/xinhe/add-cases branch 2 times, most recently from 0d5dd5c to c964f77 on July 22, 2025 03:06
xinhe-nv force-pushed the user/xinhe/add-cases branch 2 times, most recently from aaa21e8 to fd58042 on July 23, 2025 06:49
xinhe-nv enabled auto-merge (squash) July 23, 2025 06:53
xinhe-nv
Collaborator Author

/bot run

tensorrt-cicd
Collaborator

PR_Github #12667 [ run ] triggered by Bot

tensorrt-cicd
Collaborator

PR_Github #12667 [ run ] completed with state FAILURE

xinhe-nv force-pushed the user/xinhe/add-cases branch from fd58042 to e2e5e5b on July 23, 2025 07:46
xinhe-nv
Collaborator Author

/bot run

tensorrt-cicd
Collaborator

PR_Github #12675 [ run ] triggered by Bot

tensorrt-cicd
Collaborator

PR_Github #12675 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #9426 completed with status: 'FAILURE'

xinhe-nv force-pushed the user/xinhe/add-cases branch from e2e5e5b to 310ed76 on July 24, 2025 07:16
coderabbitai bot requested a review from Shixiaowei02 July 24, 2025 11:11
tensorrt-cicd
Collaborator

PR_Github #12844 [ run ] triggered by Bot

tensorrt-cicd
Collaborator

PR_Github #12844 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #9574 completed with status: 'SUCCESS'

xinhe-nv force-pushed the user/xinhe/add-cases branch from cec28aa to 979cc1e on July 24, 2025 15:22
xinhe-nv
Collaborator Author

/bot reuse-pipeline

tensorrt-cicd
Collaborator

PR_Github #12872 [ reuse-pipeline ] triggered by Bot

tensorrt-cicd
Collaborator

PR_Github #12872 [ reuse-pipeline ] completed with state SUCCESS
Reusing PR_Github #12844 for commit 979cc1e

xinhe-nv force-pushed the user/xinhe/add-cases branch 3 times, most recently from 368e91d to a29ae50 on July 25, 2025 01:33
coderabbitai bot requested a review from crazydemo July 25, 2025 01:33
xinhe-nv force-pushed the user/xinhe/add-cases branch from a29ae50 to c06792a on July 25, 2025 02:30
coderabbitai bot requested review from achartier and yilin-void July 25, 2025 02:30
xinhe-nv force-pushed the user/xinhe/add-cases branch from c06792a to a8717f2 on July 25, 2025 02:35
xinhe-nv
Collaborator Author

/bot reuse-pipeline

tensorrt-cicd
Collaborator

PR_Github #12931 [ reuse-pipeline ] triggered by Bot

tensorrt-cicd
Collaborator

PR_Github #12931 [ reuse-pipeline ] completed with state SUCCESS
Reusing PR_Github #12844 for commit a8717f2

xinhe-nv merged commit 6268a60 into NVIDIA:main Jul 25, 2025
3 checks passed
xinhe-nv deleted the user/xinhe/add-cases branch July 25, 2025 03:05
NVShreyas pushed a commit to NVShreyas/TensorRT-LLM that referenced this pull request Jul 28, 2025
Signed-off-by: Xin He (SW-GPU) <[email protected]>
Signed-off-by: Shreyas Misra <[email protected]>
Ransiki pushed a commit to Ransiki/TensorRT-LLM that referenced this pull request Jul 29, 2025
Signed-off-by: Xin He (SW-GPU) <[email protected]>
Signed-off-by: Ransiki Zhang <[email protected]>
lancelly pushed a commit to lancelly/TensorRT-LLM that referenced this pull request Aug 6, 2025
Signed-off-by: Xin He (SW-GPU) <[email protected]>
Signed-off-by: Lanyu Liao <[email protected]>