
Conversation

@Shixiaowei02
Collaborator

@Shixiaowei02 Shixiaowei02 commented Aug 18, 2025

Summary by CodeRabbit

  • New Features
  • Stricter validation for context-phase responses: responses missing a valid context request ID now fail fast with a clear error.
  • Performance/Configuration
  • Reduced max batch sizes for the context and generation servers in an integration test config to lower device memory requirements during testing.
  • Bug Fixes
    • Clearer error handling for invalid disaggregated parameters in the context phase.

Description

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail-fast behavior on build/test/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.
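For example, a run restricted to a single test stage with fail-fast disabled combines the flags documented above:

/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast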

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping without due care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since reusing a pipeline without due care and validation can break the top of tree.

@Shixiaowei02 Shixiaowei02 requested a review from a team as a code owner August 18, 2025 09:24
@Shixiaowei02 Shixiaowei02 requested a review from LinPoly August 18, 2025 09:24
@coderabbitai
Contributor

coderabbitai bot commented Aug 18, 2025

📝 Walkthrough

Added a null check for ctx_request_id in the context-phase response handling of the disaggregated server, raising ValueError if missing. Updated an integration test config to lower max_batch_size for context and generation servers.
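Assembled from the diffs quoted later in this thread, the added check plausibly reads like the following sketch; the enclosing method and exact placement are assumptions:

    # Sketch of the context-phase response validation (names taken from the
    # review diffs below; the surrounding code is an assumption).
    choices = ctx_response.choices
    if len(choices) > 1:
        raise ValueError("Disagg server returned more than one choice. "
                         "This is currently not supported in disaggregated server.")
    if choices[0].disaggregated_params is None or \
            choices[0].disaggregated_params.ctx_request_id is None:
        raise ValueError("Invalid disaggregated params in context phase response.")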

Changes

  • Context-phase validation logic (tensorrt_llm/serve/openai_disagg_server.py): Inserted validation ensuring disaggregated_params.ctx_request_id is non-null during context-phase response handling; raises ValueError on null. No signature changes.
  • Integration test config batch sizes (tests/integration/defs/disaggregated/test_configs/disagg_config_diff_max_tokens.yaml): Updated context_servers.max_batch_size 256 → 64 and generation_servers.max_batch_size 128 → 32.

Sequence Diagram(s)

sequenceDiagram
  participant Client
  participant OpenAI_Disagg_Server as Disaggregated Server
  participant Context_Server

  Client->>OpenAI_Disagg_Server: Context request
  OpenAI_Disagg_Server->>Context_Server: Forward request
  Context_Server-->>OpenAI_Disagg_Server: Context response
  alt Validate response
    OpenAI_Disagg_Server->>OpenAI_Disagg_Server: Check choices==1, params!=None, ctx_request_id!=None
    opt Invalid
      OpenAI_Disagg_Server-->>Client: Raise ValueError
    end
    opt Valid
      OpenAI_Disagg_Server-->>Client: Proceed to next phase
    end
  end

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes



📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between e555d83 and 0946f6e.

📒 Files selected for processing (2)
  • tensorrt_llm/serve/openai_disagg_server.py (1 hunks)
  • tests/integration/defs/disaggregated/test_configs/disagg_config_diff_max_tokens.yaml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • tensorrt_llm/serve/openai_disagg_server.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (2)
tests/integration/defs/disaggregated/test_configs/disagg_config_diff_max_tokens.yaml (2)

19-19: Lowered generation max_batch_size to 32 — no hardcoded 128 references found in tests

Ran:

rg -n -C3 -P '(?i)generation.*(max_batch_size|batch|concurrent).*(128|32)' tests

No matches were returned. Please manually confirm that any throughput or batching-behavior tests don’t assume the old 128 limit.


11-11: No tests pin old batch limits for disagg_config_diff_max_tokens.yaml — change is safe

Ran greps across the entire test suite for references to disagg_config_diff_max_tokens.yaml and for any hard-coded assertions of 256/128 batch sizes tied to it. No matches were found. The updated 64/32 settings will be picked up dynamically by the existing disaggregated tests.
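Searches along these lines reproduce that check; the exact invocations are assumptions, since only the rg command above was shown verbatim:

rg -n 'disagg_config_diff_max_tokens' tests
rg -n 'max_batch_size' tests/integration/defs/disaggregated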



@Shixiaowei02 Shixiaowei02 changed the title [5450074][fix] Reduce the device memory requirements for testing [https://nvbugs/5450074][fix] Reduce the device memory requirements for testing Aug 18, 2025
@Shixiaowei02
Collaborator Author

/bot run

@Shixiaowei02 Shixiaowei02 requested a review from chuangz0 August 18, 2025 09:26
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🔭 Outside diff range comments (1)
tensorrt_llm/serve/openai_disagg_server.py (1)

202-205: Harden choice count validation (zero choices will currently fall through and crash)

If choices is empty, choices[0] will raise. Align with your other check that enforces “exactly one choice.”

Apply this diff:

-        choices = ctx_response.choices
-        if len(choices) > 1:
-            raise ValueError("Disagg server returned more than one choice. This is currently not supported in disaggregated server.")
+        choices = ctx_response.choices
+        if len(choices) != 1:
+            raise ValueError("Context server did not return a single choice. This is not expected")
🧹 Nitpick comments (6)
tensorrt_llm/serve/openai_disagg_server.py (5)

207-208: Make the error message actionable by including ctx_server and the offending field

Including the server URL and field name significantly speeds up triage when a backend misbehaves.

Apply this diff:

-            raise ValueError("Invalid disaggregated params in context phase response.")
+            raise ValueError(f"Invalid disaggregated params in context phase response from {ctx_server}: ctx_request_id is None")

147-147: Fix error messages that intend to interpolate type names

The braces won’t interpolate without f-strings, making the messages confusing when debugging.

Apply this diff:

-                    raise TypeError("Invalid request type: {type(gen_req).__name__}")
+                    raise TypeError(f"Invalid request type: {type(gen_req).__name__}")
@@
-                    raise ValueError("Invalid request type: {type(req).__name__}")
+                    raise ValueError(f"Invalid request type: {type(req).__name__}")

Also applies to: 259-259


106-111: create_error_response ignores the caller-provided status and always raises 500

This will misclassify client vs server errors. If this helper is used, it should honor the status code and not mask it as 500.

Apply this diff:

-    def create_error_response(
-            message: str,
-            status_code: HTTPStatus = HTTPStatus.BAD_REQUEST) -> ErrorResponse:
-        raise HTTPException(status_code=500, detail=f"Internal server error {message}")
+    def create_error_response(
+            message: str,
+            status_code: HTTPStatus = HTTPStatus.BAD_REQUEST) -> None:
+        # Always raise; callers rely on exception flow. Preserve the provided status.
+        raise HTTPException(status_code=int(status_code), detail=str(message))

Additionally, add this import near the other typing imports:

from typing import NoReturn  # if you prefer `-> NoReturn` as return type
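Combining the suggested diff with that import, the helper would read roughly as follows. This is a sketch only; the FastAPI HTTPException import is an assumption based on the raise in the original helper:

    from http import HTTPStatus
    from typing import NoReturn

    from fastapi import HTTPException  # assumed; matches the raise in the original helper

    def create_error_response(
            message: str,
            status_code: HTTPStatus = HTTPStatus.BAD_REQUEST) -> NoReturn:
        # Always raise; callers rely on exception flow. Preserve the provided status.
        raise HTTPException(status_code=int(status_code), detail=str(message))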

34-34: Typo prevents re-enabling yapf formatting

“enale” should be “enable”; otherwise yapf may remain disabled for the rest of the file.

Apply this diff:

-# yapf: enale
+# yapf: enable

385-394: Avoid shadowing builtins (“iter”) in readiness loop

Minor readability nit: use a different name to avoid shadowing the built-in iter().

Apply this diff:

-            iter = 0
+            attempt = 0
@@
-                logger.info(
-                    f"[{iter}] Servers are not ready. Waiting for {unready_servers}..."
-                )
+                logger.info(f"[{attempt}] Servers are not ready. Waiting for {unready_servers}...")
@@
-                iter += 1
+                attempt += 1
tests/integration/defs/disaggregated/test_configs/disagg_config_diff_max_tokens.yaml (1)

1-24: Batch/Token Limits in Disaggregated Test Configs

I scanned all tests/integration/defs/disaggregated YAMLs and found that while you’ve added lower values in each file, most still include a second “default” section with high limits (e.g. max_batch_size: 256, max_num_tokens: 4096). To keep test memory usage consistently low, consider aligning or removing those high defaults.

• Representative files with high defaults:
– test_configs/disagg_config_cache_aware_balance.yaml
– test_configs/disagg_config_ctxpp2_genpp2.yaml
– test_configs/disagg_config_ctxtp2_genpp2.yaml
– test_configs/disagg_config_ctxtp2pp2_gentp2pp2.yaml
– …and several others that still list 256/4096.

Recommendation: Reduce or remove the second block of limits in each config so tests only run with the smaller batch/token settings you intend.
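A quick hypothetical helper (not part of this PR; assumes PyYAML and the test_configs path named above) can list the configs that still declare the high limits:

    # One-off audit script: flag disaggregated test configs that still declare
    # max_batch_size >= 256 or max_num_tokens >= 4096 anywhere in the YAML tree.
    from pathlib import Path

    import yaml  # PyYAML, assumed available in the dev environment

    CONFIG_DIR = Path("tests/integration/defs/disaggregated/test_configs")

    def find_high_limits(node, prefix=""):
        """Recursively yield (key_path, value) pairs that exceed the flagged limits."""
        if isinstance(node, dict):
            for key, value in node.items():
                path = f"{prefix}.{key}" if prefix else key
                if key == "max_batch_size" and isinstance(value, int) and value >= 256:
                    yield path, value
                elif key == "max_num_tokens" and isinstance(value, int) and value >= 4096:
                    yield path, value
                else:
                    yield from find_high_limits(value, path)
        elif isinstance(node, list):
            for i, item in enumerate(node):
                yield from find_high_limits(item, f"{prefix}[{i}]")

    for cfg in sorted(CONFIG_DIR.glob("*.yaml")):
        hits = list(find_high_limits(yaml.safe_load(cfg.read_text())))
        if hits:
            print(cfg.name)
            for key_path, value in hits:
                print(f"  {key_path} = {value}")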

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between 28526fe and e555d83.

📒 Files selected for processing (2)
  • tensorrt_llm/serve/openai_disagg_server.py (1 hunks)
  • tests/integration/defs/disaggregated/test_configs/disagg_config_diff_max_tokens.yaml (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+
Python indentation: 4 spaces, no tabs
Maintain module namespace in imports (from package.subpackage import foo; then use foo.SomeClass())
Python file names use snake_case
Python class names use PascalCase
Python functions/methods and local variables use snake_case; variables starting with a number get k_ prefix (e.g., k_99th_percentile)
Global variables use G_ prefixed UPPER_SNAKE_CASE (e.g., G_MY_GLOBAL)
Constants use UPPER_SNAKE_CASE in Python
Avoid shadowing variables from outer scopes in Python
Initialize all externally visible members of a Python class in __init__
Prefer docstrings for interfaces used outside a file; comments for local code
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes/variables inline with short docstrings
Avoid reflection when simple alternatives exist (e.g., prefer explicit parameters over dict(**locals()))
In try/except, catch the narrowest exceptions possible
For duck-typing with try/except, keep try body minimal and put logic in else

Files:

  • tensorrt_llm/serve/openai_disagg_server.py
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • tensorrt_llm/serve/openai_disagg_server.py
🔇 Additional comments (3)
tensorrt_llm/serve/openai_disagg_server.py (1)

207-208: Good defensive check for missing ctx_request_id

The explicit None-check on ctx_request_id before proceeding to generation helps catch malformed upstream responses early.

tests/integration/defs/disaggregated/test_configs/disagg_config_diff_max_tokens.yaml (2)

11-11: Lowering context max_batch_size to 64 aligns with the PR goal to reduce test-time VRAM footprint

Change looks good and should help tests pass on smaller GPUs.


19-19: Lowering generation max_batch_size to 32 further reduces device memory usage

LGTM for memory-constrained CI/test environments.

@Shixiaowei02
Collaborator Author

/bot run

@Shixiaowei02
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #15685 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15685 [ run ] completed with state SUCCESS
/LLM/release-1.0/L0_MergeRequest_PR pipeline #195 completed with status: 'SUCCESS'
Pipeline passed with automatically retried tests. Check the rerun report for details.

@Shixiaowei02
Collaborator Author

/bot skip --comment "It has passed in 15685"

@Shixiaowei02 Shixiaowei02 enabled auto-merge (squash) August 21, 2025 06:28
@tensorrt-cicd
Collaborator

PR_Github #16014 [ skip ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #16014 [ skip ] completed with state SUCCESS
Skipping testing for commit 2cd62b9

@Shixiaowei02 Shixiaowei02 requested a review from kaiyux August 22, 2025 09:05
@Shixiaowei02 Shixiaowei02 merged commit 3ee8523 into NVIDIA:release/1.0 Aug 22, 2025
4 checks passed
@kaiyux kaiyux deleted the xiaoweis/fix-1.0 branch August 22, 2025 09:33
yuanjingx87 pushed a commit that referenced this pull request Aug 28, 2025
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Sep 5, 2025
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Sep 5, 2025
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Sep 6, 2025
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Sep 6, 2025
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Sep 7, 2025
