Commit 41a341a

[None][ci] Test waives for the release/1.0 branch 09/15 (#7700)
Signed-off-by: Yanchao Lu <[email protected]>
1 parent: e5ba99c

File tree: 2 files changed, +4 −0 lines

tests/integration/test_lists/waives.txt (1 addition, 0 deletions)

@@ -279,3 +279,4 @@ accuracy/test_cli_flow.py::TestLlama3_8BInstructGradient1048k::test_long_context
 disaggregated/test_disaggregated.py::test_disaggregated_diff_max_tokens[TinyLlama-1.1B-Chat-v1.0] SKIP (https://nvbugs/5451272)
 accuracy/test_llm_api_pytorch.py::TestDeepSeekV3Lite::test_bfloat16_4gpus_online_eplb[mtp_nextn=2] SKIP (https://nvbugs/5444687)
 accuracy/test_llm_api_pytorch.py::TestDeepSeekV3Lite::test_nvfp4_4gpus_online_eplb[fp8kv=True] SKIP (https://nvbugs/5444687)
+accuracy/test_llm_api_pytorch.py::TestDeepSeekR1::test_nvfp4_multi_gpus[latency_trtllmgen] SKIP (https://nvbugs/5516845)
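The entries above follow a simple pattern: a pytest node ID, the keyword SKIP, and a bug-tracker URL in parentheses. A minimal sketch of a parser for that pattern is shown below; the helper name `parse_waive` and the regex are assumptions for illustration, not code from the repository.

```python
import re

# Hypothetical helper (not part of the repository): parses waive entries of the
# form "<test-node-id> SKIP (<reason>)" as they appear in waives.txt.
WAIVE_RE = re.compile(r"^(?P<test>\S+)\s+SKIP\s+\((?P<reason>[^)]+)\)\s*$")

def parse_waive(line):
    """Return (test_id, reason) for a waive line, or None if it doesn't match."""
    m = WAIVE_RE.match(line)
    if m is None:
        return None
    return m.group("test"), m.group("reason")
```

For example, parsing the line added in this commit yields the DeepSeekR1 node ID and the reason `https://nvbugs/5516845`.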

tests/unittest/_torch/multi_gpu_modeling/test_deepseek.py (3 additions, 0 deletions)

@@ -33,6 +33,9 @@ def test_deepseek_streaming(model_name, backend, quant, tp_size):
     is_fp8 = quant == "fp8"
     is_fp4 = quant == "fp4"
 
+    if tp_size == 4:
+        pytest.skip(f"https://nvbugs/5515753")
+
     if torch.cuda.device_count() < tp_size:
         pytest.skip(f"Not enough GPUs available, need {tp_size} "
                     f"but only have {torch.cuda.device_count()}")
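The new guard runs before the existing GPU-count check, so the TP=4 configuration is waived unconditionally. The skip decision can be sketched as a plain function (the helper `skip_reason` is a hypothetical restatement of the guard logic, not part of the test file):

```python
# Hypothetical sketch of the guard logic after this commit: the test bails out
# with a skip reason before any model setup runs.
def skip_reason(tp_size, device_count):
    """Return the skip reason string, or None when the test should run."""
    if tp_size == 4:
        # Unconditional waive for TP=4, tracked by the linked bug.
        return "https://nvbugs/5515753"
    if device_count < tp_size:
        return (f"Not enough GPUs available, need {tp_size} "
                f"but only have {device_count}")
    return None
```

Note that with this ordering, a TP=4 run is reported as waived for the bug even on a machine that also lacks four GPUs.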

0 commit comments