@leo-pony leo-pony commented Sep 23, 2025

Added an OOT platform interface e2e test that runs on Ascend NPU in parallel, soft-fail mode.

After a few months of working with it, we have found that OOT platform plugin functionality breaks frequently, usually because of interface changes in vLLM. We want to add a new test job that exercises the OOT interface and functionality with vllm-ascend on real hardware, so that when the test fails, vllm-ascend can be notified and fixed as soon as possible. The job will:
- Run in parallel with the other existing per-PR test cases.
- Keep the required time short; one or two well-designed e2e tests should be enough.
- Report its result as a soft fail (non-voting) so it does not block others.

Changes

  1. Add the test job script to the vLLM repository to run the real test
    Add run-npu-test.sh, an e2e test environment build-and-run script, to the vLLM repository. Its location and content are similar to the HPU equivalent: first build the test Docker image, then run the test cases. A rough sketch of the script's shape is shown after this list.
  2. Integrate NPU resources into the vLLM CI system
    Self-hosted Buildkite agents have been added to the vLLM Buildkite CI pipeline. The current configuration allows four PRs to build in parallel. We have tested this on my fork of the vLLM repository (the one this PR belongs to), combined with vllm-project/ci-infra on the vLLM Buildkite CI, and it works correctly.
  3. Enable the vllm-ascend test job in the ci-infra repo
    Add a parallel, soft-fail step (job) for the vllm-ascend platform test in test-template-ci.j2 of vllm-project/ci-infra. For the detailed code, see the ascend-npu-test branch of vllm-project/ci-infra.
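
For reference, a rough sketch of the run-npu-test.sh flow (simplified; the image name and NPU device flags below are illustrative assumptions, not the exact script contents):

```bash
#!/bin/bash
set -euo pipefail

# Illustrative image name; the real script derives its own tag.
image_name="vllm-ascend-ci-test"

# 1. Build the test image from an inline Dockerfile (heredoc piped via `-f -`).
docker buildx build --progress=plain --load -t "${image_name}" -f - . <<EOF
FROM quay.io/ascend/cann:8.3.rc1.alpha002-910b-ubuntu22.04-py3.11
WORKDIR /workspace
# ... install vLLM from this PR's checkout and vllm-ascend on top of the CANN base ...
EOF

# 2. Run the e2e test inside the image with the Ascend NPU devices mounted.
docker run --rm \
  --device /dev/davinci0 --device /dev/davinci_manager \
  "${image_name}" \
  bash -c '
    set -e
    pytest -v -s tests/e2e/singlecard/test_sampler.py::test_models_topk
  '
```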

Purpose

When this OOT platform interface test fails, vllm-ascend can be notified and the breakage fixed as soon as possible.

Test Plan

1. Function test: the build job can be executed successfully with Buildkite.
2. Parallelism test: four Buildkite agents can run four PR build tasks in parallel correctly.
3. Performance test: the build time for each PR is about 10 minutes.
4. Stability test: the Buildkite agents run normally for many consecutive days.

Test Result

We tested on my fork of the vLLM repository (the one this PR belongs to), combined with the 'Ascend NPU Test' job defined in the ascend-npu-test branch of vllm-project/ci-infra, on the vLLM Buildkite CI pipeline, and everything works as expected.

  1. The newly added self-hosted agents successfully connected to the vLLM Buildkite CI organization cluster.
  2. The build job (which runs run-npu-test.sh) executed successfully in the vLLM Buildkite CI pipeline; the 'Ascend NPU Test' task ran for 9 minutes and 34 seconds, and the entire Build job was also successful.
  3. The setup currently supports building four PRs in parallel, with a build time of about 10 minutes per PR. I have tested the self-hosted agents in my Buildkite pipeline, and parallel jobs run successfully.
  4. I have tested the self-hosted agents in my Buildkite pipeline for several consecutive days, and all tasks completed normally.
Essential Elements of an Effective PR Description Checklist
  • [*] The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • [*] The test plan, such as providing test command.
  • [*] The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small and essential subset of CI tests to quickly catch errors.

You can ask your reviewers to trigger select CI tests on top of fastcheck CI.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai.

🚀

@mergify mergify bot added the ci/build label Sep 23, 2025

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new CI job to run end-to-end tests for the OOT platform interface on Ascend NPU, which is a great addition for ensuring interface compatibility. The implementation uses a shell script to build a Docker image and execute tests. My review focuses on the correctness of this script. I've found a couple of critical issues in the script that need to be addressed. One is related to command substitution within the Dockerfile heredoc, and the other is a miscalculation of device indices for the Docker container. Please see my detailed comments below.
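
As a rough illustration of the heredoc issue flagged above (assuming the script pipes the Dockerfile into `docker build -f -` via an unquoted heredoc; this is a sketch, not the actual run-npu-test.sh):

```bash
# Minimal sketch, not the real run-npu-test.sh: build from a Dockerfile heredoc.
image_name="vllm-ascend-ci-test"

# Inside an unquoted heredoc, $(...) is expanded by the *host* shell before
# docker ever sees the Dockerfile, so substitutions meant to run inside the
# image must be escaped as \$(...).
docker buildx build --progress=plain --load -t "${image_name}" -f - . <<EOF
FROM quay.io/ascend/cann:8.3.rc1.alpha002-910b-ubuntu22.04-py3.11
RUN echo "built with \$(python3 --version) inside the image"
EOF
```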

--builder cachebuilder --cache-from type=local,src=/mnt/docker-cache \
--cache-to type=local,dest=/mnt/docker-cache,mode=max \
--progress=plain --load -t ${image_name} -f - .
FROM quay.io/ascend/cann:8.3.rc1.alpha002-910b-ubuntu22.04-py3.11

The base image will be updated in the future. It would be better to read the tag from the vllm-ascend repo instead of hard-coding it here, so that we don't always need to update this file.
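
One possible way to do that, as a sketch only (assuming the vllm-ascend repo keeps a top-level Dockerfile that pins the CANN base image; the exact file and URL are assumptions):

```bash
# Hypothetical: derive the CANN base image tag from vllm-ascend instead of hard-coding it.
CANN_BASE_IMAGE=$(curl -fsSL \
  https://raw.githubusercontent.com/vllm-project/vllm-ascend/main/Dockerfile \
  | grep -m1 '^FROM ' | awk '{print $2}')
echo "Using base image: ${CANN_BASE_IMAGE}"
```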

# Install vllm-ascend
WORKDIR /workspace
ARG VLLM_REPO=https://github.com/vllm-project/vllm-ascend.git
ARG VLLM_TAG=main

These ARGs point at vllm-ascend, so suggest renaming them VLLM_ASCEND_REPO and VLLM_ASCEND_TAG.

"${image_name}" \
bash -c '
set -e
pytest -v -s tests/e2e/singlecard/test_sampler.py::test_models_topk

Please change the test case to a better one. You could create a test folder for vLLM e2e tests in the vllm-ascend repo first, for example: pytest -v -s tests/e2e/vllm/

Then, when we want to update the test cases, we only need to update the test files in vllm-ascend, so this file doesn't need to change.
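
If such a folder is added to vllm-ascend, the invocation in run-npu-test.sh could stay fixed at roughly this (the tests/e2e/vllm/ path is the proposed layout, not something that exists yet):

```bash
docker run --rm "${image_name}" \
  bash -c '
    set -e
    # Run the whole vLLM-facing e2e folder so new cases added in
    # vllm-ascend are picked up without touching this script.
    pytest -v -s tests/e2e/vllm/
  '
```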

@leo-pony leo-pony force-pushed the platform_interface_test_npu branch from c11cefb to 6b96618 on September 24, 2025 at 12:28