[Benchmark] Upstreaming text embeddings benchmarks #54912

Open · wants to merge 12 commits into master

Conversation

crypdick (Contributor) commented:

Why are these changes needed?

Upstreams the text embeddings benchmarks to fill out the performance dashboards.

Related issue number

N/A

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests; see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

@crypdick added the data (Ray Data-related issues) label on Jul 25, 2025
@gemini-code-assist (bot) left a comment:

Summary of Changes

Hello @crypdick, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a comprehensive set of new benchmarks for text embeddings, designed to populate performance dashboards. It includes the core benchmark logic, defines specific cluster configurations optimized for GPU and CPU workloads, and integrates these benchmarks into the existing release test suite, including variations to simulate node preemption.

Highlights

  • New Text Embeddings Benchmark: I've added a new Python script (text_embeddings_benchmark_5x.py) to benchmark text embeddings jobs. This script processes documents from an S3 source, performs text chunking using langchain_text_splitters, and generates embeddings with SentenceTransformer, measuring key performance metrics like runtime and throughput. It's designed to handle a 5000-document dataset, which is 5x larger than previous benchmarks. (A rough sketch of such a pipeline follows this list.)
  • Dedicated Cluster Configurations: To support the new benchmarks, I've introduced two new YAML files (autoscaling_gpu_g6e_2xl_aws.yaml and fixed_size_gpu_g6e_2xl_aws.yaml). These define specific AWS cluster setups with both GPU (g6e.2xlarge) and CPU (m5.4xlarge) instances, tailored for running these text embedding workloads.
  • Benchmark Environment Setup: A new shell script (byod_text_embeddings_benchmark.sh) has been added to manage the installation of all necessary Python dependencies for the text embeddings benchmark, ensuring a consistent and reproducible environment.
  • Integration into Release Tests: I've updated release_data_tests.yaml to include the new text embeddings benchmarks. This integration covers both standard runs and simulated preemptible (chaos) test variations, leveraging the newly defined cluster configurations and the environment setup script to provide comprehensive testing.
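
For orientation, the following is a minimal, illustrative sketch of the kind of pipeline described in the highlights above. The actual text_embeddings_benchmark_5x.py is not included on this page, so the S3 path, embedding model, chunk sizes, and concurrency settings below are placeholder assumptions rather than the PR's real values:

# Illustrative sketch only -- not the code from this PR. Paths, model name,
# chunk sizes, and concurrency values are placeholders.
import time

import ray
from langchain_text_splitters import RecursiveCharacterTextSplitter
from sentence_transformers import SentenceTransformer

INPUT_PATH = "s3://example-bucket/docs/"  # hypothetical source bucket
MODEL_NAME = "sentence-transformers/all-MiniLM-L6-v2"  # assumed embedding model


def chunk_documents(batch):
    """Split each document's text into overlapping chunks."""
    splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=64)
    chunks = []
    for text in batch["text"]:
        chunks.extend(splitter.split_text(text))
    return {"chunk": chunks}


class Embedder:
    """Stateful callable so the model loads once per actor, not once per batch."""

    def __init__(self):
        self.model = SentenceTransformer(MODEL_NAME, device="cuda")

    def __call__(self, batch):
        batch["embedding"] = self.model.encode(list(batch["chunk"]), batch_size=128)
        return batch


ds = (
    ray.data.read_text(INPUT_PATH)  # one row of text per input line
    .map_batches(chunk_documents)  # CPU-only chunking
    .map_batches(Embedder, concurrency=8, num_gpus=1, batch_size=256)  # GPU embedding
)

start = time.perf_counter()
num_chunks = ds.materialize().count()  # execute the pipeline once
print(f"embedded {num_chunks} chunks in {time.perf_counter() - start:.1f}s")

The real benchmark presumably reports these metrics through the release-test harness rather than printing them; the point here is only the read, chunk, and embed stages and how runtime and throughput fall out of a single execution.
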
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments or fill out our survey to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist (bot) left a comment:

Code Review

This pull request introduces a new text embeddings benchmark, including the benchmark script, cluster configurations, and test definitions. The changes look good overall, but I've found several issues in the benchmark script itself. These include a bug in file processing logic, silent error handling, and a critical flaw in the benchmarking methodology that leads to re-computation of the entire pipeline. I've provided suggestions to fix these issues to ensure the benchmark is correct and provides accurate metrics.
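
The individual review comments are not all reproduced on this page, but "silent error handling" in a data pipeline typically refers to the pattern sketched below. This is a hedged illustration rather than code from the PR; the function names and the file-reading task are hypothetical stand-ins:

import logging

logger = logging.getLogger(__name__)


def parse_document_silently(path: str) -> str:
    """Anti-pattern: the exception is swallowed, so malformed documents quietly
    drop out of the dataset and skew the benchmark's row counts and throughput."""
    try:
        with open(path, encoding="utf-8") as f:
            return f.read()
    except Exception:
        return ""  # failure disappears without a trace


def parse_document_strict(path: str) -> str:
    """A benchmark generally wants failures to be loud, or at least logged."""
    try:
        with open(path, encoding="utf-8") as f:
            return f.read()
    except OSError:
        logger.exception("failed to read %s", path)
        raise  # surface the error so the run fails (or the task is retried)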

@crypdick (Author) commented:

/gemini review

@gemini-code-assist (bot) left a comment:

Code Review

This pull request introduces new text embedding benchmarks, including configurations for autoscaling and fixed-size clusters, the benchmark script itself, and the necessary test definitions. The changes are well-structured, but I've identified a few critical and high-severity issues that should be addressed to ensure the benchmarks are efficient, correct, and reproducible.

My main concerns are:

  1. A critical inefficiency in the benchmark script that causes the data pipeline to be executed twice, leading to wasted resources and potentially misleading metrics (a sketch of the single-execution fix appears below).
  2. A race condition in the test configuration that could cause concurrent test runs to overwrite each other's output.
  3. Unpinned dependencies in the setup script, which can affect the reproducibility of the benchmark results.

I've provided detailed comments and suggestions for each of these points in the review. Addressing these will significantly improve the quality and reliability of these new benchmarks.
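
To make the first concern concrete, here is a minimal sketch of the single-execution pattern, assuming the benchmark currently consumes the lazy Ray Dataset more than once (for example, counting rows and writing output in separate calls). The dataset construction and the output path below are illustrative stand-ins, not the PR's actual code:

import time
import uuid

import ray

# Stand-in for the real read -> chunk -> embed pipeline.
ds = ray.data.range(100_000)

# Anti-pattern: each consuming call re-executes the whole lazy pipeline.
#   num_rows = ds.count()           # first full execution
#   ds.write_parquet(output_path)   # second full execution

# Single execution: materialize once, then reuse the materialized result.
start = time.perf_counter()
materialized = ds.materialize()
elapsed = time.perf_counter() - start
num_rows = materialized.count()  # cheap metadata lookup, no re-execution
print(f"runtime_s={elapsed:.1f} throughput_rows_per_s={num_rows / elapsed:.1f}")

# For the second concern, a per-run suffix keeps concurrent test runs from
# overwriting each other's output (bucket and prefix are illustrative).
output_path = f"s3://example-bucket/text-embeddings-benchmark/{uuid.uuid4().hex}/"
materialized.write_parquet(output_path)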

Comment on lines +6 to +14 of byod_text_embeddings_benchmark.sh
python3 -m pip install --no-cache-dir \
"unstructured[all-docs]" \
"sentence-transformers" \
"langchain_text_splitters" \
"pandas" \
"tiktoken" \
"vllm==0.7.2" \
"xgrammar==0.1.11" \
"pynvml==12.0.0"
@gemini-code-assist (bot) commented (severity: high):

For reproducible benchmark runs, it's critical to pin all dependency versions. Unpinned dependencies can lead to unexpected failures or performance variations, making benchmark results unreliable. Please pin all packages in this script.

Additionally, the vllm dependency appears to be unused in the benchmark script, which relies on sentence-transformers. As vllm is a substantial dependency, please remove it if it's not required.

@@ -567,6 +567,49 @@ (release_data_tests.yaml)
python setup_chaos.py --chaos TerminateEC2InstanceWithGracePeriod
--batch-size-to-kill 10 --max-to-kill 100 --kill-delay 120

- name: text_embeddings_benchmark_5x_{{scaling}}
@crypdick (Author) commented:

comment for linking

crypdick added 4 commits July 28, 2025 11:19
Signed-off-by: Ricardo Decal <[email protected]>
Signed-off-by: Ricardo Decal <[email protected]>
Signed-off-by: Ricardo Decal <[email protected]>

Labels: data (Ray Data-related issues)
Projects: None yet
2 participants