[Benchmark] Upstreaming text embeddings benchmarks #54912
base: master
Conversation
Signed-off-by: Ricardo Decal <[email protected]>
Summary of Changes
Hello @crypdick, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces a comprehensive set of new benchmarks for text embeddings, designed to populate performance dashboards. It includes the core benchmark logic, defines specific cluster configurations optimized for GPU and CPU workloads, and integrates these benchmarks into the existing release test suite, including variations to simulate node preemption.
Highlights
- New Text Embeddings Benchmark: I've added a new Python script (`text_embeddings_benchmark_5x.py`) to benchmark text embeddings jobs. This script processes documents from an S3 source, performs text chunking using `langchain_text_splitters`, and generates embeddings with `SentenceTransformer`, measuring key performance metrics like runtime and throughput. It's designed to handle a 5000-document dataset, which is 5x larger than previous benchmarks. (A rough sketch of such a pipeline appears after this list.)
- Dedicated Cluster Configurations: To support the new benchmarks, I've introduced two new YAML files (`autoscaling_gpu_g6e_2xl_aws.yaml` and `fixed_size_gpu_g6e_2xl_aws.yaml`). These define specific AWS cluster setups with both GPU (g6e.2xlarge) and CPU (m5.4xlarge) instances, tailored for running these text embedding workloads.
- Benchmark Environment Setup: A new shell script (`byod_text_embeddings_benchmark.sh`) has been added to manage the installation of all necessary Python dependencies for the text embeddings benchmark, ensuring a consistent and reproducible environment.
- Integration into Release Tests: I've updated `release_data_tests.yaml` to include the new text embeddings benchmarks. This integration covers both standard runs and simulated preemptible (chaos) test variations, leveraging the newly defined cluster configurations and the environment setup script to provide comprehensive testing.
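For orientation, here is a minimal, hypothetical sketch of the kind of Ray Data pipeline described in the first highlight: read documents from S3, chunk them with `langchain_text_splitters`, embed the chunks with `SentenceTransformer`, and report runtime and throughput. The bucket path, column name, model, chunking parameters, and concurrency settings are illustrative assumptions and are not taken from `text_embeddings_benchmark_5x.py`.

```python
# Minimal sketch of a text-embedding benchmark pipeline of the kind described
# above. Paths, model name, and chunking parameters are illustrative
# assumptions, not values from the actual benchmark script.
import time

import ray
from langchain_text_splitters import RecursiveCharacterTextSplitter
from sentence_transformers import SentenceTransformer


def chunk_document(row: dict) -> list[dict]:
    # Split one document into overlapping text chunks.
    splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=64)
    return [{"chunk": chunk} for chunk in splitter.split_text(row["text"])]


class Embedder:
    def __init__(self):
        # Load the model once per actor so it stays resident on the GPU.
        self.model = SentenceTransformer("all-MiniLM-L6-v2", device="cuda")

    def __call__(self, batch: dict) -> dict:
        batch["embedding"] = self.model.encode(list(batch["chunk"]))
        return batch


start = time.perf_counter()
ds = (
    ray.data.read_parquet("s3://example-bucket/documents/")  # hypothetical source
    .flat_map(chunk_document)
    .map_batches(Embedder, batch_size=256, num_gpus=1, concurrency=4)
)
num_rows = ds.materialize().count()  # execute the pipeline exactly once
runtime = time.perf_counter() - start
print({"runtime_s": runtime, "rows": num_rows, "rows_per_s": num_rows / runtime})
```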
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments or fill out our survey to provide feedback.
You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
[^1]: Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.
Code Review
This pull request introduces a new text embeddings benchmark, including the benchmark script, cluster configurations, and test definitions. The changes look good overall, but I've found several issues in the benchmark script itself. These include a bug in file processing logic, silent error handling, and a critical flaw in the benchmarking methodology that leads to re-computation of the entire pipeline. I've provided suggestions to fix these issues to ensure the benchmark is correct and provides accurate metrics.
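To make the "re-computation of the entire pipeline" point concrete, here is an illustrative sketch (an assumption for explanation, not code taken from the PR) of how a lazily executed Ray Data pipeline can run twice when it is consumed twice, and how materializing it once avoids that:

```python
import ray

# Stand-in for the benchmark's read -> chunk -> embed pipeline; the path and
# the pass-through stage are placeholders, not the actual benchmark stages.
ds = ray.data.read_parquet("s3://example-bucket/documents/").map_batches(lambda b: b)

# Anti-pattern: each consuming call executes the lazy pipeline from scratch,
# so the full pipeline below runs twice.
num_rows = ds.count()
ds.write_parquet("/tmp/embeddings_out")

# Better: materialize once, then reuse the computed blocks for both the
# row count (benchmark metric) and the output write.
materialized = ds.materialize()
num_rows = materialized.count()
materialized.write_parquet("/tmp/embeddings_out")
```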
Signed-off-by: Ricardo Decal <[email protected]>
Signed-off-by: Ricardo Decal <[email protected]>
Signed-off-by: Ricardo Decal <[email protected]>
Signed-off-by: Ricardo Decal <[email protected]>
Signed-off-by: Ricardo Decal <[email protected]>
Signed-off-by: Ricardo Decal <[email protected]>
Signed-off-by: Ricardo Decal <[email protected]>
/gemini review
Code Review
This pull request introduces new text embedding benchmarks, including configurations for autoscaling and fixed-size clusters, the benchmark script itself, and the necessary test definitions. The changes are well-structured, but I've identified a few critical and high-severity issues that should be addressed to ensure the benchmarks are efficient, correct, and reproducible.
My main concerns are:
- A critical inefficiency in the benchmark script that causes the data pipeline to be executed twice, leading to wasted resources and potentially misleading metrics.
- A race condition in the test configuration that could cause concurrent test runs to overwrite each other's output.
- Unpinned dependencies in the setup script, which can affect the reproducibility of the benchmark results.
I've provided detailed comments and suggestions for each of these points in the review. Addressing these will significantly improve the quality and reliability of these new benchmarks.
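On the race-condition concern, one common mitigation (shown here purely as an assumed sketch, not as the change this PR makes) is to derive a unique output prefix per benchmark run so concurrent runs never write to the same location:

```python
import os
import uuid
from datetime import datetime, timezone

# Hypothetical base path; the real test configuration may use a different bucket.
BASE_OUTPUT = "s3://example-bucket/benchmark-results/text_embeddings_benchmark_5x"

# Prefer an externally supplied run ID (e.g. from the release-test harness),
# falling back to a timestamp plus a short random suffix.
run_id = os.environ.get(
    "RUN_ID",
    f"{datetime.now(timezone.utc):%Y%m%dT%H%M%SZ}-{uuid.uuid4().hex[:8]}",
)
output_path = f"{BASE_OUTPUT}/{run_id}"
print(f"Writing benchmark output to {output_path}")
```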
python3 -m pip install --no-cache-dir \
    "unstructured[all-docs]" \
    "sentence-transformers" \
    "langchain_text_splitters" \
    "pandas" \
    "tiktoken" \
    "vllm==0.7.2" \
    "xgrammar==0.1.11" \
    "pynvml==12.0.0"
For reproducible benchmark runs, it's critical to pin all dependency versions. Unpinned dependencies can lead to unexpected failures or performance variations, making benchmark results unreliable. Please pin all packages in this script.
Additionally, the `vllm` dependency appears to be unused in the benchmark script, which relies on `sentence-transformers`. As `vllm` is a substantial dependency, please remove it if it's not required.
@@ -567,6 +567,49 @@
      python setup_chaos.py --chaos TerminateEC2InstanceWithGracePeriod
      --batch-size-to-kill 10 --max-to-kill 100 --kill-delay 120

- name: text_embeddings_benchmark_5x_{{scaling}}
comment for linking
Signed-off-by: Ricardo Decal <[email protected]>
Signed-off-by: Ricardo Decal <[email protected]>
Signed-off-by: Ricardo Decal <[email protected]>
Signed-off-by: Ricardo Decal <[email protected]>
Why are these changes needed?
Upstreams the text embeddings benchmarks to fill out dashboards
Related issue number
NA
Checks
- I've signed off every commit (by using the -s flag, i.e., `git commit -s`) in this PR.
- I've run `scripts/format.sh` to lint the changes in this PR.
- If I added a method in Tune, I've added it in `doc/source/tune/api/` under the corresponding `.rst` file.