Refact: Separate Configuration of RAGAS for LLM and Embeddings #2314
Merged: danielaskdd merged 2 commits into HKUDS:main on Nov 5, 2025
Conversation
Commits:
- Add warning filter for token usage; support vLLM and SGLang endpoints; non-critical for RAGAS evaluation
- Split LLM and embedding API configs; add fallback chain for API keys; update docs with usage examples
Author (Collaborator): @codex review
Codex Review: Didn't find any major issues. Hooray!
📊 Separate Configuration of RAGAS for LLM and Embeddings
Overview
This PR enhances the RAGAS evaluation system with more flexible configuration options, improved user experience, and better documentation. The changes primarily focus on supporting custom OpenAI-compatible endpoints for both LLM and embedding models while improving the evaluation workflow.
Key Features
🔧 Separate Endpoint Configuration for LLM and Embeddings
New environment variables:
- EVAL_EMBEDDING_BINDING_API_KEY: Dedicated API key for embedding models
- EVAL_EMBEDDING_BINDING_HOST: Dedicated endpoint URL for embedding models

Fallback chains:
- API key: EVAL_EMBEDDING_BINDING_API_KEY → EVAL_LLM_BINDING_API_KEY → OPENAI_API_KEY
- Host: EVAL_EMBEDDING_BINDING_HOST → EVAL_LLM_BINDING_HOST → None

🔇 Suppress Non-Critical Warnings
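Per the commit message, a warning about token usage raised during evaluation is filtered out, since it is non-critical for RAGAS evaluation. Below is a minimal sketch of how the fallback chains and the warning filter might look; the environment variable names come from this PR, while the helper name and the warning message pattern are illustrative assumptions rather than the actual implementation in eval_rag_quality.py.

```python
import os
import warnings
from typing import Optional


def _first_env(*names: str, default: Optional[str] = None) -> Optional[str]:
    """Return the first non-empty environment variable among `names` (hypothetical helper)."""
    for name in names:
        value = os.getenv(name)
        if value:
            return value
    return default


# Fallback chains described above.
embedding_api_key = _first_env(
    "EVAL_EMBEDDING_BINDING_API_KEY", "EVAL_LLM_BINDING_API_KEY", "OPENAI_API_KEY"
)
embedding_host = _first_env("EVAL_EMBEDDING_BINDING_HOST", "EVAL_LLM_BINDING_HOST")

# Silence the non-critical token-usage warning; the exact message pattern is an assumption.
warnings.filterwarnings("ignore", message=".*token usage.*", category=UserWarning)
```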
📈 Enhanced Progress Display
📚 Documentation Improvements
Changes Summary
- lightrag/evaluation/eval_rag_quality.py: Core evaluation improvements (97 lines changed)
- lightrag/evaluation/README.md: Documentation refactor (186 lines changed)
- env.example: New configuration examples (18 lines changed)
- README.md & README-zh.md: News updates

Benefits
Testing
Breaking Changes
None - all changes are backward compatible with existing configurations.
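For illustration, a hedged sketch of pointing the evaluation at self-hosted OpenAI-compatible endpoints such as vLLM or SGLang; the variable names come from this PR, while the endpoint URLs and API keys are placeholders, not values from the repository.

```python
import os

# Point the RAGAS evaluation at custom OpenAI-compatible endpoints
# (variable names from this PR; URLs and keys below are placeholders).
os.environ["EVAL_LLM_BINDING_HOST"] = "http://localhost:8000/v1"        # e.g. a vLLM server
os.environ["EVAL_LLM_BINDING_API_KEY"] = "dummy-key"
os.environ["EVAL_EMBEDDING_BINDING_HOST"] = "http://localhost:8001/v1"  # e.g. an SGLang server
os.environ["EVAL_EMBEDDING_BINDING_API_KEY"] = "dummy-key"

# If the embedding-specific variables are omitted, the values fall back to
# the LLM settings and finally to OPENAI_API_KEY, so existing configurations
# keep working unchanged.
```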