
Fixes the Gemini integration example in the README #2537

Merged
danielaskdd merged 1 commit into HKUDS:main from vishvaRam:patch-1
Dec 26, 2025

Conversation

@vishvaRam
Contributor

Description

This update fixes several critical issues in the Gemini integration example:

  1. Corrected import: Changed from gemini_complete to gemini_model_complete
    (the correct function name per lightrag/llm/gemini.py)

  2. Fixed parameter name: Changed 'model' to 'model_name' in gemini_model_complete() call to match the function signature

  3. Added llm_model_name to LightRAG initialization: This is required for gemini_model_complete to retrieve the model name from hashing_kv.global_config

  4. Updated to latest model: gemini-1.5-flash → gemini-2.0-flash

Without these changes, users get "404 NOT_FOUND" errors as the code defaults to gpt-4o-mini when model_name is not properly configured.

Tested and verified working with Gemini 2.0 Flash API.
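Put together, the four fixes above yield an example along these lines. This is a hedged sketch, not the exact README diff: the import is guarded because lightrag may not be installed everywhere, and constructor arguments besides llm_model_func and llm_model_name (the working_dir path, the embedding setup omitted here) are illustrative assumptions to be checked against the README.

```python
# Sketch of the corrected Gemini example; guarded so it degrades
# gracefully when lightrag (or a complete config) is unavailable.
rag_ready = False
try:
    from lightrag import LightRAG
    from lightrag.llm.gemini import gemini_model_complete  # was: gemini_complete

    rag = LightRAG(
        working_dir="./rag_storage",          # illustrative path
        llm_model_func=gemini_model_complete,
        # Required: gemini_model_complete reads the model name from
        # hashing_kv.global_config["llm_model_name"]; without it the
        # code falls back to gpt-4o-mini and Gemini returns 404.
        llm_model_name="gemini-2.0-flash",
    )
    rag_ready = True
except Exception as exc:  # ImportError, missing embedding config, etc.
    print(f"sketch only, not configured here: {exc}")
```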

Related Issues

Fixes incorrect example code that causes user confusion and setup failures.
This addresses a documentation bug rather than a code functionality issue.

Changes Made

  • Changed import: gemini_complete → gemini_model_complete
  • Fixed parameter: model= → model_name= in function call
  • Added llm_model_name="gemini-2.0-flash" to LightRAG initialization
  • Updated model version: gemini-1.5-flash → gemini-2.0-flash

Files Modified

  • README.md - Gemini integration example section

Checklist

  • Changes tested locally
  • Code reviewed (self-reviewed against source)
  • Documentation updated (this IS documentation)
  • Unit tests added (N/A - documentation only)

Additional Notes

Why These Changes Matter

The current example in the README doesn't match the actual library implementation.
Looking at lightrag/llm/gemini.py lines 438-448, gemini_model_complete()
resolves the model name via:

  1. hashing_kv.global_config.get("llm_model_name") (checked first)
  2. kwargs.pop("model_name", None) (fallback)

Without proper configuration, it defaults to gpt-4o-mini, causing 404 errors.
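The lookup order above can be sketched in plain Python. This is a stand-in to illustrate the behavior, not the library's actual code: global_config mimics hashing_kv.global_config, and the function name and default are illustrative (the real logic lives in lightrag/llm/gemini.py).

```python
# Minimal stand-in for the model-name resolution described above.
def resolve_model_name(global_config: dict, kwargs: dict,
                       default: str = "gpt-4o-mini") -> str:
    # 1. llm_model_name from the LightRAG global config is checked first.
    name = global_config.get("llm_model_name")
    if name:
        return name
    # 2. Fall back to an explicit model_name keyword argument.
    name = kwargs.pop("model_name", None)
    if name:
        return name
    # 3. Otherwise the default wins -- a non-Gemini model, hence the 404s.
    return default

# Configured correctly, the Gemini model is used:
print(resolve_model_name({"llm_model_name": "gemini-2.0-flash"}, {}))
# Misconfigured, the OpenAI default leaks through:
print(resolve_model_name({}, {}))
```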

Testing Environment

  • Gemini 2.0 Flash API
  • Document size: 53KB
  • Successfully extracted: 125 entities, 84 relations
  • All query modes (naive, local, global, hybrid) verified working

Reference

Implementation details: lightrag/llm/gemini.py lines 438-448

danielaskdd merged commit d97ec95 into HKUDS:main on Dec 26, 2025