HotFix: Restore OpenAI Streaming Response & Refactor keyword_extraction Parameter#2334

Merged
danielaskdd merged 2 commits into HKUDS:main from danielaskdd:hotfix-opena-streaming
Nov 9, 2025

Conversation

@danielaskdd
Collaborator

🐛 HotFix: Restore OpenAI Streaming Response & Refactor keyword_extraction Parameter

Problem

The OpenAI LLM integration had a critical bug: streaming responses did not work because the stream and timeout parameters were removed from kwargs but never explicitly passed on to the OpenAI API.

Solution

This PR addresses two issues:

1. Restored Streaming Functionality (Commit: 88ab73f)

  • Fixed streaming response by explicitly passing stream and timeout parameters to OpenAI API
  • Added proper parameter handling in openai_complete_if_cache to ensure these values reach the API

2. Refactored keyword_extraction Parameter (Commit: 2f16065)

  • Moved keyword_extraction from **kwargs to an explicit function parameter
  • Improved code clarity and type safety across all OpenAI completion functions
  • Affected functions:
    • openai_complete_if_cache
    • openai_complete
    • gpt_4o_complete
    • gpt_4o_mini_complete
    • nvidia_openai_complete
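
The combined change can be sketched as follows. This is a hedged, simplified illustration, not the actual code in lightrag/llm/openai.py (the real openai_complete_if_cache also handles caching, system prompts, and message history); the injected `create` callable is a hypothetical stand-in for `client.chat.completions.create`:

```python
def openai_complete_sketch(
    create,                            # stand-in for client.chat.completions.create
    model,
    prompt,
    keyword_extraction=False,          # explicit now, not kwargs.pop("keyword_extraction", False)
    stream=False,
    timeout=None,
    **kwargs,
):
    # keyword_extraction only affects local post-processing of the response;
    # it must not reach the API, so making it an explicit parameter removes
    # the need to pop it out of kwargs.
    _ = keyword_extraction
    return create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=stream,                 # explicitly forwarded: the core of the streaming fix
        timeout=timeout,
        **kwargs,
    )
```

With an echoing fake in place of `create`, you can see that stream/timeout reach the call while keyword_extraction stays out of it.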

Files Changed

  • lightrag/llm/openai.py - Core implementation fixes
  • README.md & README-zh.md - Updated documentation
  • lightrag/api/README.md & lightrag/api/README-zh.md - API documentation updates

Impact

  • ✅ Streaming responses now work correctly with OpenAI-compatible endpoints
  • ✅ Better parameter handling and code maintainability
  • ✅ No breaking changes to existing API

Testing

What should be tested:

  • Streaming responses work with OpenAI API
  • Streaming responses work with OpenAI-compatible endpoints
  • Keyword extraction functionality remains intact
  • All completion functions handle parameters correctly
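
The parameter-forwarding items on this checklist can be verified without hitting the API by mocking the client. A hedged sketch, assuming a simplified `complete` stand-in for openai_complete_if_cache:

```python
import asyncio
from unittest.mock import AsyncMock

# Hypothetical, simplified stand-in for openai_complete_if_cache; the only
# behavior under test is that stream/timeout are forwarded to the client call.
async def complete(client, prompt, stream=False, timeout=None):
    return await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        stream=stream,
        timeout=timeout,
    )

client = AsyncMock()
asyncio.run(complete(client, "hello", stream=True, timeout=30))

# Inspect what the (mocked) API actually received.
call_kwargs = client.chat.completions.create.call_args.kwargs
```

The buggy pre-fix behavior would show up here as `call_kwargs` missing the `stream` key entirely.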

The stream and timeout parameters were moved from **kwargs to explicit
parameters in a previous commit, but were not being passed on to the OpenAI
API, causing streaming responses to fail and fall back to non-streaming mode.

Fixes the issue where stream=True was being silently ignored, resulting
in unexpected non-streaming behavior.
• Add keyword_extraction param to functions
• Remove kwargs.pop() calls
• Update function signatures
• Improve parameter documentation
• Make parameter handling consistent
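
The failure mode described in these commit messages reduces to a small pattern. A hedged before/after sketch (function names are illustrative; `fake_create` echoes what would be sent to the API):

```python
def fake_create(**kwargs):
    return kwargs  # echo the arguments that would reach the API

# Before (buggy): stream/timeout are popped out of kwargs and then never
# used, so the API always sees its default, stream=False.
def call_before(prompt, **kwargs):
    stream = kwargs.pop("stream", False)   # removed from kwargs...
    timeout = kwargs.pop("timeout", None)  # ...but never forwarded
    return fake_create(prompt=prompt, **kwargs)

# After (fixed): explicit parameters, explicitly forwarded.
def call_after(prompt, stream=False, timeout=None, **kwargs):
    return fake_create(prompt=prompt, stream=stream, timeout=timeout, **kwargs)
```

This is why the bug was silent: popping the keys raised no error, the request was still valid, and the API simply defaulted to non-streaming.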
@danielaskdd
Collaborator Author

@codex review

@chatgpt-codex-connector

Codex Review: Didn't find any major issues. Already looking forward to the next diff.


@danielaskdd danielaskdd merged commit 8859eaa into HKUDS:main Nov 9, 2025
1 check passed
@danielaskdd danielaskdd deleted the hotfix-opena-streaming branch November 9, 2025 06:43