
Conversation

@janpawellek (Contributor)

The LLM integration HuggingFaceTextGenInference already has streaming support.

However, when streaming is enabled, it always returns an empty string as the final output text once the LLM is finished. This is because `text` is initialized to an empty string and never updated during streaming.

This PR fixes the collection of the final output text by concatenating new tokens.
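The bug pattern and the fix can be sketched as follows. This is a simplified illustration, not the actual `HuggingFaceTextGenInference` code; `stream_tokens`, `generate_buggy`, and `generate_fixed` are hypothetical names introduced here:

```python
from typing import Iterator


def stream_tokens() -> Iterator[str]:
    """Stand-in for the LLM's streamed token output (illustrative only)."""
    yield from ["Hello", ",", " world", "!"]


def generate_buggy() -> str:
    text = ""  # instantiated with an empty string...
    for token in stream_tokens():
        pass  # ...tokens are streamed to the callback, but `text` is never updated
    return text  # always returns ""


def generate_fixed() -> str:
    text = ""
    for token in stream_tokens():
        text += token  # concatenate each new token into the final output
    return text  # returns "Hello, world!"
```

With the fix, the accumulated `text` holds the full generation when streaming completes, instead of the empty initial value.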

@hwchase17 (Contributor) left a comment

thanks

@hwchase17 hwchase17 merged commit ea6a5b0 into langchain-ai:master Jun 19, 2023
This was referenced Jun 25, 2023