feature: xAI support. #271
base: main
Conversation
Summary of Changes
Hello @sng-asyncfunc, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces comprehensive support for xAI's Grok models, enabling the application to interact with the xAI API for chat completions. It involves adding a dedicated client, updating core configuration files, extending the WebSocket message handler for real-time interactions, and including necessary dependencies and localization strings.
Highlights
- New xAI Client Integration: I've introduced a new `XAIClient` class to support interactions with the xAI API, specifically for their Grok models. This client handles API key management, input conversion, and response parsing, adhering to the adalflow `ModelClient` interface.
- Configuration Updates: The application's configuration (`api/config.py` and `api/config/generator.json`) has been updated to recognize 'xai' as a new provider. This includes mapping the 'xai' provider ID to the new `XAIClient` and defining default models (e.g., `grok-4-0709`, `grok-3`) along with their default parameters (a hedged sketch of this mapping appears just after this list).
- WebSocket Handler Extension: The `handle_websocket_chat` function in `api/websocket_wiki.py` now includes logic to process requests for the 'xai' provider. This involves initializing the `XAIClient`, preparing API arguments, and handling streaming responses from the xAI API, including specific error messages for missing API keys.
- Dependency Addition: The `xai-sdk` library has been added to `api/requirements.txt` as a new dependency, which is necessary for the `XAIClient` to communicate with the xAI API.
- Localization Support: The string 'xAI' has been added to various localization files (`src/messages/*.json`) to ensure proper display of the new provider name across different languages in the user interface.
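To make the configuration change concrete, here is a minimal sketch of the kind of provider mapping described above. The layout of `api/config.py` is an assumption; only the provider id 'xai', the client class name, and the model names come from this pull request.

# Hypothetical provider registration; the real structure of api/config.py and
# api/config/generator.json in this repository may differ.
from api.xai_client import XAIClient  # assumed import path for the new client

CLIENT_CLASSES = {
    # ... existing providers such as "openai", "openrouter", "ollama" ...
    "xai": XAIClient,
}

XAI_DEFAULTS = {
    "default_model": "grok-4-0709",
    "supported_models": ["grok-4-0709", "grok-3"],
    # 0.7 matches the fallback temperature used in xai_client.py below
    "default_params": {"temperature": 0.7},
}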
Code Review
This pull request adds support for the xAI client, which is a great addition. The changes are well-structured and follow the existing patterns for adding new providers. However, I've found some critical issues in the `xai_client.py` implementation, particularly with how streaming responses are handled: the current implementation doesn't provide a true streaming experience. I've also noted a critical bug in the websocket handler where a required parameter is missing, and a minor formatting issue in one of the JSON files. Addressing these points will ensure the new provider works as expected and maintains code quality.
def call(self, api_kwargs: Dict = {}, model_type: ModelType = ModelType.UNDEFINED):
    """
    Make a synchronous call to the xAI API.
    """
    log.info(f"api_kwargs: {api_kwargs}")
    self._api_kwargs = api_kwargs

    if model_type == ModelType.LLM:
        # Check if client is properly initialized
        if not self.sync_client:
            raise ValueError("XAI client not properly initialized. Please set XAI_API_KEY environment variable.")

        try:
            from xai_sdk.chat import user, system

            # Create a new chat instance
            chat = self.sync_client.chat.create(
                model=api_kwargs.get("model", "grok-4-0709"),
                temperature=api_kwargs.get("temperature", 0.7)
            )

            # Add messages to the chat
            messages = api_kwargs.get("messages", [])
            for message in messages:
                role = message.get("role", "user")
                content = message.get("content", "")

                if role == "system":
                    chat.append(system(content))
                else:  # user or assistant
                    chat.append(user(content))

            # Get the response
            response = chat.sample()

            # Handle streaming if requested
            if api_kwargs.get("stream", False):
                # For streaming, we'll simulate by yielding the content
                async def async_stream_generator():
                    yield response.content
                return async_stream_generator()
            else:
                return response

        except Exception as e:
            log.error(f"Error in xAI API call: {e}")
            raise
    else:
        raise ValueError(f"model_type {model_type} is not supported by XAIClient")
The `call` method is synchronous but attempts to return an async generator when streaming is enabled, so a caller that iterates the result with a regular `for` loop will hit a `TypeError` at runtime. Furthermore, the streaming implementation is incorrect: it fetches the full response using `chat.sample()` before yielding it in a single chunk, which defeats the purpose of streaming.
The method should return a synchronous generator for streaming, using `chat.stream_sample()`:
def call(self, api_kwargs: Dict = {}, model_type: ModelType = ModelType.UNDEFINED):
    """
    Make a synchronous call to the xAI API.
    """
    log.info(f"api_kwargs: {api_kwargs}")
    self._api_kwargs = api_kwargs
    if model_type == ModelType.LLM:
        if not self.sync_client:
            raise ValueError("XAI client not properly initialized. Please set XAI_API_KEY environment variable.")
        try:
            from xai_sdk.chat import user, system
            chat = self.sync_client.chat.create(
                model=api_kwargs.get("model", "grok-4-0709"),
                temperature=api_kwargs.get("temperature", 0.7)
            )
            messages = api_kwargs.get("messages", [])
            for message in messages:
                role = message.get("role", "user")
                content = message.get("content", "")
                if role == "system":
                    chat.append(system(content))
                else:  # user or assistant
                    chat.append(user(content))
            if api_kwargs.get("stream", False):
                def stream_generator():
                    for chunk in chat.stream_sample():
                        yield str(chunk)
                return stream_generator()
            else:
                return chat.sample()
        except Exception as e:
            log.error(f"Error in xAI API call: {e}")
            raise
    else:
        raise ValueError(f"model_type {model_type} is not supported by XAIClient")
async def acall(
    self, api_kwargs: Dict = {}, model_type: ModelType = ModelType.UNDEFINED
):
    """
    Make an asynchronous call to the xAI API.
    """
    # Check if client is properly initialized
    if not self.sync_client:
        raise ValueError("XAI client not properly initialized. Please set XAI_API_KEY environment variable.")

    if model_type == ModelType.LLM:
        try:
            from xai_sdk.chat import user, system

            # Create a new chat instance in a thread pool
            loop = asyncio.get_event_loop()

            def create_chat_and_get_response():
                chat = self.sync_client.chat.create(
                    model=api_kwargs.get("model", "grok-4-0709"),
                    temperature=api_kwargs.get("temperature", 0.7)
                )

                # Add messages to the chat
                messages = api_kwargs.get("messages", [])
                for message in messages:
                    role = message.get("role", "user")
                    content = message.get("content", "")

                    if role == "system":
                        chat.append(system(content))
                    else:  # user or assistant
                        chat.append(user(content))

                # Get the response
                return chat.sample()

            response = await loop.run_in_executor(None, create_chat_and_get_response)

            # Handle streaming if requested
            if api_kwargs.get("stream", False):
                # For streaming, we'll simulate by yielding the content
                async def async_stream_generator():
                    yield response.content
                return async_stream_generator()
            else:
                return response

        except Exception as e:
            log.error(f"Error in xAI API call: {e}")
            raise
    else:
        raise ValueError(f"model_type {model_type} is not supported by XAIClient")
The current implementation of `acall` for streaming is incorrect. It fetches the entire response using `chat.sample()` and then yields the whole content as a single chunk. This defeats the purpose of streaming and will result in a poor user experience.
The `xai-sdk`'s streaming method `chat.stream_sample()` should be used for true streaming. Since it's a synchronous generator, it needs to be handled carefully in an `async` context to avoid blocking the event loop. Here is a corrected implementation:
async def acall(
    self, api_kwargs: Dict = {}, model_type: ModelType = ModelType.UNDEFINED
):
    """
    Make an asynchronous call to the xAI API.
    """
    if not self.sync_client:
        raise ValueError("XAI client not properly initialized. Please set XAI_API_KEY environment variable.")
    if model_type != ModelType.LLM:
        raise ValueError(f"model_type {model_type} is not supported by XAIClient")
    loop = asyncio.get_event_loop()

    def _make_api_call():
        from xai_sdk.chat import user, system
        chat = self.sync_client.chat.create(
            model=api_kwargs.get("model", "grok-4-0709"),
            temperature=api_kwargs.get("temperature", 0.7)
        )
        messages = api_kwargs.get("messages", [])
        for message in messages:
            role = message.get("role", "user")
            content = message.get("content", "")
            if role == "system":
                chat.append(system(content))
            else:
                chat.append(user(content))
        if api_kwargs.get("stream", False):
            return chat.stream_sample()
        else:
            return chat.sample()

    try:
        response_or_generator = await loop.run_in_executor(None, _make_api_call)
        if api_kwargs.get("stream", False):
            async def async_stream_generator():
                for chunk in response_or_generator:
                    yield str(chunk)
                    await asyncio.sleep(0)  # Yield control to the event loop
            return async_stream_generator()
        else:
            return response_or_generator
    except Exception as e:
        log.error(f"Error in xAI API call: {e}")
        raise
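For the asynchronous path, a similar hedged sketch of how the suggested `acall` would be consumed (same assumed import paths as above):

# Hypothetical usage of the async streaming path; module paths assumed.
import asyncio

from adalflow.core.types import ModelType
from api.xai_client import XAIClient

async def main() -> None:
    client = XAIClient()
    stream = await client.acall(
        api_kwargs={
            "messages": [{"role": "user", "content": "Hello, Grok."}],
            "stream": True,
        },
        model_type=ModelType.LLM,
    )
    async for chunk in stream:  # async generator returned by the suggestion above
        print(chunk, end="", flush=True)

asyncio.run(main())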
def handle_streaming_response(response):
    """Handle streaming response from xAI API."""
    try:
        # For streaming responses, we need to collect all chunks
        collected_content = ""
        for chunk in response:
            if hasattr(chunk, 'content') and chunk.content:
                collected_content += chunk.content
                yield chunk.content

        # Return the final collected content
        if collected_content:
            return collected_content
    except Exception as e:
        log.error(f"Error handling streaming response: {e}")
        yield f"Error: {str(e)}"
if self._input_type == "messages":
    if isinstance(input, list):
        messages = input
    else:
        messages = [{"role": "user", "content": str(input)}]
else:
    # Convert text input to messages format
    if isinstance(input, str):
        messages = [{"role": "user", "content": input}]
    else:
        messages = [{"role": "user", "content": str(input)}]
The logic to convert input to the `messages` format can be simplified. The branches for `self._input_type == "messages"` (when the input is not a list) and the `else` branch are performing the same operation. Consolidating them will make the code more concise and easier to maintain:

if self._input_type == "messages" and isinstance(input, list):
    messages = input
else:
    # Handles text input or non-list messages input by wrapping in user role
    messages = [{"role": "user", "content": str(input)}]
@@ -45,6 +45,7 @@
   "providerOpenAI": "OpenAI",
   "providerOpenRouter": "OpenRouter",
   "providerOllama": "Ollama (로컬)",
+  "providerXai": "xAI",
Support Grok/xAI Client.
Implementing this because Grok 4 seems pretty impressive/heavy/bold!