update @google/genai dependency, add thoughtSignature support and fix content extraction #16664
Conversation
|
I did not get the above mentioned errors anymore, but it hangs for longer runs.

Longer runs: I get these warnings:
2025-11-26T22:39:02.138Z root WARN there are non-text parts functionCall in the response, returning concatenation of all text parts. Please refer to the non text parts for a full response from model.
2025-11-26T22:38:56.893Z root WARN there are non-text parts functionCall,thoughtSignature in the response, returning concatenation of all text parts. Please refer to the non text parts for a full response from model.
Test case for long runs: "Add a button to my token usage view to reset the token count"

Short runs: First message works, but on the second I get:
Test case for short runs:
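The warning above is emitted whenever the aggregated text accessor is used on a response that also contains non-text parts (functionCall, thoughtSignature). A way to avoid it is to iterate the response parts directly and concatenate only the plain text ones. The following is a minimal sketch under assumed, simplified types — the real Part type comes from @google/genai and the actual extraction lives in Theia's Google language model provider:

```typescript
// Simplified stand-in for the @google/genai Part type (assumption, not the real interface).
interface Part {
    text?: string;
    thought?: boolean;                               // true for thought-summary parts
    functionCall?: { name: string; args: object };   // tool-call parts carry no regular text
    thoughtSignature?: string;
}

// Collect only the regular text parts, skipping tool calls and thought summaries,
// instead of reading the aggregated text accessor (which logs the WARN above).
function extractTextParts(parts: Part[]): string {
    return parts
        .filter(part => part.text !== undefined && !part.thought)
        .map(part => part.text)
        .join('');
}
```

This also makes it possible to route thought-summary parts to a separate UI channel instead of mixing them into the answer text.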
|
|
Hi again @JonasHelming, thanks for testing the first attempt! In general, the Gemini models feel slower to me compared to others, but I haven't used them very often in the past, so I don't have a solid basis for comparison.
|
I will test this afternoon. First try got the error below (which might be unrelated to us):

{
  "error": {
    "message": "{ \"error\": { \"code\": 500, \"message\": \"An internal error has occurred. Please retry or report in https://developers.generativeai.google/guide/troubleshooting\", \"status\": \"INTERNAL\" } }",
    "code": 500,
    "status": "Internal Server Error"
  }
}
|
@coder add a button to the token usage view to reset the token count

With this prompt it still hangs, and then I get the two warnings in the console: "Please refer to the non text parts for a full response from model." I also have the feeling it takes very long; maybe it retries or thinks too much? The simple "@coder Add a new line to the readme "fooba"
I somehow have the feeling that there is something wrong with the tool response; for example, this would explain why it applies the change three times (it does not get feedback)?
|
Review request was opened and already approved for the updated dependency:
[main] INFO A review request was created https://gitlab.eclipse.org/eclipsefdn/emo-team/iplab/-/issues/25016
|
I have the same issue as @JonasHelming. When I send a non-trivial request to Gemini 3 with
When using Gemini 2, I easily run into errors like this:
So sadly, I don't think it really improves on the current state.
|
Thanks for testing!
|
Next variant! But the default on master is now using the next variant; we made the new functions the default.
|
Ah I see, I hadn't reset my variant in my runtime yet; that would explain why it worked for me.
|
I don't think it's really Coder prompt related. The Gemini 2 models seem rather lazy to me. I invoked
|
What I can also observe: if I change the
|
Summary of my findings so far: I made a small rework to integrate thought summaries (temporarily), merge part text fields, and enable thought summaries for more insights. Also, with these changes, the warnings about
I'm still running into several issues with my Gemini model implementation:
@eneufeld @sdirix
PS: @JonasHelming I think there is no immediate need to retest now; we can ping you again if there is more progress.
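One of the suspected problems in this thread is the model losing its reasoning context across tool-use cycles, which preserving the thoughtSignature is meant to address: the signature received with a functionCall part must be re-attached when that call is replayed in the conversation history. A minimal sketch with assumed, simplified shapes (the real types are Theia's ToolCallChatResponseContent and the @google/genai Part; the helper name is illustrative):

```typescript
// Assumed, simplified shape — not the actual Theia type.
interface ToolCall {
    name: string;
    args: Record<string, unknown>;
    data?: { thoughtSignature?: string };  // provider-specific metadata slot
}

// When replaying a previous tool call back to Gemini, re-attach the
// thoughtSignature so the model can reconnect its earlier reasoning.
function toFunctionCallPart(call: ToolCall): Record<string, unknown> {
    const part: Record<string, unknown> = {
        functionCall: { name: call.name, args: call.args }
    };
    if (call.data?.thoughtSignature) {
        part.thoughtSignature = call.data.thoughtSignature;
    }
    return part;
}
```

Dropping the signature here would be consistent with the observed symptoms (repeated edits, the model seemingly not registering tool feedback), though the thread does not confirm this as the root cause.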
|
Hi again, @eneufeld, @JonasHelming, @sdirix |
eneufeld
left a comment
Change looks good to me. I tested it with TheiaDev and TheiaDevCoder. This worked fine.
sdirix
left a comment
Works great for me. I left some comments. Feel free to tackle them if you want, but none of them is a blocker for merge.
thinkingConfig: {
    // https://ai.google.dev/gemini-api/docs/thinking#summaries
    includeThoughts: true,
},
Do we set thinking to "on" on purpose? It's a bit cumbersome to turn it off again:
"ai-features.modelSettings.requestSettings": [
{
"scope": {
"providerId": "google"
},
"requestSettings": {
"thinkingConfig": {
"includeThoughts": false
}
}
}
]
On the other hand, turning it on would be equally difficult, so I am fine either way.
Yes, I mainly added it to get to know the model better, but I agree it might not be for everyone. I'll create a follow-up to make it configurable via the UI (see GH-16642).
Resolves GH-16640

Following the @google/genai 1.30.0 SDK update, this commit adds proper support for Gemini's thinking feature and improves response handling.
- Enable thinking mode via thinkingConfig.includeThoughts for streaming
- Preserve thoughtSignature when converting tool calls to maintain conversation context across tool use cycles
- Extract text content from response parts instead of using chunk.text to correctly distinguish between thinking and regular content
- Fix functionResponse format to match Gemini API requirements (name + response object, without id field)
- Add data field to ToolCall/ToolCallChatResponseContent for passing provider-specific metadata like thoughtSignature
- Handle MALFORMED_FUNCTION_CALL as warning instead of error
- Wrap non-object tool results in { result: ... } for Gemini compliance
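The functionResponse format fix and the wrapping of non-object tool results can be sketched together. This is an illustrative helper, not the actual Theia implementation; the constraint it encodes (an object-valued response keyed by name, with no id field) is the Gemini API requirement described above:

```typescript
// Illustrative helper — not the actual Theia implementation.
// Gemini expects { name, response } where response is an object and there is
// no `id` field; non-object tool results therefore get wrapped in { result: ... }.
function toFunctionResponse(name: string, result: unknown): { name: string; response: Record<string, unknown> } {
    const isPlainObject =
        typeof result === 'object' && result !== null && !Array.isArray(result);
    const response = isPlainObject
        ? result as Record<string, unknown>
        : { result };  // wrap primitives and arrays for Gemini compliance
    return { name, response };
}
```

Object-valued results pass through unchanged, so well-behaved tools are unaffected by the wrapping.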
|
Thanks for the reviews!
import { GoogleGenAI, FunctionCallingConfigMode, FunctionDeclaration, Content, Schema, Part, Modality, FunctionResponse, ToolConfig } from '@google/genai';
import { wait } from '@theia/core/lib/common/promise-util';
import { GoogleLanguageModelRetrySettings } from './google-language-models-manager-impl';
import { UUID } from '@theia/core/shared/@lumino/coreutils';
I meant generateUuid from @theia/core ;)
Ah I see, thanks! I'll take a note of that for the follow-up then :)
What it does
Resolves GH-16640
Update @google/genai dependency to 1.30.0
Following the @google/genai 1.30.0 SDK update, this commit adds proper support for Gemini's thinking feature and improves response handling.
How to test
Follow-ups
Breaking changes
Attribution
Review checklist
nls service (for details, please see the Internationalization/Localization section in the Coding Guidelines)
Reminder for reviewers