
fix: retry when model returns empty response after tool execution#4982

Open

akashbangad wants to merge 5 commits into google:main from akashbangad:fix/empty-model-response-retry

Conversation

@akashbangad
Contributor

Bug

Some models (notably Claude, and some Gemini preview models) return an empty content array (parts: []) after processing tool results. ADK's is_final_response() treats this as a valid completed turn because it only checks for the absence of function calls — not the presence of actual content. The agent loop stops and the user sees nothing.

Observed with:

  • Claude (Opus/Sonnet/Haiku) via AnthropicLlm — after run_shell, computer_use tool results
  • Gemini preview models — after tool execution with streaming enabled

Example session history showing the bug:

Event 19: agent calls run_shell({"command": "cloudflared --version"})
Event 20: tool responds: {"output": "cloudflared version 2026.3.0", "exit_code": 0}
Event 21: agent responds with parts: [] ← EMPTY, agent loop ends, user sees nothing

Root Cause

In BaseLlmFlow.run_async() (line 757):

if not last_event or last_event.is_final_response() or last_event.partial:
    break

And is_final_response() in event.py:

return (
    not self.get_function_calls()
    and not self.get_function_responses()
    and not self.partial
    and not self.has_trailing_code_execution_result()
)

An event with parts: [] passes all these checks — no function calls, no function responses, not partial — so is_final_response() returns True and the loop breaks.

Fix

Added a retry mechanism in BaseLlmFlow.run_async():

  1. _has_meaningful_content(event) — helper that checks if an event actually contains content worth showing (non-empty text, function calls, inline data, etc.)
  2. When is_final_response() is True but the event has no meaningful content, the loop continues instead of breaking, re-prompting the model
  3. A maximum retry count (_MAX_EMPTY_RESPONSE_RETRIES = 2) prevents infinite loops if the model keeps returning empty responses

Tests

Added 10 new tests in test_empty_response_retry.py:

_has_meaningful_content tests (7):

  • test_no_content — None content → not meaningful
  • test_empty_parts — parts: [] → not meaningful
  • test_only_empty_text_part — text="" → not meaningful
  • test_only_whitespace_text_part — text=" \n " → not meaningful
  • test_non_empty_text — actual text → meaningful
  • test_function_call — function call → meaningful
  • test_function_response — function response → meaningful

Integration tests (3):

  • test_empty_response_retried_then_succeeds — empty response triggers retry, second call succeeds
  • test_empty_response_stops_after_max_retries — stops after max retries to prevent infinite loop
  • test_non_empty_response_not_retried — normal responses are not retried

All 10 tests pass. All 356 pre-existing flows/llm_flows/ tests pass.

pytest tests/unittests/flows/llm_flows/test_empty_response_retry.py -v

Closes #3754
Related: #3467, #4090, #3034

🤖 Generated with Claude Code

@adk-bot adk-bot added the core [Component] This issue is related to the core interface and implementation label Mar 24, 2026
@rohityan rohityan self-assigned this Mar 24, 2026
@akashbangad
Contributor Author

@rohityan Can we get this PR reviewed?

@rohityan
Collaborator

Hi @akashbangad, thank you for your contribution! We appreciate you taking the time to submit this pull request. Can you please fix the failing unit tests?

@rohityan rohityan added the request clarification [Status] The maintainer need clarification or more information from the author label Mar 31, 2026

Development

Successfully merging this pull request may close these issues.

streaming=True in /run_sse returns empty text after AgentTool calls (works with streaming=False)
