What happened?
Bug Description
When using Bedrock models via LiteLLM proxy with both tools and response_format parameters, LiteLLM converts the response_format into a fake tool called json_tool_call. Bedrock then returns this fake tool alongside real tool calls, which breaks the OpenAI Agents SDK.
Error: Tool json_tool_call not found in agent <AgentName>
Expected Behavior
When tools and response_format are used together:
- Real tools should be called normally
- Structured output should be returned in the final response
- No fake json_tool_call tool should appear in the response
This is how OpenAI models behave natively.
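To make the failing combination concrete, here is a sketch of the request body that triggers the bug: both `tools` and `response_format` in one Chat Completions call. The values mirror the reproduction in this report; the model name is the proxy alias used below.

```python
# Chat Completions request combining tools and response_format.
# "bedrock-assistant" is the LiteLLM proxy alias from this report.
request_body = {
    "model": "bedrock-assistant",
    "messages": [{"role": "user", "content": "What's the weather in New York?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather_forecast",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
            },
        },
    }],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "WeatherOutput",
            "schema": {
                "type": "object",
                "properties": {
                    "message": {"type": "string"},
                    "location_ids": {"type": "array", "items": {"type": "string"}},
                },
                "required": ["message", "location_ids"],
            },
        },
    },
}
```

With an OpenAI model this request succeeds; routed to Bedrock through the LiteLLM proxy, it produces the extra `json_tool_call` shown below.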
Actual Behavior
LiteLLM converts response_format → fake tool json_tool_call, and Bedrock returns both:
"tool_calls": [
{
"function": {
"name": "json_tool_call", // ❌ FAKE (added by LiteLLM)
"arguments": "{\"message\": \"...\", \"location_ids\": [...]}"
}
},
{
"function": {
"name": "get_weather_forecast", // ✅ REAL
"arguments": "{\"location\": \"New York\"}"
}
}
]The OpenAI Agents SDK doesn't recognize json_tool_call and crashes.
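As a client-side mitigation (not a fix in LiteLLM itself), the synthetic entry could be stripped before the response reaches the Agents SDK. A minimal sketch, assuming the response has already been parsed into dicts; the helper name is hypothetical:

```python
import json

def split_fake_json_tool_call(tool_calls):
    """Separate LiteLLM's synthetic json_tool_call from real tool calls.

    Returns (real_tool_calls, structured_output), where structured_output
    is the parsed arguments of the fake call, or None if it is absent.
    """
    real, structured = [], None
    for call in tool_calls:
        if call["function"]["name"] == "json_tool_call":
            structured = json.loads(call["function"]["arguments"])
        else:
            real.append(call)
    return real, structured

calls = [
    {"function": {"name": "json_tool_call",
                  "arguments": '{"message": "Sunny", "location_ids": ["nyc"]}'}},
    {"function": {"name": "get_weather_forecast",
                  "arguments": '{"location": "New York"}'}},
]
real, structured = split_fake_json_tool_call(calls)
# real keeps only get_weather_forecast; structured holds the schema output
```

This only papers over the symptom; the real tool call and the structured output still arrive in the same turn, which is not how the SDK expects the conversation to flow.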
Test Results
| Scenario | Bedrock | OpenAI |
|---|---|---|
| Tools only | ✅ Works | ✅ Works |
| Structured output only | ✅ Works | ✅ Works |
| Both together | ❌ Fails | ✅ Works |
Reproduction
I've created a complete reproduction repository with detailed analysis:
Repository: https://github.com/haggaishachar/openai-agents-bedrock-litellm
Quick Reproduction Steps
- Clone the repo
- Install dependencies: make install
- Set environment variables (AWS credentials + OpenAI API key)
- Start LiteLLM proxy: make run-litellm-proxy
- Run test: make run
The test runs identical code against both OpenAI and Bedrock models - only the model name differs. OpenAI passes, Bedrock fails.
Minimal Code Example
import asyncio

from agents import Agent, Runner, function_tool
from pydantic import BaseModel

class WeatherOutput(BaseModel):
    message: str
    location_ids: list[str]

@function_tool
def get_weather_forecast(location: str = "New York") -> str:
    return f"Weather for {location}: Sunny, 72°F"

agent = Agent(
    name="Weather Assistant",
    model="bedrock-assistant",  # Via LiteLLM proxy
    tools=[get_weather_forecast],
    output_type=WeatherOutput,  # Structured output
)

# This crashes with: Tool json_tool_call not found
result = asyncio.run(Runner.run(agent, "What's the weather in New York?"))
Root Cause
Bedrock doesn't natively support OpenAI's response_format parameter. LiteLLM emulates it by converting the JSON schema into a fake tool called json_tool_call. This works when used alone, but conflicts with real tools.
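Roughly, the emulation turns the response schema into an extra tool definition. The sketch below is an approximation of that conversion for illustration, not LiteLLM's actual code:

```python
def response_format_to_tool(response_format):
    """Approximate the response_format -> tool conversion LiteLLM performs
    for providers without native structured output (illustrative only)."""
    schema = response_format["json_schema"]["schema"]
    return {
        "type": "function",
        "function": {
            "name": "json_tool_call",
            "description": "Respond with a JSON object matching this schema.",
            "parameters": schema,
        },
    }

fmt = {
    "type": "json_schema",
    "json_schema": {
        "name": "WeatherOutput",
        "schema": {
            "type": "object",
            "properties": {"message": {"type": "string"}},
        },
    },
}
fake_tool = response_format_to_tool(fmt)
# The fake tool is appended to the real tools list sent to Bedrock, so the
# model may call it alongside (or instead of) real tools.
```

Used alone, the model's call to this fake tool is unwrapped back into content; combined with real tools, the unwrapping no longer applies cleanly and the fake call leaks into tool_calls.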
Environment
- LiteLLM Version: v1.80.11
- OpenAI Agents SDK: v0.0.19
- Model: bedrock/us.anthropic.claude-sonnet-4-5-20250929-v1:0
- Python: 3.12
- OS: macOS 14.6
Related Issues
- Related discussion: Is LiteLLM compatible with OpenAI agents SDK? #9170 (OpenAI Agents SDK compatibility)
- Similar issue with different LLM provider: Tool Call doesn't work when using Structured Outputs w/ kimi-k2-instruct openai/openai-agents-python#1778
Relevant log output
✗ make rca
Installing dependencies with uv...
uv sync
Resolved 116 packages in 0.92ms
Audited 103 packages in 0.12ms
✅ All dependencies installed successfully
Checking if LiteLLM proxy is running...
✅ Proxy is running
Running minimal RCA demonstration...
================================================================================
BEDROCK: Testing with BOTH tools AND response_format
================================================================================
Sending request to Bedrock...
❌ ISSUE: Bedrock returned 2 tool call(s):
- json_tool_call
⚠️ This is the FAKE tool added by LiteLLM!
- get_weather_forecast
================================================================================
OPENAI: Testing with BOTH tools AND response_format (for comparison)
================================================================================
✅ OpenAI returned 1 tool call(s):
- get_weather_forecast
================================================================================
CONCLUSION
================================================================================
Bedrock: Returns 2 tool call(s): 'json_tool_call', 'get_weather_forecast'
⚠️ Includes 'json_tool_call' (FAKE tool added by LiteLLM)
OpenAI: Returns 1 tool call(s): 'get_weather_forecast'
The fake 'json_tool_call' is created by LiteLLM when it converts the
response_format parameter into a tool for Bedrock compatibility.
This breaks the OpenAI Agents SDK because it doesn't recognize 'json_tool_call'.
What part of LiteLLM is this about?
Proxy
What LiteLLM version are you on ?
1.78.5, 1.80.11