Description
pipecat version
Latest main branch (also verified in 0.0.76+)
Python version
3.12
Operating System
Linux (CI) / macOS
Issue description
When an LLM returns a streaming tool call with no arguments (arguments=null in the delta chunks), the tool call is silently dropped — no error, no warning. The tool handler is never invoked.
This happens in `BaseOpenAILLMService._process_context` (`base_llm.py`):
- Line 544: `if tool_call.function and tool_call.function.arguments:` — when `arguments` is `None`, this check is falsy, so nothing is accumulated. The local `arguments` variable stays as `""`.
- Line 561: `if function_name and arguments:` — the empty string `""` is falsy, so the entire tool call execution block is skipped. `run_function_calls` is never called.
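The failure mode can be reproduced in isolation. The sketch below is a simplified model of the accumulation loop (names and chunk shapes are hypothetical stand-ins, not the actual pipecat source), showing how a `None` arguments delta leaves `arguments` as `""` and causes the final guard to skip execution:

```python
import json

def process_tool_call_deltas(chunks):
    """Simplified sketch of streaming tool-call accumulation (hypothetical shapes)."""
    function_name = ""
    arguments = ""
    for chunk in chunks:
        tool_call = chunk["tool_call"]
        if tool_call.get("name"):
            function_name += tool_call["name"]
        # Bug: when arguments is None, this branch never runs,
        # so `arguments` stays as the falsy empty string.
        if tool_call.get("arguments"):
            arguments += tool_call["arguments"]
    # Bug: "" is falsy, so the execution block is skipped entirely.
    if function_name and arguments:
        return ("called", function_name, json.loads(arguments))
    return ("dropped", function_name, None)

# vLLM-style stream: the name arrives, but arguments is None throughout.
vllm_chunks = [{"tool_call": {"name": "end_call", "arguments": None}}]
print(process_tool_call_deltas(vllm_chunks))    # ('dropped', 'end_call', None)

# OpenAI-style stream: arguments is the literal string "{}".
openai_chunks = [{"tool_call": {"name": "end_call", "arguments": "{}"}}]
print(process_tool_call_deltas(openai_chunks))  # ('called', 'end_call', {})
```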
OpenAI masks this bug because it always sends `arguments: "{}"` (an empty JSON object string), even for tools with zero parameters. But other OpenAI-compatible providers behave differently:
- vLLM (with Qwen, Llama, Mistral, etc.) omits `arguments` entirely when the tool schema has no required properties
- Ollama and other local providers may exhibit the same behavior
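For comparison, here are hypothetical minimal delta payloads (field layout abbreviated, not captured from real responses) showing why the same truthiness check passes for OpenAI but fails for vLLM:

```python
# OpenAI always sends the arguments field as a JSON string, "{}" at minimum.
openai_delta = {"function": {"name": "end_call", "arguments": "{}"}}

# vLLM omits the field for zero-parameter tools, which deserializes to None.
vllm_delta = {"function": {"name": "end_call", "arguments": None}}

# The guard `if tool_call.function and tool_call.function.arguments:` reduces to:
print(bool(openai_delta["function"]["arguments"]))  # True: "{}" is truthy
print(bool(vllm_delta["function"]["arguments"]))    # False: None is falsy, chunk ignored
```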
This is a significant issue because:
- Tools with no required parameters (e.g., `end_call`, `hang_up`, `get_status`) are common
- The failure is completely silent — no error logged, no exception raised
- It's hard to debug because the LLM does correctly identify and call the tool; pipecat just ignores it
Reproduction steps
1. Serve any model via vLLM with `--enable-auto-tool-choice --tool-call-parser hermes`
2. Register a tool with no required parameters:

```python
tools = [{
    "type": "function",
    "function": {
        "name": "end_call",
        "description": "End the call",
        "parameters": {"type": "object", "properties": {}, "required": []},
    },
}]
```

3. Send a prompt that triggers the tool call (e.g., "End the call")
4. The streaming response includes `tool_calls` with `function.name = "end_call"` and `finish_reason = "tool_calls"`, but no arguments chunks
5. The tool handler is never invoked
Expected behavior
Tool calls with no arguments should be treated as having `arguments = {}` (an empty dict), and the registered handler should be invoked.
Actual behavior
The tool call is silently dropped. `run_function_calls` is never called. No error or warning is logged.
Suggested fix
A three-line change in `base_llm.py`:
| Line | Before | After |
|---|---|---|
| 535 | `arguments_list.append(arguments)` | `arguments_list.append(arguments or "{}")` |
| 561 | `if function_name and arguments:` | `if function_name:` |
| 564 | `arguments_list.append(arguments)` | `arguments_list.append(arguments or "{}")` |
This defaults empty or missing arguments to `"{}"`, so `json.loads` produces an empty dict. There is no behavior change for OpenAI (which already sends `"{}"`), and it fixes the silent drop for OpenAI-compatible providers that omit the field.