Tool calls with empty/null arguments silently dropped (affects vLLM, Ollama, and other OpenAI-compatible providers) #4059

@alpsencer

Description

pipecat version

Latest main branch (also verified in 0.0.76+)

Python version

3.12

Operating System

Linux (CI) / macOS

Issue description

When an LLM returns a streaming tool call with no arguments (arguments=null in the delta chunks), the tool call is silently dropped — no error, no warning. The tool handler is never invoked.

This happens in BaseOpenAILLMService._process_context (base_llm.py):

  1. Line 544: if tool_call.function and tool_call.function.arguments: — when arguments is None, this check is falsy, so nothing is accumulated. The local arguments variable stays as "".

  2. Line 561: if function_name and arguments: — the empty string "" is falsy, so the entire tool call execution block is skipped. run_function_calls is never called.
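The two guards can be sketched in isolation (a simplified illustration mirroring the logic described above, not the actual pipecat source; `delta_arguments` stands in for `tool_call.function.arguments`):

```python
arguments = ""  # accumulator for streamed argument fragments, starts empty

delta_arguments = None  # what vLLM sends for a tool with no parameters
if delta_arguments:  # the line-544 check: None is falsy, nothing accumulates
    arguments += delta_arguments

function_name = "end_call"
# The line-561 check: arguments is still "", which is falsy, so the
# execution block is skipped and run_function_calls is never reached.
print(bool(function_name and arguments))  # → False
```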

OpenAI masks this bug because it always sends arguments: "{}" (empty JSON object string) even for tools with zero parameters. But other OpenAI-compatible providers behave differently:

  • vLLM (with Qwen, Llama, Mistral, etc.) omits arguments entirely when the tool schema has no required properties
  • Ollama and other local providers may exhibit the same behavior

This is a significant issue because:

  • Tools with no required parameters (e.g., end_call, hang_up, get_status) are common
  • The failure is completely silent — no error logged, no exception raised
  • It's hard to debug because the LLM does correctly identify and call the tool; pipecat just ignores it

Reproduction steps

  1. Serve any model via vLLM with --enable-auto-tool-choice --tool-call-parser hermes
  2. Register a tool with no required parameters:
    tools = [{"type": "function", "function": {"name": "end_call", "description": "End the call", "parameters": {"type": "object", "properties": {}, "required": []}}}]
  3. Send a prompt that triggers the tool call (e.g., "End the call")
  4. The streaming response includes tool_calls with function.name = "end_call" and finish_reason = "tool_calls", but no arguments chunks
  5. The tool handler is never invoked
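The failure can also be simulated offline, without a live vLLM server. This is a hedged sketch: the chunk dicts imitate the streamed deltas from step 4 (real chunks are pydantic objects, not dicts), and `process` is a simplified stand-in for the accumulation loop in `_process_context`:

```python
import json

# Streamed deltas as vLLM's hermes parser emits them for a no-argument tool:
# the name arrives, but "arguments" is None and no argument chunks follow.
chunks = [
    {"tool_calls": [{"index": 0, "id": "call_1",
                     "function": {"name": "end_call", "arguments": None}}]},
    {"tool_calls": [], "finish_reason": "tool_calls"},
]

invoked = []  # records every handler invocation

def end_call_handler(args: dict) -> None:
    invoked.append(args)

def process(chunks) -> None:
    """Simplified mirror of the accumulation loop in _process_context."""
    function_name, arguments = "", ""
    for chunk in chunks:
        for tc in chunk.get("tool_calls") or []:
            fn = tc["function"]
            if fn.get("name"):
                function_name += fn["name"]
            if fn.get("arguments"):  # falsy for None: never accumulates
                arguments += fn["arguments"]
    if function_name and arguments:  # "" is falsy: execution block skipped
        end_call_handler(json.loads(arguments))

process(chunks)
print(invoked)  # → [] — the handler is never invoked, with no error or warning
```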

Expected behavior

Tool calls with no arguments should be treated as having arguments = {} (empty dict) and the registered handler should be invoked.

Actual behavior

The tool call is silently dropped. run_function_calls is never called. No error or warning is logged.

Suggested fix

A three-line change in base_llm.py:

  Line | Before                           | After
  -----|----------------------------------|-----------------------------------------
  535  | arguments_list.append(arguments) | arguments_list.append(arguments or "{}")
  561  | if function_name and arguments:  | if function_name:
  564  | arguments_list.append(arguments) | arguments_list.append(arguments or "{}")

This defaults empty arguments to "{}" so json.loads produces an empty dict. No behavior change for OpenAI (which already sends "{}"). Fixes all OpenAI-compatible providers.
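The effect of the `or "{}"` default can be checked in isolation (`parse_arguments` is a hypothetical helper for illustration, not a pipecat function):

```python
import json

def parse_arguments(arguments):
    # None and "" both fall through to "{}", so json.loads yields an empty
    # dict — identical to what OpenAI's explicit arguments: "{}" produces.
    return json.loads(arguments or "{}")

print(parse_arguments(None))        # → {}
print(parse_arguments(""))          # → {}
print(parse_arguments('{"x": 1}'))  # → {'x': 1}
```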
