Is your feature request related to a problem? Please describe.
When an LLM-based agent calls multiple tools in parallel, it must make a second LLM call to aggregate their results. If the application also requires post-processing (output validation, formatting, or auditing), this results in two extra LLM calls after tool execution:
User: "Turn off lights and play music"
↓
Agent LLM Call #1 (decides which tools to call)
↓
Parallel execution: DeviceControlTool + MusicControlTool
↓
Agent LLM Call #2 (aggregates results) ← Extra LLM call!
↓
ResultProcessor / Reviewer Agent (validates, formats, audits output) ← Another LLM call!
↓
Final Output
Problems with this approach:
- Increased latency - Users wait 1-3 seconds for unnecessary LLM round-trips
- Higher cost - Every request incurs 2 extra API calls
- Coupled concerns - Agent's instruction becomes complex, mixing routing logic with output formatting
- Post-processing requirements - Many scenarios need post-processing of tool results, such as output validation, safety checks, format conversion, content auditing, or quality control
The aggregation step is often unnecessary - tool results could be sent directly to a post-processor, skipping the agent's second LLM call entirely.
Describe the solution you'd like
Add an optional ResultProcessor field to ToolsConfig that, when configured, routes tool results directly to a processing agent instead of returning them to the parent ChatModel:
User: "Turn off lights and play music"
↓
Agent LLM Call #1 (decides which tools to call)
↓
Parallel execution: DeviceControlTool + MusicControlTool
↓
ResultProcessor Agent (directly processes tool results) ← Just ONE LLM call!
↓
Final Output
Proposed API
```go
// adk/chatmodel.go
type ToolsConfig struct {
    compose.ToolsNodeConfig

    // Existing fields...
    ReturnDirectly     map[string]bool
    EmitInternalEvents bool

    // NEW: Optional result processor.
    // When set, tool results are sent directly to this agent
    // instead of returning to the parent ChatModel.
    ResultProcessor Agent
}
```

Usage Example
```go
// Create a result processor for output validation
resultProcessor, _ := adk.NewChatModelAgent(ctx, &adk.ChatModelAgentConfig{
    Name:        "OutputValidator",
    Description: "Validates and formats agent responses",
    Instruction: `
You are an output validator. Your task:
1. Review the tool execution results
2. Ensure all user requests were addressed
3. Format the response clearly and concisely
4. Remove any internal system messages
5. Return ONLY the final formatted response
`,
    Model: gpt4Mini, // can use a smaller/faster model
})

// Configure the main agent with the result processor
assistant, _ := adk.NewChatModelAgent(ctx, &adk.ChatModelAgentConfig{
    Name:  "Assistant",
    Model: gpt4o,
    ToolsConfig: adk.ToolsConfig{
        ToolsNodeConfig: compose.ToolsNodeConfig{
            Tools: []tool.BaseTool{
                NewDeviceControlTool(),
                NewMusicControlTool(),
            },
        },
        ResultProcessor: resultProcessor, // ← set the result processor
    },
})
```

Benefits
| Benefit | Description |
|---|---|
| Reduced LLM calls | Eliminates the second aggregation call |
| Lower latency | Saves ~500ms-2s per request |
| Cost savings | One fewer API call per request |
| Better separation | Main agent focuses on routing, ResultProcessor on output quality |
| Model flexibility | ResultProcessor can use a smaller/faster model |
| Backward compatible | Optional feature, existing code unaffected |
Describe alternatives you've considered
Alternative 1: ResultProcessor as a regular tool
Add ResultProcessor as a regular AgentTool and instruct the main agent to call it after other tools.
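For illustration, a rough sketch of what this alternative would look like, reusing the identifiers from the usage example above and assuming a helper along the lines of adk.NewAgentTool that exposes an Agent as a tool.BaseTool (the helper name and its signature are assumptions, not confirmed API):

```go
// Hypothetical: wrap the result processor so the main agent can call it as a tool.
// adk.NewAgentTool and its signature are assumptions for illustration only.
processorTool := adk.NewAgentTool(ctx, resultProcessor)

assistant, _ := adk.NewChatModelAgent(ctx, &adk.ChatModelAgentConfig{
    Name:  "Assistant",
    Model: gpt4o,
    // Routing and formatting concerns end up mixed in one instruction.
    Instruction: "After the device/music tools finish, ALWAYS call OutputValidator " +
        "with their results before answering.",
    ToolsConfig: adk.ToolsConfig{
        ToolsNodeConfig: compose.ToolsNodeConfig{
            Tools: []tool.BaseTool{
                NewDeviceControlTool(),
                NewMusicControlTool(),
                processorTool, // the model has to remember to call this itself
            },
        },
    },
})
```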
Drawbacks:
- Requires the agent to explicitly call the tool (can be forgotten)
- Adds complexity to agent instructions
- Still requires an LLM call to decide to call the result processor
Alternative 2: DeterministicTransfer
Use AgentWithDeterministicTransferTo to always transfer to a result processor.
Drawbacks:
- ResultProcessor has limited access to original context
- Transfer is one-way, cannot return to the main agent if needed
- Designed for different use case (agent chaining)
Alternative 3: Application-layer post-processing
Handle result processing in application logic after agent execution.
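Concretely, this alternative amounts to something like the following sketch. The function name, prompt, and parameters are illustrative; it only assumes eino's model.BaseChatModel.Generate and the schema message helpers, and it takes the agent's final answer as a plain string that the application has already extracted.

```go
package postprocess

import (
    "context"

    "github.com/cloudwego/eino/components/model"
    "github.com/cloudwego/eino/schema"
)

// postProcess is hypothetical application-layer code: it rebuilds context by hand
// and re-validates the agent's final answer with a separate chat model call,
// outside the agent framework (so no shared callbacks or observability).
func postProcess(ctx context.Context, validator model.BaseChatModel, userQuery, agentAnswer string) (string, error) {
    msgs := []*schema.Message{
        schema.SystemMessage("Validate and format the assistant's answer. Return only the final response."),
        schema.UserMessage("User request: " + userQuery + "\nAssistant answer: " + agentAnswer),
    }
    out, err := validator.Generate(ctx, msgs)
    if err != nil {
        return "", err
    }
    return out.Content, nil
}
```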
Drawbacks:
- Bypasses agent framework, losing observability
- Requires manual context construction
- Not reusable across different applications
Additional context
Implementation Sketch
Files to modify:
- `adk/chatmodel.go` - Add `ResultProcessor` field to `ToolsConfig`
- `adk/react.go` - Modify `newReact()` to add a ResultProcessor node when configured
- `adk/result_processor.go` (new) - Helper functions for ResultProcessor execution
React graph changes:
When ResultProcessor is configured, the execution graph changes from:
```
START → ChatModel → ToolNode → ChatModel → END
            ↑           │
            └───────────┘
```
To:
```
START → ChatModel → ToolNode → ResultProcessor → END
            ↑           │
            └───────────┘
```
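To make the graph change concrete, here is a rough sketch of the proposed topology using eino's compose API (AddChatModelNode, AddToolsNode, AddLambdaNode, AddBranch). Node keys are illustrative, the loop for additional tool rounds is omitted, and the lambda standing in for the ResultProcessor glosses over how an adk.Agent would actually be bridged into the graph; this is not the real adk/react.go implementation.

```go
package react

import (
    "context"

    "github.com/cloudwego/eino/components/model"
    "github.com/cloudwego/eino/compose"
    "github.com/cloudwego/eino/schema"
)

// buildGraph sketches the proposed topology only. runProcessor stands in for
// invoking the configured ResultProcessor agent; error handling is omitted.
func buildGraph(
    ctx context.Context,
    chatModel model.BaseChatModel,
    toolsNode *compose.ToolsNode,
    runProcessor func(ctx context.Context, toolResults []*schema.Message) (*schema.Message, error),
) (compose.Runnable[[]*schema.Message, *schema.Message], error) {
    g := compose.NewGraph[[]*schema.Message, *schema.Message]()

    _ = g.AddChatModelNode("chat_model", chatModel)
    _ = g.AddToolsNode("tool_node", toolsNode)
    _ = g.AddLambdaNode("result_processor", compose.InvokableLambda(runProcessor))

    _ = g.AddEdge(compose.START, "chat_model")

    // If the model requested tool calls, run them; otherwise finish directly.
    _ = g.AddBranch("chat_model", compose.NewGraphBranch(
        func(ctx context.Context, msg *schema.Message) (string, error) {
            if len(msg.ToolCalls) > 0 {
                return "tool_node", nil
            }
            return compose.END, nil
        },
        map[string]bool{"tool_node": true, compose.END: true},
    ))

    // Proposed change: tool results flow to the ResultProcessor instead of
    // looping back to the ChatModel for an aggregation call.
    _ = g.AddEdge("tool_node", "result_processor")
    _ = g.AddEdge("result_processor", compose.END)

    return g.Compile(ctx)
}
```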
Questions for Discussion
1. Naming: Is `ResultProcessor` the right name? Alternatives: `OutputHandler`, `ResponseAggregator`, `PostProcessor`
2. Input format: What should the ResultProcessor receive? (See the sketch after this list.)
   - Option A: Raw tool results only
   - Option B: Original user input + tool results
   - Option C: Full conversation history
3. Streaming support: Should ResultProcessor support streaming? Or only batch processing?
4. Error handling: What happens when ResultProcessor fails? Fallback to original behavior?
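To make the input-format question concrete, a small snippet of what Option B might look like, assuming eino's schema helpers (the message contents and tool-call IDs are made up for illustration, and the exact layout is exactly what is up for discussion):

```go
// Option B, illustratively: the ResultProcessor receives the original user
// request plus the tool result messages.
input := []*schema.Message{
    schema.UserMessage("Turn off lights and play music"),
    schema.ToolMessage(`{"device":"lights","status":"off"}`, "call_lights"),
    schema.ToolMessage(`{"player":"music","status":"playing"}`, "call_music"),
}
```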
Willing to Contribute
Yes! I'm willing to implement this feature and submit a PR if the design is approved.