To support evaluation of models with interleaved thinking, the input messages should preserve the `reasoning_content` from previous turns so the model's reasoning stays consistent across the conversation.
The relevant code changes are as follows:
- Enable thinking when calling the model
- Support returning `reasoning_content` in `llm.py` and `schema.py`
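As context for the first point, thinking is usually switched on via a provider-specific request flag. A minimal sketch, assuming a vLLM-style OpenAI-compatible endpoint; the `chat_template_kwargs` / `enable_thinking` names are an assumption, not necessarily this repo's actual code:

```python
# Hypothetical request body enabling thinking on an OpenAI-compatible
# endpoint. The exact flag name is provider-specific; "enable_thinking"
# under "chat_template_kwargs" is an illustrative assumption.
request_body = {
    "model": "some-reasoning-model",  # placeholder model name
    "messages": [{"role": "user", "content": "Solve the task."}],
    "extra_body": {"chat_template_kwargs": {"enable_thinking": True}},
}
```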
mcp-atlas/services/mcp_eval/mcp_completion/llm.py, lines 75 to 79 in 867003a:

```python
assistant_message = AssistantMessage(
    role="assistant",
    content=response.choices[0].message.content,
    tool_calls=tool_calls,
)
```
mcp-atlas/services/mcp_eval/mcp_completion/schema.py, lines 36 to 41 in 867003a:

```python
class AssistantMessage(BaseModel):
    """Assistant message."""

    role: Literal["assistant"]
    content: Optional[str] = None
    tool_calls: Optional[List[ToolCall]] = None
```
Modified to:

```python
assistant_message = AssistantMessage(
    role="assistant",
    content=response.choices[0].message.content,
    tool_calls=tool_calls,
    reasoning_content=response.choices[0].message.reasoning_content,
)
```

```python
class AssistantMessage(BaseModel):
    """Assistant message."""

    role: Literal["assistant"]
    content: Optional[str] = None
    tool_calls: Optional[List[ToolCall]] = None
    reasoning_content: Optional[str] = None
```
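With the field stored, the remaining step is to carry it back into the message list for the next turn. A minimal sketch of that round-trip, using plain dicts instead of the repo's Pydantic models; the helper name and message shapes are illustrative assumptions:

```python
# Hypothetical helper: serialize an assistant turn back into the message
# list for the next model call, preserving reasoning_content so the model
# can continue its prior chain of thought across turns.
from typing import Any, Dict, List, Optional


def assistant_turn_to_message(
    content: Optional[str],
    tool_calls: Optional[List[Dict[str, Any]]],
    reasoning_content: Optional[str],
) -> Dict[str, Any]:
    message: Dict[str, Any] = {"role": "assistant", "content": content}
    if tool_calls:
        message["tool_calls"] = tool_calls
    if reasoning_content is not None:
        # Dropping this field is exactly the inconsistency the issue
        # describes: the model loses its reasoning from previous turns.
        message["reasoning_content"] = reasoning_content
    return message


history = [{"role": "user", "content": "Solve the task."}]
history.append(
    assistant_turn_to_message(
        content="Calling the tool now.",
        tool_calls=[{"id": "call_1", "type": "function",
                     "function": {"name": "search", "arguments": "{}"}}],
        reasoning_content="I should search before answering.",
    )
)
```

On the next completion call, `history` is passed as-is, so the model sees both its earlier tool call and the reasoning that led to it.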