
feat: add MiniMax as first-class LLM provider#98

Open
octo-patch wants to merge 1 commit into SakanaAI:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Add MiniMax M2.7 and M2.7-highspeed as first-class LLM providers across both the main LLM module and the treesearch backend.

Changes

  • ai_scientist/llm.py: Add MiniMax models to AVAILABLE_LLMS, create_client(), get_response_from_llm(), get_batch_responses_from_llm(), and make_llm_call() with temperature clamping (0.0, 1.0] and think-tag stripping
  • ai_scientist/treesearch/backend/__init__.py: Add MiniMax temperature clamping and think-tag stripping in the backend query router
  • ai_scientist/treesearch/backend/backend_openai.py: Add MiniMax client creation via MINIMAX_API_KEY env var and api.minimax.io base URL
  • README.md: Add MiniMax model documentation and API key setup instructions
  • tests/: Add 31 unit tests and 5 integration tests covering model detection, temperature clamping, think-tag stripping, client creation, and live API calls

MiniMax Models

| Model | Context Window | Description |
| --- | --- | --- |
| MiniMax-M2.7 | 204K tokens | Latest MiniMax model |
| MiniMax-M2.7-highspeed | 204K tokens | Optimized for speed |
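
Routing to these models can be sketched roughly as below. The helper name `is_minimax_model` is hypothetical; in the PR the detection lives inside `create_client()` and the backend query router.

```python
# Hypothetical helper illustrating how requests are routed to MiniMax.
# The model names match the two entries added to AVAILABLE_LLMS.
MINIMAX_MODELS = {"MiniMax-M2.7", "MiniMax-M2.7-highspeed"}

def is_minimax_model(model: str) -> bool:
    # Both M2.7 variants share the same OpenAI-compatible endpoint,
    # so a single membership check is enough to pick the MiniMax branch.
    return model in MINIMAX_MODELS
```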

Key Implementation Details

  • Uses OpenAI-compatible API via openai.OpenAI(base_url="https://api.minimax.io/v1")
  • Temperature automatically clamped to (0.0, 1.0] range required by MiniMax API
  • `<think>` tags stripped from M2.7 reasoning responses
  • No new dependencies required; reuses the existing openai SDK
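
The two normalization steps can be sketched as follows. This is a minimal illustration, not the PR's actual code: the function names and the 0.01 floor are assumptions.

```python
import re

# Matches a leading <think>...</think> reasoning block plus any
# whitespace that follows it.
_THINK_TAG_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def clamp_minimax_temperature(temperature: float) -> float:
    # MiniMax accepts temperature in (0.0, 1.0]; 0.0 itself is invalid,
    # so values at or below zero are raised to a small positive floor
    # (0.01 here is an assumed choice) and values above 1.0 are capped.
    return min(max(temperature, 0.01), 1.0)

def strip_think_tags(text: str) -> str:
    # M2.7 wraps its chain-of-thought in <think> tags; remove them so
    # callers only see the final answer.
    return _THINK_TAG_RE.sub("", text)
```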

Test Plan

  • 31 unit tests pass (model detection, temperature clamping, think-tag stripping, client creation, mocked API calls)
  • 4 integration tests pass with live MiniMax API (client creation, M2.7 response, M2.7-highspeed response)
  • Existing code paths are unaffected; the MiniMax branch is purely additive

Add MiniMax M2.7 and M2.7-highspeed model support via OpenAI-compatible
API across both the main LLM module and the treesearch backend.

Changes:
- ai_scientist/llm.py: Add MiniMax models to AVAILABLE_LLMS, create_client(),
  get_response_from_llm(), get_batch_responses_from_llm(), and make_llm_call()
  with temperature clamping (0.0, 1.0] and think-tag stripping
- ai_scientist/treesearch/backend/__init__.py: Add MiniMax temperature clamping
  and think-tag stripping in the backend query router
- ai_scientist/treesearch/backend/backend_openai.py: Add MiniMax client creation
  with MINIMAX_API_KEY and api.minimax.io base URL
- README.md: Add MiniMax model documentation and API key setup instructions
- tests/: Add 31 unit tests and 5 integration tests for MiniMax provider