
feat: add MiniMax as first-class LLM provider#421

Open
octo-patch wants to merge 1 commit into aliasrobotics:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

This PR adds MiniMax as a built-in LLM provider for CAI, letting users run MiniMax's large language models (MiniMax-M2.7, MiniMax-M2.5, MiniMax-M2.5-highspeed) through MiniMax's OpenAI-compatible API.

Changes

  • MiniMaxProvider (src/cai/sdk/agents/models/minimax_provider.py): New ModelProvider implementation that creates OpenAIChatCompletionsModel instances configured for MiniMax's API. Reads MINIMAX_API_KEY from the environment by default, and supports a custom API key, a custom base URL, or a pre-configured AsyncOpenAI client.
  • MiniMax model routing (src/cai/sdk/agents/models/openai_chatcompletions.py): Added MiniMax detection in _fetch_response to correctly route requests through litellm with proper api_base and custom_llm_provider settings.
  • Exports (src/cai/sdk/agents/__init__.py): MiniMaxProvider is exported from the agents package.
  • Documentation (docs/providers/minimax.md): Setup guide with three usage patterns (MiniMaxProvider, environment variables with LiteLLM, direct model on Agent).
  • Example (examples/model_providers/minimax_example.py): Working example demonstrating MiniMax-M2.7 and MiniMax-M2.5-highspeed usage.
  • README: Added MiniMax to the multi-model support provider list.
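A minimal sketch of the provider pattern these changes describe. The class and method names mirror the PR description, but the body is illustrative only: stub dictionaries stand in for the real AsyncOpenAI client and OpenAIChatCompletionsModel, and the default base URL is an assumption to be checked against MiniMax's docs.

```python
import os


class MiniMaxAPIKeyError(RuntimeError):
    """Illustrative error raised when no API key can be found."""


class MiniMaxProvider:
    """Sketch of an OpenAI-compatible ModelProvider for MiniMax.

    Reads MINIMAX_API_KEY from the environment unless an explicit
    api_key (or a pre-configured client) is supplied.
    """

    DEFAULT_BASE_URL = "https://api.minimax.io/v1"  # assumed endpoint

    def __init__(self, api_key=None, base_url=None, client=None):
        self._api_key = api_key or os.environ.get("MINIMAX_API_KEY")
        self._base_url = base_url or self.DEFAULT_BASE_URL
        self._client = client  # lazily created on first use

    def _get_client(self):
        # Lazy client loading: only build the client when a model is
        # actually requested, so importing the provider never requires
        # credentials.
        if self._client is None:
            if not self._api_key:
                raise MiniMaxAPIKeyError(
                    "Set MINIMAX_API_KEY or pass api_key explicitly"
                )
            # The real provider would construct AsyncOpenAI(...); a plain
            # dict keeps this sketch dependency-free.
            self._client = {"api_key": self._api_key, "base_url": self._base_url}
        return self._client

    def get_model(self, model_name):
        # The real implementation returns an OpenAIChatCompletionsModel;
        # here a simple descriptor shows what it would be configured with.
        return {"model": model_name, **self._get_client()}
```

The lazy `_get_client` step is what the "lazy client loading" unit tests below would exercise: constructing the provider without credentials should succeed, and only resolving a model should fail.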

Tests

  • 18 unit tests (tests/core/test_minimax_provider.py): Cover initialization, API key handling, env var detection, model resolution, lazy client loading, ModelProvider interface compliance, and Runner integration.
  • 3 integration tests (tests/integration/test_minimax_integration.py): Validate real API calls to MiniMax-M2.7, MiniMax-M2.5, and MiniMax-M2.5-highspeed (these require MINIMAX_API_KEY to be set).
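The env-var-detection unit tests might look roughly like the following sketch. The real tests import MiniMaxProvider from cai.sdk.agents.models.minimax_provider; a minimal inline stand-in is used here so the example is self-contained, and `mock.patch.dict` isolates the environment between cases.

```python
import os
import unittest
from unittest import mock


# Stand-in for the class under test; the real suite imports
# MiniMaxProvider from cai.sdk.agents.models.minimax_provider.
class MiniMaxProvider:
    def __init__(self, api_key=None):
        self.api_key = api_key or os.environ.get("MINIMAX_API_KEY")


class TestEnvVarDetection(unittest.TestCase):
    def test_reads_key_from_environment(self):
        # Provider picks up MINIMAX_API_KEY when no key is passed.
        with mock.patch.dict(os.environ, {"MINIMAX_API_KEY": "env-key"}):
            self.assertEqual(MiniMaxProvider().api_key, "env-key")

    def test_explicit_key_overrides_environment(self):
        # An explicitly passed api_key takes precedence over the env var.
        with mock.patch.dict(os.environ, {"MINIMAX_API_KEY": "env-key"}):
            self.assertEqual(MiniMaxProvider(api_key="explicit").api_key, "explicit")
```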

Available Models

| Model | Context Window | Description |
| --- | --- | --- |
| MiniMax-M2.7 | 1M tokens | Latest and most capable model |
| MiniMax-M2.5 | 1M tokens | Strong general-purpose model |
| MiniMax-M2.5-highspeed | 204K tokens | Optimized for speed |
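The routing change in `_fetch_response` presumably detects these model names and builds the litellm call arguments; a hedged sketch of that detection logic (the prefix check, `api_base` value, and `custom_llm_provider` value are assumptions, not the PR's actual code):

```python
def route_minimax(model_name, api_base="https://api.minimax.io/v1"):
    """Illustrative routing: detect MiniMax models and assemble the
    kwargs that would be forwarded to litellm. The concrete values are
    assumptions; the PR only states that api_base and
    custom_llm_provider are set appropriately."""
    if model_name.startswith("MiniMax-"):
        return {
            "model": model_name,
            "api_base": api_base,           # point litellm at MiniMax's endpoint
            "custom_llm_provider": "openai",  # OpenAI-compatible protocol
        }
    # Non-MiniMax models pass through unchanged.
    return {"model": model_name}
```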

Usage

```python
import asyncio

from cai.sdk.agents import Agent, Runner, RunConfig
from cai.sdk.agents.models.minimax_provider import MiniMaxProvider


async def main():
    provider = MiniMaxProvider()  # reads MINIMAX_API_KEY from env
    agent = Agent(name="assistant", instructions="You are helpful.", model="MiniMax-M2.7")
    result = await Runner.run(agent, "Hello!", run_config=RunConfig(model_provider=provider))


asyncio.run(main())
```

Test Plan

  • 18 unit tests pass
  • 3 integration tests pass with live MiniMax API
  • Existing tests unaffected

Add MiniMaxProvider with built-in support for MiniMax's OpenAI-compatible
API (MiniMax-M2.7, MiniMax-M2.5, MiniMax-M2.5-highspeed models).

Changes:
- Add MiniMaxProvider class implementing ModelProvider interface
- Add MiniMax model routing in OpenAIChatCompletionsModel for litellm
- Export MiniMaxProvider from agents __init__.py
- Add provider documentation (docs/providers/minimax.md)
- Add usage example (examples/model_providers/minimax_example.py)
- Add MiniMax to README provider list
- Add 18 unit tests and 3 integration tests
