minimax-m1 by openrouter - AI Model Details, Pricing, and Performance Metrics

minimax / minimax-m1
completions
by openrouter

MiniMax-M1 is a large-scale, open-weight reasoning model designed for extended context and high-efficiency inference. It leverages a hybrid Mixture-of-Experts (MoE) architecture paired with a custom "lightning attention" mechanism, allowing it to process long sequences—up to 1 million tokens—while maintaining competitive FLOP efficiency. With 456 billion total parameters and 45.9B active per token, this variant is optimized for complex, multi-step reasoning tasks. Trained via a custom reinforcement learning pipeline (CISPO), M1 excels in long-context understanding, software engineering, agentic tool use, and mathematical reasoning. Benchmarks show strong performance across FullStackBench, SWE-bench, MATH, GPQA, and TAU-Bench, often outperforming other open models like DeepSeek R1 and Qwen3-235B.
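To make the agentic tool-use claim concrete, here is a hedged sketch of a tool-calling request through an OpenAI-compatible client. Whether tool calling is exposed for this model depends on the serving provider; the endpoint URL, API-key variable, model identifier, and the run_tests tool below are all illustrative assumptions, not documented values.

```python
# Sketch of an agentic tool-use call, assuming an OpenAI-compatible endpoint
# that supports tool calling for this model (support varies by provider).
import json
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://your-provider.example.com/v1",  # placeholder endpoint
    api_key=os.environ["PROVIDER_API_KEY"],           # placeholder key variable
)

# A hypothetical tool definition used purely for illustration.
tools = [
    {
        "type": "function",
        "function": {
            "name": "run_tests",
            "description": "Run the project's test suite and return the results.",
            "parameters": {
                "type": "object",
                "properties": {"path": {"type": "string"}},
                "required": ["path"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="minimax/minimax-m1",  # assumed model identifier
    messages=[{"role": "user", "content": "Fix the failing tests in ./src and verify."}],
    tools=tools,
)

# If the model decides to call the tool, its arguments arrive as a JSON string.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```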

Released: Jun 17, 2025
Knowledge cutoff: Dec 19, 2024
Context: 1M tokens
Input: $0.42 / 1M tokens
Output: $1.93 / 1M tokens
Accepts: text
Returns: text
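As a quick check on the rates above, the sketch below estimates the dollar cost of a single request from its token counts. The token figures are made-up examples; actual billing depends on the provider's token accounting.

```python
# Rough per-request cost estimate from the listed minimax-m1 rates.
# The token counts below are hypothetical examples, not real usage data.

INPUT_PRICE_PER_M = 0.42   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 1.93  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a long-context call with 200k input tokens and 4k output tokens.
print(f"${estimate_cost(200_000, 4_000):.4f}")  # -> $0.0917
```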

Access minimax-m1 through LangDB AI Gateway


Integrate with MiniMax's minimax-m1 and 250+ other models through a unified API. Monitor usage, control costs, and enhance security; see the client sketch below.

Unified API
Cost Optimization
Enterprise Security

Free tier available • No credit card required

Instant Setup
99.9% Uptime
10,000+ Monthly Requests
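Below is a minimal client sketch for routing requests to minimax-m1 through an OpenAI-compatible gateway. The base URL, API-key environment variable, and model identifier are placeholders rather than documented LangDB values; substitute the settings from your own gateway project.

```python
# Minimal sketch: calling minimax-m1 through an OpenAI-compatible gateway.
# The base_url and model name below are placeholder assumptions; substitute
# the values from your own gateway configuration.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://your-gateway.example.com/v1",  # placeholder gateway URL
    api_key=os.environ["GATEWAY_API_KEY"],           # placeholder key variable
)

response = client.chat.completions.create(
    model="minimax/minimax-m1",  # assumed model identifier
    messages=[
        {"role": "user", "content": "Summarize the trade-offs of MoE architectures."}
    ],
)
print(response.choices[0].message.content)
```

Because the endpoint is OpenAI-compatible, the same client can be pointed at any of the other hosted models by changing only the model identifier.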

Benchmark Tests
AIME: 81.3 (Mathematics)
AA Coding Index: 51.8 (Programming)
AAII: 41.6 (General)
AA Math Index: 13.7 (Mathematics)
GPQA: 68.2 (STEM: Physics, Chemistry, Biology)
HLE: 7.5 (General Knowledge)
LiveCodeBench: 65.7 (Programming)
MATH-500: 97.2 (Mathematics)
MMLU-Pro: 80.8 (General Knowledge)
SciCode: 37.8 (Scientific)

Code Examples

Integration samples and API usage
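A minimal request sketch, assuming the model is served behind an OpenAI-compatible chat-completions endpoint (here OpenRouter's); the model slug, endpoint URL, and environment variable are assumptions to verify against your provider's documentation.

```python
# Minimal sketch: querying minimax-m1 via an OpenAI-compatible HTTP endpoint.
# The endpoint URL and model slug are assumptions; check your provider's docs.
import os
import requests

API_URL = "https://openrouter.ai/api/v1/chat/completions"
API_KEY = os.environ["OPENROUTER_API_KEY"]  # placeholder environment variable

payload = {
    "model": "minimax/minimax-m1",  # assumed model slug
    "messages": [
        {
            "role": "user",
            "content": "Walk through a proof that the square root of 2 is irrational.",
        }
    ],
    "max_tokens": 1024,
}

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```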