MiniMax: MiniMax M1


MiniMax M1 is a large-scale open-weight reasoning model built for long-context processing and efficient inference. Using a hybrid Mixture-of-Experts (MoE) design combined with a custom “lightning attention” mechanism, it can handle sequences of up to 1 million tokens while keeping inference FLOPs low; MiniMax reports that at a generation length of 100K tokens it consumes about 25% of the FLOPs of DeepSeek R1. With 456B total parameters and 45.9B active per token, it is optimized for complex, multi-step reasoning.
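
Lightning attention is a linear-attention variant: rather than materializing the quadratic token-to-token attention matrix, each layer keeps a running state that is updated once per token, so per-token cost stays constant as the context grows. The NumPy sketch below shows the generic causal linear-attention recurrence this builds on, purely for intuition; it is not MiniMax's kernel, which adds block-wise tiling and sits in a hybrid stack interleaved with conventional softmax-attention layers.

```python
import numpy as np

def linear_attention(q, k, v):
    """Causal linear attention as a running recurrence (illustrative sketch).

    Softmax attention costs O(n^2) in sequence length n. The linear
    variant folds each key/value pair into a (d_k x d_v) state, so every
    new token costs O(d_k * d_v) regardless of how many tokens precede
    it, which is what makes million-token contexts tractable.

    q, k: (n, d_k) arrays; v: (n, d_v) array. Returns (n, d_v).
    """
    n, d_k = q.shape
    d_v = v.shape[1]
    state = np.zeros((d_k, d_v))       # running sum of outer(k_t, v_t)
    out = np.empty((n, d_v))
    for t in range(n):
        state += np.outer(k[t], v[t])  # fold token t into the state
        out[t] = q[t] @ state          # read out against the state
    return out

# Toy usage: 8 tokens, head dimension 4.
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((8, 4)) for _ in range(3))
print(linear_attention(q, k, v).shape)  # (8, 4)
```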

Trained with a large-scale reinforcement learning pipeline built on CISPO (Clipped IS-weight Policy Optimization), MiniMax-M1 delivers strong performance in long-context comprehension, software engineering, agent-driven tool use, and mathematical reasoning. It posts competitive results across benchmarks such as FullStackBench, SWE-bench Verified, MATH-500, GPQA Diamond, and TAU-bench, often surpassing other open-weight models like DeepSeek R1 and Qwen3-235B.
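
CISPO departs from PPO-style objectives, which clip the per-token update and thereby drop the gradient of clipped tokens entirely; CISPO instead clips the importance-sampling weight itself and detaches it, so every token keeps contributing a gradient (MiniMax reports this preserves rare reflective reasoning tokens that standard clipping tends to silence). Below is a minimal PyTorch sketch of such a per-token loss, assuming log-probabilities and advantages are precomputed; the function name and clip bounds are illustrative, not the values from MiniMax's report.

```python
import torch

def cispo_style_loss(logp_new, logp_old, advantages,
                     eps_low=1.0, eps_high=0.2):
    """CISPO-style policy loss (illustrative sketch, not MiniMax's code).

    logp_new / logp_old: (batch, seq) token log-probs under the current
    and behavior policies. advantages: (batch, seq) precomputed values.
    """
    ratio = torch.exp(logp_new - logp_old)  # importance-sampling weight
    # Clip the IS weight itself and stop its gradient; unlike PPO-style
    # clipping, no token is removed from the gradient signal.
    clipped = torch.clamp(ratio, 1.0 - eps_low, 1.0 + eps_high).detach()
    # Maximize E[sg(clipped ratio) * A * log pi] == minimize its negative.
    return -(clipped * advantages.detach() * logp_new).mean()
```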

Creator: MiniMax
Release Date: June 2025
License: Apache 2.0
Context Window: 1,000,000 tokens
Image Input Support: No
Open Source (Weights): Yes
Parameters: 456B total, 45.9B active at inference time
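
As a usage illustration: open weights of this size are typically run behind an OpenAI-compatible endpoint (for example, a vLLM server). The snippet below assumes a locally hosted server; the base URL and model id are placeholders, not official MiniMax endpoints.

```python
from openai import OpenAI

# Assumes a local OpenAI-compatible server is already running
# (e.g. started with vLLM). URL and model id are placeholders.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

response = client.chat.completions.create(
    model="MiniMax-M1",  # placeholder model id
    messages=[{"role": "user",
               "content": "Outline the tradeoffs of linear attention."}],
    max_tokens=512,
)
print(response.choices[0].message.content)
```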

Performance Benchmarks

| Category | Task | MiniMax-M1-80K | MiniMax-M1-40K | Qwen3-235B-A22B | DeepSeek-R1-0528 | DeepSeek-R1 | Seed-Thinking-v1.5 | Claude 4 Opus | Gemini 2.5 Pro (06-05) | OpenAI-o3 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | Extended Thinking | 80K | 40K | 32K | 64K | 32K | 32K | 64K | 64K | 100K |
| Mathematics | AIME 2024 | 86.0 | 83.3 | 85.7 | 91.4 | 79.8 | 86.7 | 76.0 | 92.0 | 91.6 |
| | AIME 2025 | 76.9 | 74.6 | 81.5 | 87.5 | 70.0 | 74.0 | 75.5 | 88.0 | 88.9 |
| | MATH-500 | 96.8 | 96.0 | 96.2 | 98.0 | 97.3 | 96.7 | 98.2 | 98.8 | 98.1 |
| General Coding | LiveCodeBench (24/8~25/5) | 65.0 | 62.3 | 65.9 | 73.1 | 55.9 | 67.5 | 56.6 | 77.1 | 75.8 |
| | FullStackBench | 68.3 | 67.6 | 62.9 | 69.4 | 70.1 | 69.9 | 70.3 | – | 69.3 |
| Reasoning & Knowledge | GPQA Diamond | 70.0 | 69.2 | 71.1 | 81.0 | 71.5 | 77.3 | 79.6 | 86.4 | 83.3 |
| | HLE (no tools) | 8.4* | 7.2* | 7.6* | 17.7* | 8.6* | 8.2 | 10.7 | 21.6 | 20.3 |
| | ZebraLogic | 86.8 | 80.1 | 80.3 | 95.1 | 78.7 | 84.4 | 95.1 | 91.6 | 95.8 |
| | MMLU-Pro | 81.1 | 80.6 | 83.0 | 85.0 | 84.0 | 87.0 | 85.0 | 86.0 | 85.0 |
| Software Engineering | SWE-bench Verified | 56.0 | 55.6 | 34.4 | 57.6 | 49.2 | 47.0 | 72.5 | 67.2 | 69.1 |
| Long Context | OpenAI-MRCR (128k) | 73.4 | 76.1 | 27.7 | 51.5 | 35.8 | 54.3 | 48.9 | 76.8 | 56.5 |
| | OpenAI-MRCR (1M) | 56.2 | 58.6 | – | – | – | – | – | 58.8 | – |
| | LongBench-v2 | 61.5 | 61.0 | 50.1 | 52.1 | 58.3 | 52.5 | 55.6 | 65.0 | 58.8 |
| Agentic Tool Use | TAU-bench (airline) | 62.0 | 60.0 | 34.7 | 53.5 | – | 44.0 | 59.6 | 50.0 | 52.0 |
| | TAU-bench (retail) | 63.5 | 67.8 | 58.6 | 63.9 | – | 55.7 | 81.4 | 67.0 | 73.9 |
| Factuality | SimpleQA | 18.5 | 17.9 | 11.0 | 27.8 | 30.1 | 12.9 | – | 54.0 | 49.4 |
| General Assistant | MultiChallenge | 44.7 | 44.7 | 40.0 | 45.0 | 40.7 | 43.0 | 45.8 | 51.8 | 56.5 |

A dash (–) marks scores not reported in the source table; the Extended Thinking row gives each model's thinking-token budget.
