MiniMax-M1
Open-weight, large-scale hybrid-attention reasoning model
... for very long sequences. Trained with large-scale reinforcement learning across diverse tasks, it excels on benchmarks for mathematics, software engineering, agentic tool use, and long-context understanding, outperforming other open-weight models such as DeepSeek-R1 and Qwen3-235B on complex reasoning and coding challenges. MiniMax-M1 is available in two variants with 40K and 80K token thinking budgets, so you can trade reasoning depth against cost to match your application's needs.
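As a minimal sketch of choosing between the two budgets, the helper below picks the smallest variant whose thinking budget covers a request. The Hugging Face repo IDs are assumptions inferred from the two published budgets, not confirmed by this text:

```python
# Hypothetical helper: select a MiniMax-M1 variant by required thinking budget.
# The repo IDs below are assumed names, not confirmed by the model card.
MINIMAX_M1_VARIANTS = {
    40_000: "MiniMaxAI/MiniMax-M1-40k",
    80_000: "MiniMaxAI/MiniMax-M1-80k",
}

def pick_variant(required_thinking_tokens: int) -> str:
    """Return the smallest variant whose budget covers the request."""
    for budget in sorted(MINIMAX_M1_VARIANTS):
        if required_thinking_tokens <= budget:
            return MINIMAX_M1_VARIANTS[budget]
    raise ValueError(
        f"{required_thinking_tokens} tokens exceeds the largest budget (80K)"
    )

print(pick_variant(30_000))  # the 40K variant suffices
print(pick_variant(60_000))  # needs the 80K variant
```

In practice you would pass the chosen ID to your model loader or API client; shorter budgets reduce latency and cost, while the 80K variant leaves headroom for harder reasoning chains.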