GLM-4.7 Flash vs. MiniMax M2

About GLM-4.7 Flash

GLM-4.7 Flash is a lightweight variant of GLM-4.7, Z.ai's flagship large language model for advanced coding, reasoning, and multi-step task execution, with strong agentic performance and a very large context window. It is a Mixture-of-Experts (MoE) model optimized for efficient inference, balancing performance against resource use so that it can run on local machines with moderate memory while retaining deep reasoning, coding, and agentic abilities. GLM-4.7 itself improves on earlier generations with stronger programming capabilities, stable multi-step reasoning, context preservation across turns, and better tool-calling workflows, and it supports very long contexts (up to ~200K tokens) for complex tasks that span large inputs or outputs. The Flash variant retains many of these strengths in a smaller footprint, delivering competitive coding and reasoning benchmark results for models in its size class.
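
The tool-calling workflows mentioned above typically follow the common OpenAI-style function-calling shape; a minimal sketch of composing such a request payload (the model identifier and the `run_tests` tool are illustrative assumptions, not Z.ai's documented API):

```python
# Sketch: build an OpenAI-compatible chat payload declaring one tool the
# model may call. All names here are illustrative assumptions.
import json

def build_tool_call_request(user_prompt: str) -> dict:
    """Compose a chat-completions payload with a single callable tool."""
    return {
        "model": "glm-4.7-flash",  # hypothetical model identifier
        "messages": [{"role": "user", "content": user_prompt}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "run_tests",  # hypothetical tool name
                "description": "Run the project test suite and report results.",
                "parameters": {
                    "type": "object",
                    "properties": {"path": {"type": "string"}},
                    "required": ["path"],
                },
            },
        }],
    }

payload = build_tool_call_request("Fix the failing test in the parser module")
print(json.dumps(payload)[:60])
```

A real client would POST this payload to the provider's chat-completions endpoint and dispatch any `tool_calls` in the response back to local functions.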

About MiniMax M2

MiniMax M2 is an open-source foundation model built for agentic applications and coding workflows, aiming for a new balance of performance, speed, and cost. It handles end-to-end development scenarios, including programming, tool calling, and complex long-chain workflows with capabilities such as Python integration, while delivering inference speeds of around 100 tokens per second and API pricing at roughly 8% of the cost of comparable proprietary models. The model offers a "Lightning Mode" for high-speed, lightweight agent tasks and a "Pro Mode" for in-depth full-stack development, report generation, and web-based tool orchestration; its weights are fully open source and can be deployed locally with vLLM or SGLang. MiniMax M2 positions itself as a production-ready model that lets agents complete independent tasks, such as data analysis, programming, tool orchestration, and large-scale multi-step logic, at real organizational scale.
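
Since the weights are open, the model can be served locally; a minimal command sketch of launching an OpenAI-compatible endpoint with vLLM (the Hugging Face repo id, parallelism, and port are assumptions to adapt to your hardware and the model card's guidance):

```shell
# Sketch: serve MiniMax M2 locally with vLLM (repo id and flags are
# assumptions; check the official model card for recommended settings).
pip install vllm
vllm serve MiniMaxAI/MiniMax-M2 \
  --tensor-parallel-size 8 \
  --port 8000
# Once up, the server exposes an OpenAI-compatible API at
# http://localhost:8000/v1 for agent frameworks to call.
```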

Platforms Supported (GLM-4.7 Flash)

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Platforms Supported (MiniMax M2)

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Audience (GLM-4.7 Flash)

Developers, AI engineers, and researchers seeking a large language model that can be deployed locally or via API with strong coding, reasoning, and tool-use capabilities

Audience (MiniMax M2)

Software engineering teams, AI practitioners, and developer-led organizations that need a model optimized for agent workflows and full-stack coding tasks

Support (GLM-4.7 Flash)

Phone Support
24/7 Live Support
Online

Support (MiniMax M2)

Phone Support
24/7 Live Support
Online

API (GLM-4.7 Flash)

Offers API

API (MiniMax M2)

Offers API

Pricing (GLM-4.7 Flash)

Free
Free Version
Free Trial

Pricing (MiniMax M2)

$0.30 per million input tokens
Free Version
Free Trial
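
At the listed rate of $0.30 per million input tokens, usage cost is linear in token count; a small sketch of the arithmetic (input side only, since no output-token price is listed here):

```python
def request_cost_usd(input_tokens: int, price_per_m_input: float = 0.30) -> float:
    """Cost of a request's input tokens at a per-million-token rate."""
    return input_tokens / 1_000_000 * price_per_m_input

# e.g. one million input tokens cost $0.30; a 200,000-token prompt:
print(f"${request_cost_usd(200_000):.3f}")  # → $0.060
```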

Reviews/Ratings (GLM-4.7 Flash)

This software hasn't been reviewed yet.

Reviews/Ratings (MiniMax M2)

This software hasn't been reviewed yet.

Training (GLM-4.7 Flash)

Documentation
Webinars
Live Online
In Person

Training (MiniMax M2)

Documentation
Webinars
Live Online
In Person

Company Information (GLM-4.7 Flash)

Z.ai
Founded: 2019
China
docs.z.ai/guides/llm/glm-4.7#glm-4-7-flash

Company Information (MiniMax M2)

MiniMax
Founded: 2021
Singapore
www.minimax.io/news/minimax-m2

Alternatives

  • Devstral 2 (Mistral AI)
  • Devstral Small 2 (Mistral AI)
  • MiMo-V2-Flash (Xiaomi Technology)
  • MiniMax M1 (MiniMax)
  • Qwen3-Max (Alibaba)
  • MiniMax (MiniMax AI)

Integrations (GLM-4.7 Flash)

Claude Code
Cline
DeepSeek
Kilo Code
NVIDIA DRIVE
Okara
OpenAI
Python
Zo

Integrations (MiniMax M2)

Claude Code
Cline
DeepSeek
Kilo Code
NVIDIA DRIVE
Okara
OpenAI
Python
Zo