GLM-4.7 Flash vs. Ministral 3B

GLM-4.7 Flash
Z.ai

Ministral 3B
Mistral AI

Related Products

  • LM-Kit.NET (23 Ratings)
  • Vertex AI (783 Ratings)
  • Google AI Studio (11 Ratings)
  • GW Apps (37 Ratings)
  • Talkdesk (3,318 Ratings)
  • TrustInSoft Analyzer (6 Ratings)
  • The Asset Guardian EAM (TAG) (22 Ratings)
  • CallTools (492 Ratings)
  • Vibe Retail (11 Ratings)
  • ZeroPath (2 Ratings)

About

GLM-4.7 Flash is a lightweight variant of GLM-4.7, Z.ai's flagship large language model for advanced coding, reasoning, and multi-step task execution with strong agentic performance and a very large context window. It is a Mixture-of-Experts (MoE) model optimized for efficient inference, balancing performance against resource use so that it can run on local machines with moderate memory while retaining deep reasoning, coding, and agentic task abilities. GLM-4.7 itself improves on earlier generations with stronger programming capabilities, stable multi-step reasoning, context preservation across turns, and better tool-calling workflows, and it supports very long contexts (up to roughly 200K tokens) for complex tasks that span large inputs or outputs. The Flash variant retains many of these strengths in a smaller footprint, offering competitive coding and reasoning benchmark results for models in its size class.
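
Since GLM-4.7 Flash is served through an OpenAI-style chat-completions API, a request for it can be sketched as below. The endpoint URL and the `glm-4.7-flash` model identifier are assumptions based on Z.ai's documented API conventions; check docs.z.ai before relying on them.

```python
import json

# Assumed Z.ai endpoint (verify against docs.z.ai before use).
ZAI_ENDPOINT = "https://api.z.ai/api/paas/v4/chat/completions"

def build_chat_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Build an OpenAI-style chat-completions payload for GLM-4.7 Flash."""
    return {
        "model": "glm-4.7-flash",  # assumed model identifier
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.7,
    }

payload = build_chat_request("Write a Python function that reverses a string.")
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the endpoint with an API key in the `Authorization` header; only the model name changes when switching between the full GLM-4.7 and the Flash variant.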

About

Mistral AI introduced two state-of-the-art models for on-device computing and edge use cases, named "les Ministraux": Ministral 3B and Ministral 8B. These models set a new frontier in knowledge, commonsense reasoning, function-calling, and efficiency in the sub-10B category. They can be used or tuned for various applications, from orchestrating agentic workflows to creating specialist task workers. Both models support up to 128k context length (currently 32k on vLLM), and Ministral 8B features a special interleaved sliding-window attention pattern for faster and memory-efficient inference. These models were built to provide a compute-efficient and low-latency solution for scenarios such as on-device translation, internet-less smart assistants, local analytics, and autonomous robotics. Used in conjunction with larger language models like Mistral Large, les Ministraux also serve as efficient intermediaries for function-calling in multi-step agentic workflows.
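The function-calling role described above can be illustrated with a minimal tool-use request. This is a sketch only: the `ministral-3b-latest` model name and the OpenAI-style tool schema are assumptions based on Mistral's public API conventions, and `get_weather` is a hypothetical tool invented for the example.

```python
import json

# Hypothetical tool definition the model may choose to call.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# Chat request asking Ministral 3B to decide whether to invoke the tool.
request = {
    "model": "ministral-3b-latest",  # assumed model identifier
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [weather_tool],
    "tool_choice": "auto",  # let the model decide whether to call the tool
}
print(json.dumps(request, indent=2))
```

In the multi-step workflows mentioned above, a small model like Ministral 3B would emit the tool call and route the result, while a larger model such as Mistral Large handles the heavier reasoning.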

Platforms Supported

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Platforms Supported

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Audience

Developers, AI engineers, and researchers seeking a large language model that can be deployed locally or via API with strong coding, reasoning, and tool-use capabilities

Audience

Developers and organizations seeking an AI model for on-device applications

Support

Phone Support
24/7 Live Support
Online

Support

Phone Support
24/7 Live Support
Online

API

Offers API

API

Offers API

Pricing

Free
Free Version
Free Trial

Pricing

Free
Free Version
Free Trial

Reviews/Ratings

Overall 0.0 / 5
ease 0.0 / 5
features 0.0 / 5
design 0.0 / 5
support 0.0 / 5

This software hasn't been reviewed yet.

Reviews/Ratings

Overall 0.0 / 5
ease 0.0 / 5
features 0.0 / 5
design 0.0 / 5
support 0.0 / 5

This software hasn't been reviewed yet.

Training

Documentation
Webinars
Live Online
In Person

Training

Documentation
Webinars
Live Online
In Person

Company Information

Z.ai
Founded: 2019
China
docs.z.ai/guides/llm/glm-4.7#glm-4-7-flash

Company Information

Mistral AI
Founded: 2023
France
mistral.ai/news/ministraux/

Alternatives

Ministral 8B (Mistral AI)
Mistral Large (Mistral AI)
MiMo-V2-Flash (Xiaomi Technology)
Mistral Large 3 (Mistral AI)
Qwen3-Max (Alibaba)
Mistral NeMo (Mistral AI)

Integrations

AiAssistWorks
Amazon Bedrock
Arize Phoenix
BlueGPT
Echo AI
Fleak
GMTech
Graydient AI
Groq
Humiris AI
Lewis
Lunary
Mammouth AI
Noma
OpenPipe
PromptPal
Simplismart
Tune AI
Weave
Wordware

Integrations

AiAssistWorks
Amazon Bedrock
Arize Phoenix
BlueGPT
Echo AI
Fleak
GMTech
Graydient AI
Groq
Humiris AI
Lewis
Lunary
Mammouth AI
Noma
OpenPipe
PromptPal
Simplismart
Tune AI
Weave
Wordware