LFM2 (Liquid AI) vs. Ministral 3B (Mistral AI)

About LFM2

LFM2 is a next-generation series of on-device foundation models built to deliver the fastest generative-AI experience across a wide range of endpoints. It uses a new hybrid architecture that achieves up to 2x faster decode and prefill than comparable models, and up to 3x better training efficiency than the previous generation. The models balance quality, latency, and memory for deployment on embedded systems, enabling real-time, on-device AI on smartphones, laptops, vehicles, wearables, and other endpoints with millisecond inference, device resilience, and full data sovereignty. Available in three dense checkpoints (0.35B, 0.7B, and 1.2B parameters), LFM2 outperforms similarly sized models on benchmarks covering knowledge recall, mathematics, multilingual instruction following, and conversational dialogue.
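
Because Hugging Face appears in the Integrations list below, a minimal sketch of local inference with the Transformers library is shown here. The checkpoint name "LiquidAI/LFM2-1.2B" and the chat-style prompt are illustrative assumptions, not an official quickstart; check Liquid AI's model cards for the exact identifiers of the 0.35B, 0.7B, and 1.2B releases.

```python
# Minimal sketch: running an LFM2 checkpoint locally with Hugging Face Transformers.
# The repository id "LiquidAI/LFM2-1.2B" is an assumption for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-1.2B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a single-turn chat prompt and generate a short completion on-device.
messages = [{"role": "user", "content": "Summarize the benefits of on-device inference."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```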

About Ministral 3B

Mistral AI introduced two state-of-the-art models for on-device computing and edge use cases, named "les Ministraux": Ministral 3B and Ministral 8B. These models set a new frontier in knowledge, commonsense reasoning, function-calling, and efficiency in the sub-10B category. They can be used or tuned for various applications, from orchestrating agentic workflows to creating specialist task workers. Both models support up to 128k context length (currently 32k on vLLM), and Ministral 8B features a special interleaved sliding-window attention pattern for faster and memory-efficient inference. These models were built to provide a compute-efficient and low-latency solution for scenarios such as on-device translation, internet-less smart assistants, local analytics, and autonomous robotics. Used in conjunction with larger language models like Mistral Large, les Ministraux also serve as efficient intermediaries for function-calling in multi-step agentic workflows.
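
Since les Ministraux are positioned above as function-calling intermediaries in agentic workflows, a hedged sketch of tool use through the Mistral API follows. The model identifier "ministral-3b-latest", the get_order_status tool, and the use of the mistralai Python SDK (v1.x) are assumptions for illustration rather than details documented in this listing.

```python
# Hedged sketch: using Ministral 3B as a function-calling intermediary via the
# Mistral API. The model name "ministral-3b-latest" and the get_order_status tool
# are assumptions; consult Mistral AI's documentation for the exact identifiers.
import json
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_order_status",  # hypothetical tool exposed by your application
            "description": "Look up the shipping status of an order by its id.",
            "parameters": {
                "type": "object",
                "properties": {"order_id": {"type": "string"}},
                "required": ["order_id"],
            },
        },
    }
]

response = client.chat.complete(
    model="ministral-3b-latest",
    messages=[{"role": "user", "content": "Where is order 8472?"}],
    tools=tools,
    tool_choice="auto",
)

# If the model decides to call the tool, it returns the function name and JSON
# arguments instead of free text; the surrounding agent executes the call and
# feeds the result back in a follow-up message.
tool_calls = response.choices[0].message.tool_calls or []
for call in tool_calls:
    print(call.function.name, json.loads(call.function.arguments))
```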

Platforms Supported (LFM2)

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Platforms Supported (Ministral 3B)

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Audience (LFM2)

Developers and engineering teams that need foundation models they can run without relying on cloud infrastructure

Audience (Ministral 3B)

Developers and organizations seeking an AI model for on-device applications

Support (LFM2)

Phone Support
24/7 Live Support
Online

Support (Ministral 3B)

Phone Support
24/7 Live Support
Online

API (LFM2)

Offers API

API (Ministral 3B)

Offers API

Pricing (LFM2)

No information available.
Free Version
Free Trial

Pricing (Ministral 3B)

Free
Free Version
Free Trial

Reviews/Ratings (LFM2)

Overall 0.0 / 5
Ease 0.0 / 5
Features 0.0 / 5
Design 0.0 / 5
Support 0.0 / 5

This software hasn't been reviewed yet.

Reviews/Ratings (Ministral 3B)

Overall 0.0 / 5
Ease 0.0 / 5
Features 0.0 / 5
Design 0.0 / 5
Support 0.0 / 5

This software hasn't been reviewed yet.

Training (LFM2)

Documentation
Webinars
Live Online
In Person

Training (Ministral 3B)

Documentation
Webinars
Live Online
In Person

Company Information (LFM2)

Liquid AI
Founded: 2023
United States
www.liquid.ai/blog/liquid-foundation-models-v2-our-second-series-of-generative-ai-models

Company Information (Ministral 3B)

Mistral AI
Founded: 2023
France
mistral.ai/news/ministraux/

Alternatives (LFM2)

Ministral 8B (Mistral AI)

Alternatives (Ministral 3B)

Ministral 8B (Mistral AI)
Mistral Large (Mistral AI)
Ministral 3B (Mistral AI)
Ai2 OLMoE (The Allen Institute for Artificial Intelligence)
Mistral Large 3 (Mistral AI)
Gemma 2 (Google)
Mistral NeMo (Mistral AI)

Integrations (LFM2)

AI-FLOW
APIPark
AnythingLLM
Continue
EvalsOne
Groq
Hugging Face
Kiin
LibreChat
Lunary
Melies
MindMac
Noma
OpenAI
Prompt Security
PromptPal
Simplismart
SydeLabs
Unify AI
Wordware

Integrations (Ministral 3B)

AI-FLOW
APIPark
AnythingLLM
Continue
EvalsOne
Groq
Hugging Face
Kiin
LibreChat
Lunary
Melies
MindMac
Noma
OpenAI
Prompt Security
PromptPal
Simplismart
SydeLabs
Unify AI
Wordware