LFM2 vs. Mistral NeMo

LFM2 (Liquid AI)
Mistral NeMo (Mistral AI)

About (LFM2)

LFM2 is a next-generation series of on-device foundation models built to deliver the fastest generative-AI experience across a wide range of endpoints. Its new hybrid architecture achieves up to 2x faster decode and prefill performance than comparable models and up to 3x better training efficiency than the previous generation. The models balance quality, latency, and memory for deployment on embedded systems, enabling real-time, on-device AI on smartphones, laptops, vehicles, wearables, and other endpoints, with millisecond inference, device resilience, and full data sovereignty. Available in three dense checkpoints (0.35B, 0.7B, and 1.2B parameters), LFM2 outperforms similarly sized models on benchmarks covering knowledge recall, mathematics, multilingual instruction following, and conversational dialogue.
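
For a sense of how one of these checkpoints might be exercised locally, the sketch below loads it with Hugging Face Transformers. It is only a sketch: the repository id LiquidAI/LFM2-1.2B, the chat-template call, and the availability of the other sizes under similar ids are assumptions, not details taken from this listing.

    # Minimal sketch (see assumptions above): load an LFM2 checkpoint locally
    # with Hugging Face Transformers and run a short on-device generation.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "LiquidAI/LFM2-1.2B"  # assumed Hub repo id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    # Build a chat-formatted prompt and generate without any cloud calls.
    inputs = tokenizer.apply_chat_template(
        [{"role": "user", "content": "Summarize the benefits of on-device inference."}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)

    outputs = model.generate(inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))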

About (Mistral NeMo)

Mistral NeMo is Mistral AI's best small model: a state-of-the-art 12B model with a 128k-token context window, built in collaboration with NVIDIA and released under the Apache 2.0 license. Its reasoning, world knowledge, and coding accuracy are state-of-the-art in its size category, and because it relies on a standard architecture it is easy to use as a drop-in replacement in any system built on Mistral 7B. Pre-trained base and instruction-tuned checkpoints are released under Apache 2.0 to encourage adoption by researchers and enterprises. The model was trained with quantization awareness, enabling FP8 inference without performance loss, is designed for global, multilingual applications, and is trained for function calling. Compared to Mistral 7B, it is much better at following precise instructions, reasoning, and handling multi-turn conversations.
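
Since the checkpoints are Apache 2.0, one straightforward way to evaluate the model is to pull the instruction-tuned weights from the Hugging Face Hub, as sketched below. The repository id mistralai/Mistral-Nemo-Instruct-2407 is an assumption, the 12B weights in bfloat16 need roughly 24 GB of accelerator memory, and FP8 inference requires a runtime that supports it rather than this plain Transformers path.

    # Minimal sketch (repo id assumed): run Mistral NeMo's instruction-tuned
    # checkpoint locally with Hugging Face Transformers.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "mistralai/Mistral-Nemo-Instruct-2407"  # assumed Hub repo id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # 12B parameters in bf16
        device_map="auto",
    )

    inputs = tokenizer.apply_chat_template(
        [{"role": "user", "content": "What does a 128k-token context window allow?"}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)

    outputs = model.generate(inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))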

Platforms Supported (LFM2)

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Platforms Supported (Mistral NeMo)

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Audience (LFM2)

Developers and engineering teams that need foundation models they can run without relying on cloud infrastructure

Audience (Mistral NeMo)

Users looking for a language model to power their AI-driven applications

Support (LFM2)

Phone Support
24/7 Live Support
Online

Support (Mistral NeMo)

Phone Support
24/7 Live Support
Online

API (LFM2)

Offers API

API (Mistral NeMo)

Offers API
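
Both products are listed as offering an API. As one concrete illustration for the Mistral NeMo side, the sketch below calls Mistral's hosted chat endpoint through the official Python client; the model identifier open-mistral-nemo and the client method names are assumptions and should be checked against current Mistral documentation.

    # Minimal sketch (model id and client calls assumed): query Mistral NeMo
    # through Mistral's hosted API instead of self-hosting the weights.
    import os
    from mistralai import Mistral

    client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

    response = client.chat.complete(
        model="open-mistral-nemo",  # assumed identifier for the hosted model
        messages=[{"role": "user", "content": "List three use cases for a 12B model."}],
    )
    print(response.choices[0].message.content)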

Pricing (LFM2)

No information available.
Free Version
Free Trial

Pricing (Mistral NeMo)

Free
Free Version
Free Trial

Reviews/Ratings (LFM2)

This software hasn't been reviewed yet.

Reviews/Ratings (Mistral NeMo)

This software hasn't been reviewed yet.

Training (LFM2)

Documentation
Webinars
Live Online
In Person

Training (Mistral NeMo)

Documentation
Webinars
Live Online
In Person

Company Information (LFM2)

Liquid AI
Founded: 2023
United States
www.liquid.ai/blog/liquid-foundation-models-v2-our-second-series-of-generative-ai-models

Company Information (Mistral NeMo)

Mistral AI
Founded: 2023
France
mistral.ai/news/mistral-nemo/

Alternatives (LFM2)

Ministral 8B (Mistral AI)

Alternatives (Mistral NeMo)

Jamba (AI21 Labs)
Mistral Small (Mistral AI)
Ministral 3B (Mistral AI)
Olmo 2 (Ai2)
Ai2 OLMoE (The Allen Institute for Artificial Intelligence)
Gemma 2 (Google)
Mistral 7B (Mistral AI)

Integrations (LFM2)

1min.AI
C
C#
C++
Deep Infra
Elixir
Expanse
Go
HumanLayer
Lewis
Mathstral
Melies
NexalAI
OpenRouter
Qwen3
Ruby
Tune AI
Verta
kluster.ai
thisorthis.ai

Integrations (Mistral NeMo)

1min.AI
C
C#
C++
Deep Infra
Elixir
Expanse
Go
HumanLayer
Lewis
Mathstral
Melies
NexalAI
OpenRouter
Qwen3
Ruby
Tune AI
Verta
kluster.ai
thisorthis.ai