LFM2
Liquid AI

MiniMax M1
MiniMax

About

LFM2 is a next-generation series of on-device foundation models built to deliver the fastest generative-AI experience across a wide range of endpoints. It employs a new hybrid architecture that achieves up to 2x faster decode and prefill than comparable models and up to 3x better training efficiency than the previous generation. The models balance quality, latency, and memory for deployment on embedded systems, enabling real-time, on-device AI across smartphones, laptops, vehicles, wearables, and other endpoints, with millisecond inference, device resilience, and full data sovereignty. Available in three dense checkpoints (0.35B, 0.7B, and 1.2B parameters), LFM2 outperforms similarly sized models on benchmarks covering knowledge recall, mathematics, multilingual instruction following, and conversational dialogue.
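
As a rough illustration of the on-device workflow described above, the sketch below loads one of the LFM2 checkpoints with the Hugging Face transformers library and runs a short generation locally. The repository id "LiquidAI/LFM2-1.2B" and the prompt are assumptions for illustration; check the Liquid AI pages on Hugging Face for the published checkpoint names and any additional loading flags.

    # Minimal sketch, assuming the LFM2 checkpoints are published on Hugging Face
    # under an id such as "LiquidAI/LFM2-1.2B" (illustrative, not confirmed here).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "LiquidAI/LFM2-1.2B"  # assumed repository id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

    # Short prompt to exercise prefill and decode on a local device (CPU or GPU).
    prompt = "List two benefits of running a language model fully on-device."
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))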

About

MiniMax-M1 is a large-scale hybrid-attention reasoning model released by MiniMax AI under the Apache 2.0 license. It supports an unprecedented 1 million-token context window and up to 80,000-token outputs, enabling extended reasoning across long documents. Trained using large-scale reinforcement learning with a novel CISPO algorithm, MiniMax-M1 completed full training on 512 H800 GPUs in about three weeks. It achieves state-of-the-art performance on benchmarks in mathematics, coding, software engineering, tool usage, and long-context understanding, matching or outperforming leading models. Two model variants are available (40K and 80K thinking budgets), with weights and deployment scripts provided via GitHub and Hugging Face.
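
Since the weights are distributed via Hugging Face and GitHub, a common way to query the model is through an OpenAI-compatible endpoint, for example a self-hosted vLLM server. The sketch below shows that pattern; the base_url, the placeholder api_key, and the model id "MiniMaxAI/MiniMax-M1-80k" are assumptions for illustration rather than the vendor's documented values.

    # Minimal sketch, assuming MiniMax-M1 is exposed behind an OpenAI-compatible
    # server (e.g. a local vLLM deployment); all identifiers here are illustrative.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

    response = client.chat.completions.create(
        model="MiniMaxAI/MiniMax-M1-80k",  # assumed model id, check Hugging Face
        messages=[
            {"role": "user",
             "content": "Read this long design document and summarize the open risks."},
        ],
        max_tokens=1024,  # the model supports much longer outputs if needed
    )
    print(response.choices[0].message.content)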

Platforms Supported

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Platforms Supported

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Audience

Developers and engineering teams that need on-device foundation models without reliance on cloud infrastructure

Audience

AI researchers, developers, and enterprises that need an LLM capable of long-context reasoning, efficient compute, and integration via function calls

Support

Phone Support
24/7 Live Support
Online

Support

Phone Support
24/7 Live Support
Online

API

Offers API

API

Offers API

Pricing

No information available.
Free Version
Free Trial

Pricing

No information available.
Free Version
Free Trial

Reviews/Ratings

Overall 0.0 / 5
ease 0.0 / 5
features 0.0 / 5
design 0.0 / 5
support 0.0 / 5

This software hasn't been reviewed yet.

Reviews/Ratings

Overall 0.0 / 5
ease 0.0 / 5
features 0.0 / 5
design 0.0 / 5
support 0.0 / 5

This software hasn't been reviewed yet.

Training

Documentation
Webinars
Live Online
In Person

Training

Documentation
Webinars
Live Online
In Person

Company Information

Liquid AI
Founded: 2023
United States
www.liquid.ai/blog/liquid-foundation-models-v2-our-second-series-of-generative-ai-models

Company Information

MiniMax
Founded: 2021
Singapore
www.minimax.io/news/minimaxm1

Alternatives

  • Ministral 8B (Mistral AI)

Alternatives

  • MiniMax M2 (MiniMax)
  • Olmo 3 (Ai2)
  • Ministral 3B (Mistral AI)
  • Ai2 OLMoE (The Allen Institute for Artificial Intelligence)
  • Gemma 2 (Google)
  • DeepSeek-V3.2 (DeepSeek)

Integrations

Hugging Face
GitHub
OpenAI
OpenRouter
Qwen3
SiliconFlow

Integrations

Hugging Face
GitHub
OpenAI
OpenRouter
Qwen3
SiliconFlow