LFM2
Liquid AI

Mu
Microsoft

Related Products

  • LM-Kit.NET (23 Ratings)
  • Vertex AI (783 Ratings)
  • Google AI Studio (11 Ratings)
  • Google Cloud Speech-to-Text (373 Ratings)
  • Iru (1,457 Ratings)
  • ManageEngine Endpoint Central (2,458 Ratings)
  • DriveStrike (23 Ratings)
  • ConnectWise Automate (505 Ratings)
  • Bitdefender Ultimate Small Business Security (3 Ratings)
  • Qloo (23 Ratings)

About

LFM2 is a next-generation series of on-device foundation models built to deliver fast generative-AI experiences across a wide range of endpoints. It employs a new hybrid architecture that achieves up to 2× faster decode and prefill performance than comparable models and up to 3× higher training efficiency than the previous generation. The models balance quality, latency, and memory for deployment on embedded systems, enabling real-time, on-device AI with millisecond inference, device resilience, and full data sovereignty across smartphones, laptops, vehicles, wearables, and other endpoints. Available in three dense checkpoints (0.35B, 0.7B, and 1.2B parameters), LFM2 outperforms similarly sized models on benchmarks covering knowledge recall, mathematics, multilingual instruction-following, and conversational dialogue.
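As a rough illustration of the memory side of that quality/latency/memory trade-off, the weight footprint of each checkpoint can be estimated from its parameter count. This is a back-of-envelope sketch: the precisions and byte sizes are generic assumptions, not published LFM2 deployment figures, and real on-device footprints also include activations, KV cache, and runtime overhead.

```python
# Approximate weight-storage estimate for the three LFM2 checkpoint
# sizes at common precisions (illustrative assumptions, not vendor data).
CHECKPOINTS = {"LFM2-350M": 0.35e9, "LFM2-700M": 0.7e9, "LFM2-1.2B": 1.2e9}
BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weight_memory_gib(params: float, precision: str) -> float:
    """Weight storage in GiB for a dense model: params * bytes/param."""
    return params * BYTES_PER_PARAM[precision] / 2**30

for name, n_params in CHECKPOINTS.items():
    row = ", ".join(
        f"{p}: {weight_memory_gib(n_params, p):.2f} GiB" for p in BYTES_PER_PARAM
    )
    print(f"{name}: {row}")
```

Even the largest checkpoint stays near 2.2 GiB of weights at fp16 and well under 1 GiB at int4, which is what makes smartphone- and wearable-class deployment plausible.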

About

Mu is a 330-million-parameter encoder–decoder language model that powers the agent in Windows Settings by mapping natural-language queries to Settings function calls, running fully on-device via NPUs at over 100 tokens per second while maintaining high accuracy. Drawing on Phi Silica optimizations, Mu's encoder–decoder architecture reuses a fixed-length latent representation to cut computation and memory overhead, yielding 47 percent lower first-token latency and 4.7× higher decoding speed on Qualcomm Hexagon NPUs than comparable decoder-only models. Hardware-aware tuning, including a 2/3–1/3 encoder–decoder parameter split, weight sharing between input and output embeddings, Dual LayerNorm, rotary positional embeddings, and grouped-query attention, enables inference at over 200 tokens per second on devices such as the Surface Laptop 7 and sub-500 ms response times for settings queries.
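Grouped-query attention, one of the techniques listed above, lets several query heads share a single key/value head, shrinking the KV cache that dominates decode-time memory traffic. The following is a minimal NumPy sketch of the general technique only, not Mu's actual implementation; all shapes, names, and the per-head loop are illustrative assumptions.

```python
import numpy as np

def grouped_query_attention(q, k, v, n_groups):
    """Scaled dot-product attention where each group of query heads
    shares one key/value head, cutting KV storage by n_q_heads / n_groups.

    q: (n_q_heads, seq_len, d_head)
    k, v: (n_groups, seq_len, d_head)
    """
    n_q_heads, _, d_head = q.shape
    heads_per_group = n_q_heads // n_groups
    out = np.empty_like(q)
    for h in range(n_q_heads):
        g = h // heads_per_group  # the KV head shared by this query head
        scores = q[h] @ k[g].T / np.sqrt(d_head)
        scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax rows
        out[h] = weights @ v[g]
    return out
```

With `n_groups` equal to the number of query heads this reduces to standard multi-head attention; with `n_groups = 1` it becomes multi-query attention, so GQA interpolates between the two.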

Platforms Supported

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Platforms Supported

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Audience

Developers and engineering teams needing a solution offering foundation models without reliance on cloud infrastructure

Audience

Developers seeking a solution to navigate and configure system settings through natural language

Support

Phone Support
24/7 Live Support
Online

Support

Phone Support
24/7 Live Support
Online

API

Offers API

API

Offers API

Pricing

No information available.
Free Version
Free Trial

Pricing

No information available.
Free Version
Free Trial

Reviews/Ratings

Overall: 0.0 / 5
Ease: 0.0 / 5
Features: 0.0 / 5
Design: 0.0 / 5
Support: 0.0 / 5

This software hasn't been reviewed yet.

Reviews/Ratings

Overall: 0.0 / 5
Ease: 0.0 / 5
Features: 0.0 / 5
Design: 0.0 / 5
Support: 0.0 / 5

This software hasn't been reviewed yet.

Training

Documentation
Webinars
Live Online
In Person

Training

Documentation
Webinars
Live Online
In Person

Company Information

Liquid AI
Founded: 2023
United States
www.liquid.ai/blog/liquid-foundation-models-v2-our-second-series-of-generative-ai-models

Company Information

Microsoft
Founded: 1975
United States
blogs.windows.com/windowsexperience/2025/06/23/introducing-mu-language-model-and-how-it-enabled-the-agent-in-windows-settings/

Alternatives

  • Ministral 8B (Mistral AI)

Alternatives

  • Yi-Large (01.AI)
  • Pixtral Large (Mistral AI)
  • Ministral 3B (Mistral AI)
  • Falcon-7B (Technology Innovation Institute (TII))
  • Ai2 OLMoE (The Allen Institute for Artificial Intelligence)
  • CodeQwen (Alibaba)
  • Gemma 2 (Google)

Integrations

Hugging Face
OpenAI
OpenRouter
Qwen3

Integrations

Hugging Face
OpenAI
OpenRouter
Qwen3