Ling-V2 is an open-source family of Mixture-of-Experts (MoE) large language models developed by the InclusionAI research organization, aiming to combine state-of-the-art performance, efficiency, and openness for next-generation AI applications. Its highly sparse architectures activate only a fraction of the model's parameters per input token, letting models such as Ling-mini-2.0 match the reasoning and instruction-following capabilities of much larger dense models at significantly lower computational cost. Trained on more than 20 trillion tokens of high-quality data and refined through multi-stage supervised fine-tuning and reinforcement learning, the Ling-V2 models demonstrate strong performance on general reasoning, mathematical problem solving, coding, and knowledge-intensive tasks.
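The sparse activation described above comes from a learned router that sends each token to only the top-k of many experts, so most parameters stay idle per token. The sketch below is a toy illustration of top-k MoE routing (names, shapes, and the linear "experts" are invented for the example; this is not Ling-V2's actual routing code):

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Route one token vector through the top-k of n experts.

    Toy sketch of sparse MoE routing: only k experts run per token,
    so the remaining experts' parameters contribute no compute.
    """
    logits = x @ gate_w                        # router scores, shape (n_experts,)
    topk = np.argsort(logits)[-k:]             # indices of the k best experts
    weights = np.exp(logits[topk] - logits[topk].max())
    weights /= weights.sum()                   # softmax over the selected experts only
    # Combine the k expert outputs, weighted by the router.
    return sum(w * experts[i](x) for w, i in zip(weights, topk))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
x = rng.normal(size=d)
gate_w = rng.normal(size=(d, n_experts))
# Each "expert" here is just a tiny linear layer for illustration.
expert_ws = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda v, W=W: v @ W for W in expert_ws]

y = moe_forward(x, gate_w, experts, k=2)
print(y.shape)
```

With k=2 of 4 experts active, roughly half the expert parameters are touched per token; production MoE models push this ratio much lower.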

Features

  • Mixture-of-Experts (MoE) architecture for sparse activation efficiency
  • Trained on 20 trillion+ high-quality tokens for broad capability
  • Strong general reasoning and instruction-following performance
  • Efficient mixed-precision (FP8) training and inference support
  • Competitive with larger dense models at lower compute cost
  • Open-source MIT-licensed foundation model and tooling
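The FP8 support mentioned above trades mantissa precision for throughput and memory. The snippet below emulates the rounding loss of an e4m3 cast (3 stored mantissa bits, finite range about ±448) purely in NumPy; it is a hypothetical illustration of the format's precision, not Ling-V2's training kernel, which relies on hardware tensor-core casts:

```python
import numpy as np

def simulate_fp8_e4m3(x, mantissa_bits=3):
    """Emulate the precision loss of casting float64 values to FP8 e4m3.

    Toy illustration: keep only the implicit leading bit plus
    `mantissa_bits` stored bits of the significand, then clamp to
    e4m3's finite range (+-448). Subnormals and NaN are ignored.
    """
    m, e = np.frexp(x)                  # x = m * 2**e with |m| in [0.5, 1)
    steps = 1 << (mantissa_bits + 1)    # 4 significand bits in total
    m = np.round(m * steps) / steps     # round the significand
    y = np.ldexp(m, e)
    return np.clip(y, -448.0, 448.0)

w = np.array([0.1234, -1.7, 300.0, 1000.0])
print(simulate_fp8_e4m3(w))
```

Values survive with at most a few percent relative error, while out-of-range values saturate at ±448, which is why FP8 training pipelines pair the cast with per-tensor scaling.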

Categories

AI Models

License

MIT License


Additional Project Details

Programming Language

Python

Related Categories

Python AI Models

Registered

2026-02-12