Ling-V2 is an open-source family of Mixture-of-Experts (MoE) large language models developed by the InclusionAI research organization to combine state-of-the-art performance, efficiency, and openness for next-generation AI applications. Its highly sparse architectures activate only a fraction of the model’s parameters per input token, letting models such as Ling-mini-2.0 match the reasoning and instruction-following capabilities of much larger dense models at significantly lower computational cost. Trained on more than 20 trillion tokens of high-quality data and refined through multi-stage supervised fine-tuning and reinforcement learning, Ling-V2 models perform strongly on general reasoning, mathematical problem-solving, coding, and knowledge-intensive tasks.
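The sparse-activation idea above can be made concrete with a toy top-k gated routing step, the core mechanism of MoE layers: a gate scores all experts, only the k highest-scoring ones actually run, and their outputs are mixed by renormalized gate weights. This is an illustrative sketch in plain Python under assumed toy experts and logits, not Ling-V2's actual routing code.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of gate logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_k_route(gate_logits, k):
    """Pick the top-k experts for one token and renormalize their
    gate probabilities so the selected weights sum to 1."""
    probs = softmax(gate_logits)
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    chosen = ranked[:k]
    total = sum(probs[i] for i in chosen)
    return {i: probs[i] / total for i in chosen}

def moe_forward(x, experts, gate_logits, k):
    # Only the k selected experts execute; the rest are skipped
    # entirely, which is the source of MoE's compute efficiency.
    weights = top_k_route(gate_logits, k)
    return sum(w * experts[i](x) for i, w in weights.items())

# Toy scalar "experts" standing in for per-expert FFN blocks.
experts = [lambda x: 2 * x, lambda x: x + 1, lambda x: -x, lambda x: x * x]
y = moe_forward(3.0, experts, gate_logits=[2.0, 1.0, 0.1, -1.0], k=2)
```

With k=2 of 4 experts selected, half of the expert parameters are untouched for this token; at Ling-V2's scale the activated fraction is far smaller, which is what makes the sparse models cheap relative to dense models of comparable capability.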
Features
- Mixture-of-Experts (MoE) architecture for sparse activation efficiency
- Trained on more than 20 trillion high-quality tokens for broad capability
- Strong general reasoning and instruction-following performance
- Efficient mixed-precision (FP8) training and inference support
- Competitive with larger dense models at lower compute cost
- Open-source MIT-licensed foundation model and tooling
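The FP8 mixed-precision bullet above can be illustrated with a toy per-tensor quantization round trip: scale a tensor so its largest magnitude lands near the format's maximum, round each value onto a 3-bit-mantissa grid (mimicking e4m3), then rescale. This is only a sketch of why low-precision formats lose little accuracy per value; it is not Ling-V2's actual FP8 training pipeline, and subnormals and saturation handling are deliberately omitted.

```python
import math

E4M3_MAX = 448.0  # largest finite value in the common e4m3 FP8 format

def round_to_e4m3_grid(x):
    """Round x to the nearest value representable with a 3-bit mantissa
    (normalized values only, to keep the toy example short)."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)  # x = m * 2**e with 0.5 <= |m| < 1
    # Keep 4 significand bits (1 implicit + 3 mantissa bits).
    return math.ldexp(round(m * 16) / 16, e)

def fp8_roundtrip(values):
    """Simulate per-tensor FP8 quantization: scale so the largest
    magnitude maps to E4M3_MAX, round on the e4m3 grid, rescale."""
    scale = max(abs(v) for v in values) / E4M3_MAX
    return [round_to_e4m3_grid(v / scale) * scale for v in values]

weights = [0.3, -1.7, 0.02, 4.0]
approx = fp8_roundtrip(weights)
```

With a 3-bit mantissa the worst-case relative rounding error per value is about 1/16 (roughly 6%), which is why FP8 storage and arithmetic, paired with higher-precision accumulation, can roughly halve memory and bandwidth versus 16-bit formats while preserving training quality.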