Ring is a reasoning-oriented Mixture-of-Experts (MoE) large language model (LLM) developed by inclusionAI and derived from the Ling model family. Its design emphasizes reasoning capability, efficiency, and modular expert activation. The "flash" variant (Ring-flash-2.0) speeds up inference by activating only a small subset of experts per token, and training applies reinforcement-learning-based reasoning optimization. If you are located in mainland China, we also provide the model on ModelScope.cn to speed up the download process.

Features

  • Mixture-of-Experts (MoE) architecture (activates a subset of experts per input)
  • Reasoning-optimized model with reinforcement learning enhancements
  • “Thinking” model variant (flash) with sparse expert activation (e.g., activating roughly 1/32 of experts per token)
  • High inference throughput (e.g., over 200 tokens/sec under optimized serving settings)
  • Multi-stage training: supervised fine-tuning (SFT) + reinforcement learning with verifiable rewards (RLVR) + RLHF
  • Efficient architecture and memory design for large-scale reasoning
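The sparse-activation idea above can be sketched in a few lines: a gating network scores all experts for a token and only the top-k are actually run. This is an illustrative sketch, not Ring's actual routing code; the expert count (128), hidden size (16), and top-k value (4, giving the 1/32 activation ratio mentioned above) are assumptions chosen for the example.

```python
import math
import random

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route(hidden, gate_weights, k):
    """Score every expert for this token, keep only the top-k (sparse activation)."""
    logits = [sum(h * w for h, w in zip(hidden, col)) for col in gate_weights]
    topk = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    probs = softmax([logits[i] for i in topk])
    return list(zip(topk, probs))  # (expert index, mixing weight) pairs

random.seed(0)
E, D, K = 128, 16, 4  # hypothetical sizes: 128 experts, top-4 -> 1/32 activation
gate = [[random.gauss(0, 1) for _ in range(D)] for _ in range(E)]
x = [random.gauss(0, 1) for _ in range(D)]
selected = route(x, gate, K)
print(len(selected))  # prints 4: only 4 of 128 experts run for this token
```

Only the selected experts' feed-forward blocks are evaluated, and their outputs are combined using the mixing weights, which is what keeps per-token compute low relative to the model's total parameter count.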

License

MIT License


Additional Project Details

Operating Systems

Linux, Mac, Windows

Programming Language

Python

Related Categories

Python Large Language Models (LLM), Python AI Models

Registered

2025-09-29