Repo of Qwen2-Audio chat & pretrained large audio language model
A series of math-specific large language models built on our Qwen2 series
Qwen3-Coder is the code version of Qwen3
Qwen3 is the large language model series developed by the Qwen team
Renderer for the harmony response format to be used with gpt-oss
Qwen3-VL, the multimodal large language model series by Alibaba Cloud
Qwen2.5-VL is the multimodal large language model series
MedicalGPT: Training Your Own Medical GPT Model with a ChatGPT-style Training Pipeline
FAIR Sequence Modeling Toolkit 2
CodeGeeX2: A More Powerful Multilingual Code Generation Model
CodeGeeX: An Open Multilingual Code Generation Model (KDD 2023)
Ling is a MoE LLM provided and open-sourced by InclusionAI
Z80-μLM is a 2-bit quantized language model
General-purpose image editing model that delivers high-fidelity edits
Ling-V2 is a MoE LLM provided and open-sourced by InclusionAI
Research code artifacts for Code World Model (CWM)
MiniMax M2.1, a SOTA model for real-world dev & agents
New family of code large language models (LLMs)
Tiny vision language model
MiMo-V2-Flash: Efficient Reasoning, Coding, and Agentic Foundation
State-of-the-art LLM and coding model
A Family of Open Foundation Models for Code Intelligence
Chat & pretrained large vision language model
MiniMax-M2, a model built for Max coding & agentic workflows
Chat & pretrained large audio language model proposed by Alibaba Cloud