State-of-the-art Image & Video CLIP, Multimodal Large Language Models
GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning
Agentic, Reasoning, and Coding (ARC) foundation models
Diversity-driven optimization and large-model reasoning ability
Ring is a reasoning MoE LLM provided and open-sourced by InclusionAI
A series of math-specific large language models built on Qwen2
GLM-4 series: Open Multilingual Multimodal Chat LMs
Open-source, high-performance AI model with advanced reasoning
GLM-4.5: Open-source LLM for intelligent agents by Z.ai
Open-weight, large-scale hybrid-attention reasoning model
A theoretical reconstruction of the Claude Mythos architecture
Advanced language and coding AI model
Reproduction of Poetiq's record-breaking submission to the ARC-AGI-1 benchmark
gpt-oss-120b and gpt-oss-20b are two open-weight language models
Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding
GLM-4.6V/4.5V/4.1V-Thinking, towards versatile multimodal reasoning
Achieving a 3×+ generation speedup on reasoning tasks
Ling-V2 is a MoE LLM provided and open-sourced by InclusionAI
Ultra-Efficient LLMs on End Devices
Research code artifacts for Code World Model (CWM)
Tongyi Deep Research, the Leading Open-source Deep Research Agent
Ling is a MoE LLM provided and open-sourced by InclusionAI
Qwen3 is the large language model series developed by the Qwen team
Renderer for the harmony response format to be used with gpt-oss