State-of-the-art Image & Video CLIP, Multimodal Large Language Models
New set of lightweight state-of-the-art, open foundation models
GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning
Agentic, Reasoning, and Coding (ARC) foundation models
Diversity-driven optimization and large-model reasoning ability
Ring is a reasoning MoE LLM developed and open-sourced by InclusionAI
A series of math-specific large language models built on Qwen2
GLM-4 series: Open Multilingual Multimodal Chat LMs
Moonshot's most powerful AI model
Open-source, high-performance AI model with advanced reasoning
From Vibe Coding to Agentic Engineering
GLM-4.5: Open-source LLM for intelligent agents by Z.ai
Open-weight, large-scale hybrid-attention reasoning model
A theoretical reconstruction of the Claude Mythos architecture
Advanced language and coding AI model
Fast, Sharp & Reliable Agentic Intelligence
Qwen3-VL, the multimodal large language model series by Alibaba Cloud
Reproduction of Poetiq's record-breaking submission to the ARC-AGI-1
Kimi K2 is the large language model series developed by Moonshot AI
gpt-oss-120b and gpt-oss-20b are two open-weight language models
Mixture-of-Experts Vision-Language Models for Advanced Multimodal
GLM-4.6V/4.5V/4.1V-Thinking, towards versatile multimodal reasoning
Achieving a 3×+ generation speedup on reasoning tasks
MiMo-V2-Flash: Efficient Reasoning, Coding, and Agentic Foundation