Training Large Language Models to Reason in a Continuous Latent Space
Fast, Sharp & Reliable Agentic Intelligence
Driving with Graph Visual Question Answering
A tension reasoning engine over 131 S-class problems
Qwen3-VL, the multimodal large language model series by Alibaba Cloud
Reproduction of Poetiq's record-breaking submission to the ARC-AGI-1 benchmark
Kimi K2 is the large language model series developed by Moonshot AI
gpt-oss-120b and gpt-oss-20b are two open-weight language models
Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding
Vertically Unified Agents for Graph Retrieval-Augmented Reasoning
GLM-4.6V/4.5V/4.1V-Thinking, towards versatile multimodal reasoning
Official Repo for ICML 2024 paper
Achieving 3x+ generation speedup on reasoning tasks
MiMo-V2-Flash: Efficient Reasoning, Coding, and Agentic Foundation Model
MobileLLM: Optimizing Sub-billion Parameter Language Models
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
Scaling Reinforcement Learning with LLMs
Ling-V2 is a MoE LLM provided and open-sourced by InclusionAI
DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models
Ultra-Efficient LLMs on End Device
LongBench v2 and LongBench (ACL '25 & '24)
Multimodal model achieving SOTA performance
Tongyi Deep Research, the Leading Open-source Deep Research Agent
Qwen3.5 is the large language model series developed by Qwen team