Mixture-of-Experts Vision-Language Models for Advanced Multimodal
GLM-4.6V/4.5V/4.1V-Thinking, towards versatile multimodal reasoning
Official Repo for ICML 2024 paper
Achieving a 3×+ generation speedup on reasoning tasks
MobileLLM: Optimizing Sub-billion Parameter Language Models
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
Ling-V2 is a MoE LLM provided and open-sourced by InclusionAI
Ultra-Efficient LLMs on End Devices
LongBench v2 and LongBench (ACL '25 & '24)
Research code artifacts for Code World Model (CWM)
Tongyi Deep Research, the Leading Open-source Deep Research Agent
An elegant PyTorch implementation of transformers
Director, Screenwriter, Producer, and Video Generator All-in-One
Build multimodal language agents for fast prototype and production
Ling is a MoE LLM provided and open-sourced by InclusionAI
A simple yet powerful agent framework for personal assistants
A modular Agentic RAG built with LangGraph
Qwen3 is the large language model series developed by Qwen team
Renderer for the harmony response format to be used with gpt-oss
Benchmark LLMs by fighting in Street Fighter 3
Lightweight framework for building Agents with memory, knowledge, etc.
Official Implementation of "Recursive Multi-Agent Systems"
Pushing the Limits of Mathematical Reasoning in Open Language Models
A lightweight framework for building LLM-based agents