GLM-4.5: Open-source LLM for intelligent agents by Z.ai
A series of math-specific large language models built on the Qwen2 series
Capable of understanding text, audio, vision, and video
Qwen2.5-VL is the multimodal large language model series developed by the Qwen team at Alibaba Cloud
A Systematic Framework for Interactive World Modeling
Open-source large language model by Alibaba
CodeGeeX: An Open Multilingual Code Generation Model (KDD 2023)
Wan2.2: Open and Advanced Large-Scale Video Generative Model
INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model
CodeGeeX2: A More Powerful Multilingual Code Generation Model
State-of-the-art TTS model under 25MB
Qwen2.5-Coder is the code version of the Qwen2.5 large language model series
FlashMLA: Efficient Multi-head Latent Attention Kernels
LLM-based audio editing model trained with reinforcement learning
FAIR Sequence Modeling Toolkit 2
High-Fidelity and Controllable Generation of Textured 3D Assets
Large Multimodal Models for Video Understanding and Editing
Grok-2.5: large-scale xAI model for local inference with SGLang
Compact hybrid reasoning language model for intelligent responses
Powerful 14B LLM with strong instruction-following and long-text handling
BGE-Large v1.5: High-accuracy English embedding model for retrieval
Qwen2.5-VL-3B-Instruct: Multimodal model for chat, vision & video
Compact English sentence embedding model for semantic search tasks
Multimodal 7B model for image, video, and text understanding tasks
Efficient English embedding model for semantic search and retrieval
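Several of the entries above are English embedding models aimed at semantic search and retrieval. The snippet below is a minimal sketch of that use case, assuming the sentence-transformers library; the checkpoint ID BAAI/bge-large-en-v1.5 and the example corpus are illustrative assumptions, not taken from the list itself.

```python
# Minimal semantic-search sketch with an English embedding model.
# Assumption: the sentence-transformers library is installed and the
# BAAI/bge-large-en-v1.5 checkpoint is used as a representative retriever.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("BAAI/bge-large-en-v1.5")

corpus = [
    "GLM-4.5 is an open-source LLM for intelligent agents.",
    "Qwen2.5-VL is a multimodal large language model series.",
    "rwkv.cpp runs INT4/INT8 and FP16 RWKV inference on CPU.",
]
query = "Which model targets agent workloads?"

# Encode corpus and query into dense vectors, then rank by cosine similarity.
corpus_emb = model.encode(corpus, normalize_embeddings=True)
query_emb = model.encode(query, normalize_embeddings=True)
scores = util.cos_sim(query_emb, corpus_emb)[0]

best = int(scores.argmax())
print(f"Best match ({float(scores[best]):.3f}): {corpus[best]}")
```

The same pattern applies to the other embedding entries in the list: swap in a different checkpoint ID and the encode-then-rank flow stays unchanged.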