ChatGLM-6B: An Open Bilingual Dialogue Language Model
GLM-4-Voice | End-to-End Chinese-English Conversational Model
DeepSeek Coder: Let the Code Write Itself
Chat and pretrained large vision-language model
Foundation model for image generation
A series of math-specific large language models built on Qwen2
State-of-the-art text-to-video pre-trained model
Chinese and English multimodal conversational language model
CogView4, CogView3-Plus, and CogView3 (ECCV 2024)
Diffusion Transformer with Fine-Grained Chinese Understanding
Qwen2.5-VL, the multimodal large language model series of the Qwen family
Open-source industrial-grade ASR models
GPT-4V-level open-source multimodal model based on Llama3-8B
GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning
Open-source, high-performance Mixture-of-Experts large language model
StudioOllamaUI is a local, portable interface for Ollama
Qwen2.5-Coder, the code-specialized version of the Qwen2.5 large language model series
AI Suite for upscaling, interpolating & restoring images/videos
Open Multilingual Multimodal Chat LMs
GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
PyTorch implementation of VALL-E (Zero-Shot Text-To-Speech)
Dia-1.6B generates lifelike English dialogue and vocal expressions