Agentic, Reasoning, and Coding (ARC) foundation models
Advanced language and coding AI model
GLM-4.5: Open-source LLM for intelligent agents by Z.ai
Accurate × Fast × Comprehensive
GLM-4 series: Open Multilingual Multimodal Chat LMs
Controllable & emotion-expressive zero-shot TTS
GLM-4.6V/4.5V/4.1V-Thinking, towards versatile multimodal reasoning
GLM-4-Voice | End-to-End Chinese-English Conversational Model
A series of math-specific large language models built on Qwen2
Capable of understanding text, audio, images, and video
Qwen2.5-VL, a multimodal large language model series
A powerful local music generation model
Robust Speech Recognition Across Languages and Dialects
Real-time voice interactive digital human
A Systematic Framework for Interactive World Modeling
InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System
ChatGLM-6B: An Open Bilingual Dialogue Language Model
CogView4, CogView3-Plus, and CogView3 (ECCV 2024)
Committed to building an open, public welfare
An open-source end-to-end VLM-based GUI agent
Unleashing 10,000+ Word Generation from Long Context LLMs