GLM-V is an open-source vision-language model (VLM) series from ZhipuAI that extends the GLM foundation models into multimodal reasoning and perception. The repository provides both GLM-4.5V and GLM-4.1V models, designed to advance beyond basic perception toward higher-level reasoning, long-context understanding, and agent-based applications.

GLM-4.5V builds on the flagship GLM-4.5-Air foundation model (106B total parameters, 12B active), achieving state-of-the-art results on 42 benchmarks across image, video, document, GUI, and grounding tasks. It introduces hybrid training for broad-spectrum reasoning and a Thinking Mode switch to balance speed and depth of reasoning.

GLM-4.1V-9B-Thinking incorporates Reinforcement Learning with Curriculum Sampling (RLCS) and chain-of-thought reasoning, outperforming models of much larger scale (e.g., Qwen-2.5-VL-72B) across many benchmarks.
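As a quick orientation, the sketch below shows multimodal inference against a locally hosted, OpenAI-compatible endpoint (e.g. one started with vLLM or SGLang). The port, API key, image URL, and served-model name are placeholders, not values prescribed by this repository:

```python
from openai import OpenAI

# Assumes a local OpenAI-compatible server is already running; base_url,
# api_key, and the model name below are placeholders for your deployment.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="GLM-4.1V-9B-Thinking",  # or the GLM-4.5V checkpoint being served
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/demo.jpg"}},
                {"type": "text", "text": "Describe this image and explain your reasoning."},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```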
Features
- Bilingual (Chinese/English) multimodal reasoning and perception
- GLM-4.5V: hybrid-trained flagship with state-of-the-art benchmark scores
- GLM-4.1V-9B-Thinking: reasoning-focused model with RLCS and CoT mechanisms
- Long-context support (up to 64K tokens) and flexible input (images, video, documents)
- GUI agent capabilities with platform-aware prompts and precise grounding
- Thinking Mode switch to toggle between fast and deep reasoning outputs
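A hedged sketch of toggling Thinking Mode at request time, assuming the serving layer forwards chat-template arguments via `extra_body` and that the flag is named `enable_thinking`; both are assumptions and may differ from the actual interface:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # placeholder endpoint

# Hypothetical Thinking Mode toggle: the "enable_thinking" flag name and the
# chat_template_kwargs routing are assumptions about the serving setup.
for thinking in (False, True):
    reply = client.chat.completions.create(
        model="GLM-4.5V",  # placeholder served-model name
        messages=[{"role": "user", "content": "Summarize the key points of this contract."}],
        extra_body={"chat_template_kwargs": {"enable_thinking": thinking}},
    )
    print(f"enable_thinking={thinking}:", reply.choices[0].message.content)
```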