GLM-4 series: Open Multilingual Multimodal Chat LMs
GLM-4-Voice | End-to-End Chinese-English Conversational Model
Agentic, Reasoning, and Coding (ARC) foundation models
GLM-4.5: Open-source LLM for intelligent agents by Z.ai
Advanced language and coding AI model
GLM-4.6V/4.5V/4.1V-Thinking, towards versatile multimodal reasoning
Official inference repo for FLUX.2 models
Code for running inference and fine-tuning with the SAM 3 model
Open-source, high-performance AI model with advanced reasoning
Open-source multi-speaker long-form text-to-speech model
A state-of-the-art open visual language model
Tooling for the Common Objects In 3D dataset
Collection of Gemma 3 variants that are trained for performance
Large language model & vision-language model based on linear attention
Capable of understanding text, audio, images, and video
AI suite for upscaling, interpolating & restoring images/videos
GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
JetBrains’ 4B-parameter code model for code completion