GLM-4 series: Open Multilingual Multimodal Chat LMs
Z80-μLM is a 2-bit quantized language model
Official implementation of Watermark Anything with Localized Messages
Audio editing model trained with LLM-based reinforcement learning
Qwen-Image-Layered: Layered Decomposition for Inherent Editability
Ling-V2 is a MoE LLM provided and open-sourced by InclusionAI
FAIR Sequence Modeling Toolkit 2
Bidirectional token-classification model for personally identifiable information
Open-source large language model family from Tencent Hunyuan
Genome modeling and design across all domains of life
GLM-4.6V/4.5V/4.1V-Thinking, towards versatile multimodal reasoning
GLM-4.5: Open-source LLM for intelligent agents by Z.ai
Multimodal embedding and reranking models built on Qwen3-VL
GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning
Open-weight, large-scale hybrid-attention reasoning model
Tooling for the Common Objects In 3D dataset
General-purpose image editing model that delivers high-fidelity edits
Reproduction of Poetiq's record-breaking submission to the ARC-AGI-1 benchmark
Language modeling in a sentence representation space
Designed for text embedding and ranking tasks
Multi-modal large language model designed for audio understanding
Code release for Improved Denoising Diffusion Probabilistic Models
Official DeiT repository
Open Multilingual Multimodal Chat LMs
Code for the paper Hybrid Spectrogram and Waveform Source Separation