GLM-4.5: Open-source LLM for intelligent agents by Z.ai
Qwen-Image-Layered: Layered Decomposition for Inherent Editability
GLM-4.6V/4.5V/4.1V-Thinking, towards versatile multimodal reasoning
State-of-the-art LLM and coding model
LLM-based reinforcement learning model for audio editing
Open-weight, large-scale hybrid-attention reasoning model
Bidirectional token-classification model for personally identifiable information
Revolutionizing Database Interactions with Private LLM Technology
Open-source large language model family from Tencent Hunyuan
Phi-3.5 for Mac: Locally-run Vision and Language Models
Tooling for the Common Objects In 3D dataset
GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning
Multimodal embedding and reranking models built on Qwen3-VL
Z80-μLM is a 2-bit quantized language model
Official implementation of Watermark Anything with Localized Messages
General-purpose image editing model that delivers high-fidelity results
Ling-V2 is an MoE LLM provided and open-sourced by InclusionAI
Multimodal model achieving SOTA performance
FAIR Sequence Modeling Toolkit 2
Reproduction of Poetiq's record-breaking submission to the ARC-AGI-1 benchmark
Language modeling in a sentence representation space
Multimodal large language model designed for audio understanding
Release for Improved Denoising Diffusion Probabilistic Models
Official DeiT repository
Python example app from the OpenAI API quickstart tutorial