Code for running inference and fine-tuning with the SAM 3 model
LTX-Video Support for ComfyUI
Unified Multimodal Understanding and Generation Models
Qwen-Image-Layered: Layered Decomposition for Inherent Editability
A state-of-the-art open visual language model
Chat & pretrained large vision language model
Tiny vision language model
This repository contains the official implementation of FastVLM
Generating Immersive, Explorable, and Interactive 3D Worlds
VMZ: Model Zoo for Video Modeling
Towards Real-World Vision-Language Understanding
CogView4, CogView3-Plus and CogView3 (ECCV 2024)
VGGSfM: Visual Geometry Grounded Deep Structure From Motion
Wan2.1: Open and Advanced Large-Scale Video Generative Model
GLM-4.6V/4.5V/4.1V-Thinking, towards versatile multimodal reasoning
Multimodal Diffusion with Representation Alignment
Reference PyTorch implementation and models for DINOv3
Let's make video diffusion practical
Python inference and LoRA trainer package for the LTX-2 audio–video model
Official implementation of Watermark Anything with Localized Messages
Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding
Qwen3-Omni is a natively end-to-end, omni-modal LLM
Contexts Optical Compression
State-of-the-art Image & Video CLIP, Multimodal Large Language Models