Visual Causal Flow
LTX-Video Support for ComfyUI
Moonshot's most powerful AI model
Qwen-Image-Layered: Layered Decomposition for Inherent Editability
A state-of-the-art open visual language model
Code for running inference and finetuning with SAM 3 model
GLM-Image: Auto-regressive Generation for Dense-Knowledge, High-Fidelity Images
Qwen3-VL, the multimodal large language model series by Alibaba Cloud
Official Python inference and LoRA trainer package
Tiny vision language model
Recovering the Visual Space from Any Views
Mixture-of-Experts Vision-Language Models for Advanced Multimodal
Unified Multimodal Understanding and Generation Models
This repository contains the official implementation of FastVLM
CogView4, CogView3-Plus and CogView3(ECCV 2024)
Official implementation of Watermark Anything with Localized Messages
Wan2.1: Open and Advanced Large-Scale Video Generative Model
VMZ: Model Zoo for Video Modeling
GLM-4.6V/4.5V/4.1V-Thinking, towards versatile multimodal reasoning
Python inference and LoRA trainer package for the LTX-2 audio–video
Video Object and Interaction Deletion
Multimodal Diffusion with Representation Alignment
Generating Immersive, Explorable, and Interactive 3D Worlds
Reference PyTorch implementation and models for DINOv3
Qwen3-Omni is a natively end-to-end, omni-modal LLM