Official repository for LTX-Video
State-of-the-art (SoTA) text-to-video pre-trained model
Wan2.2: Open and Advanced Large-Scale Video Generative Model
VMZ: Model Zoo for Video Modeling
Wan2.1: Open and Advanced Large-Scale Video Generative Model
RGBD video generation model conditioned on camera input
LTX-Video Support for ComfyUI
Video understanding codebase from FAIR for reproducing video models
Large Multimodal Models for Video Understanding and Editing
A Customizable Image-to-Video Model based on HunyuanVideo
Multimodal-Driven Architecture for Customized Video Generation
Repo for SeedVR2 & SeedVR
Capable of understanding text, audio, vision, and video
State-of-the-art Image & Video CLIP, Multimodal Large Language Models
Python inference and LoRA trainer package for the LTX-2 audio–video model
Let's make video diffusion practical
GPT4V-level open-source multi-modal model based on Llama3-8B
Multimodal Diffusion with Representation Alignment
Tencent Hunyuan Multimodal diffusion transformer (MM-DiT) model
A GPT-4o Level MLLM for Vision, Speech and Multimodal Live Streaming
Code for running inference and finetuning with the SAM 3 model
VGGSfM: Visual Geometry Grounded Deep Structure From Motion
GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning
Qwen2.5-VL is a multimodal large language model series
Qwen3-Omni is a natively end-to-end, omni-modal LLM