Official repository for LTX-Video
State-of-the-art (SoTA) text-to-video pre-trained model
RGBD video generation model conditioned on camera input
LTX-Video Support for ComfyUI
VMZ: Model Zoo for Video Modeling
Wan2.2: Open and Advanced Large-Scale Video Generative Model
Wan2.1: Open and Advanced Large-Scale Video Generative Model
Python inference and LoRA trainer package for the LTX-2 audio–video model
Let's make video diffusion practical
GPT-4V-level open-source multimodal model based on Llama3-8B
Video understanding codebase from FAIR for reproducing video models
A GPT-4o Level MLLM for Vision, Speech and Multimodal Live Streaming
Qwen3-VL, the multimodal large language model series by Alibaba Cloud
Large Multimodal Models for Video Understanding and Editing
A Customizable Image-to-Video Model based on HunyuanVideo
Multimodal-Driven Architecture for Customized Video Generation
OCR expert VLM powered by Hunyuan's native multimodal architecture
Tencent Hunyuan Multimodal diffusion transformer (MM-DiT) model
Capable of understanding text, audio, vision, and video
Repo for SeedVR2 & SeedVR
State-of-the-art Image & Video CLIP, Multimodal Large Language Models
Multimodal Diffusion with Representation Alignment
Agentic, Reasoning, and Coding (ARC) foundation models
The Clay Foundation Model - An open source AI model and interface
VGGSfM: Visual Geometry Grounded Deep Structure From Motion