Official Python inference and LoRA trainer package
World's first open-source, agentic video production system
Generate short videos with one click using an LLM
Wan2.1: Open and Advanced Large-Scale Video Generative Model
Python inference and LoRA trainer package for the LTX-2 audio–video model
Wan2.2: Open and Advanced Large-Scale Video Generative Model
Official repository for LTX-Video
AI Fully Automated Short Video Engine
LTX-Video Support for ComfyUI
Text-to-video and image-to-video generation: CogVideoX and CogVideo
AI-powered video generation skill for OpenClaw
Open-Sora: Democratizing Efficient Video Production for All
Director, Screenwriter, Producer, and Video Generator All-in-One
A Customizable Image-to-Video Model based on HunyuanVideo
RGBD video generation model conditioned on camera input
Implementation of Video Diffusion Models
Multimodal-Driven Architecture for Customized Video Generation
Implementation of Phenaki Video, which uses MaskGIT
Implementation of Make-A-Video, a new SOTA text-to-video generator
AI-powered video clipping and highlight generation
Tencent's Hunyuan multimodal diffusion transformer (MM-DiT) model
A Python tool that uses GPT-4, FFmpeg, and OpenCV
Large Multimodal Models for Video Understanding and Editing
End-to-end pipeline converting generative videos
Motion-controllable Video Generation via Latent Trajectory Guidance