Python inference and LoRA trainer package for the LTX-2 audio–video model
Generate short videos with one click using an AI LLM
Wan2.2: Open and Advanced Large-Scale Video Generative Model
Wan2.1: Open and Advanced Large-Scale Video Generative Model
A Python tool that uses GPT-4, FFmpeg, and OpenCV
Open-Sora: Democratizing Efficient Video Production for All
Text- and image-to-video generation: CogVideoX (2024) and CogVideo
AI-powered video clipping and highlight generation
LTX-Video Support for ComfyUI
HunyuanVideo: A Systematic Framework for Large Video Generative Models
Director, Screenwriter, Producer, and Video Generator All-in-One
Official repository for LTX-Video
Implementation of Phenaki Video, which uses MaskGIT
Implementation of Video Diffusion Models
Motion-controllable Video Generation via Latent Trajectory Guidance
End-to-end pipeline converting generative videos
Generate high-definition short story videos with one click using AI
Large Multimodal Models for Video Understanding and Editing
Implementation of Make-A-Video, the new SOTA text-to-video generator
Implementation of Recurrent Interface Network (RIN)
Overcoming Data Limitations for High-Quality Video Diffusion Models
CLIP + FFT/DWT/RGB = text to image/video
Multimodal AI storyteller, built with Stable Diffusion, GPT, etc.
A walk down memory lane
Implementation of NÜWA, an attention network for text-to-video synthesis