Official Python inference and LoRA trainer package
Generate short videos with one click using AI LLM
Wan2.1: Open and Advanced Large-Scale Video Generative Model
Wan2.2: Open and Advanced Large-Scale Video Generative Model
Open-Sora: Democratizing Efficient Video Production for All
HunyuanVideo: A Systematic Framework for Large Video Generative Models
Generate high-definition short story videos with one click using AI
AI-powered video generation skill for OpenClaw
RGBD video generation model conditioned on camera input
Text- and image-to-video generation: CogVideoX (2024) and CogVideo
A Customizable Image-to-Video Model based on HunyuanVideo
Implementation of Video Diffusion Models
AI-powered video clipping and highlight generation
End-to-end pipeline converting generative videos
A Python tool that uses GPT-4, FFmpeg, and OpenCV
Motion-controllable Video Generation via Latent Trajectory Guidance
Multimodal-Driven Architecture for Customized Video Generation
Implementation of Recurrent Interface Network (RIN)
Overcoming Data Limitations for High-Quality Video Diffusion Models
CLIP + FFT/DWT/RGB = text to image/video
Implementation of NÜWA, attention network for text to video synthesis
Implementation of NWT, audio-to-video generation, in PyTorch
DCVGAN: Depth Conditional Video Generation, ICIP 2019.