Multimodal AI Story Teller, built with Stable Diffusion, GPT, etc.
CLIP-guided FFT/DWT/RGB image parameterizations for text-to-image/video generation
DCVGAN: Depth Conditional Video Generation, ICIP 2019.
Implementation of Phenaki Video, which uses MaskGIT
Implementation of Recurrent Interface Network (RIN)
Implementation of Video Diffusion Models
Motion-controllable Video Generation via Latent Trajectory Guidance
End-to-end pipeline converting generative videos
Overcoming Data Limitations for High-Quality Video Diffusion Models
Visual AI Workflow Builder
A Customizable Image-to-Video Model based on HunyuanVideo
Multimodal-Driven Architecture for Customized Video Generation
Implementation of NWT, audio-to-video generation, in PyTorch
Implementation of NÜWA, attention network for text-to-video synthesis
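Several entries above (Phenaki, NÜWA) generate video as sequences of discrete tokens decoded in parallel rather than autoregressively. As a rough illustration of the MaskGIT-style decoding that Phenaki relies on, here is a minimal sketch: all token positions start masked, and each step commits the most confident predictions while re-masking the rest on a cosine schedule. The `MASK` sentinel, `maskgit_decode` signature, and the dummy predictor are illustrative assumptions, not any repo's actual API.

```python
import numpy as np

MASK = -1  # sentinel id for a masked token (illustrative choice)

def maskgit_decode(predict_fn, seq_len, vocab_size, steps=8):
    """Sketch of MaskGIT-style parallel decoding.

    predict_fn(tokens) -> (seq_len, vocab_size) token probabilities.
    All positions start masked; each step commits the most confident
    predictions and re-masks the rest on a cosine schedule.
    """
    tokens = np.full(seq_len, MASK, dtype=np.int64)
    for step in range(1, steps + 1):
        probs = predict_fn(tokens)
        sampled = probs.argmax(axis=-1)          # greedy pick per position
        conf = probs.max(axis=-1)
        committed = tokens != MASK
        sampled[committed] = tokens[committed]   # keep earlier commitments
        conf[committed] = np.inf                 # never re-mask committed tokens
        # cosine schedule: fraction of positions left masked after this step
        n_masked = int(np.floor(np.cos(0.5 * np.pi * step / steps) * seq_len))
        tokens = sampled.copy()
        if n_masked > 0:
            tokens[np.argsort(conf)[:n_masked]] = MASK  # re-mask low confidence
    return tokens

# Usage with a stand-in predictor (a real model would be a transformer):
rng = np.random.default_rng(0)

def dummy_predict(tokens, vocab_size=32):
    logits = rng.random((tokens.shape[0], vocab_size))
    return logits / logits.sum(axis=-1, keepdims=True)

out = maskgit_decode(dummy_predict, seq_len=16, vocab_size=32)
```

Because the schedule reaches zero masked positions at the final step, every position ends up committed to a real token id after `steps` forward passes, instead of one pass per token as in autoregressive decoding.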