Wan2GP is an open-source AI video generation toolkit designed to make modern generative models accessible on consumer-grade hardware with limited GPU memory. It acts as a unified interface for running multiple video, image, and audio generation models, including Wan-based models as well as other systems such as Hunyuan Video, Flux, and Qwen. A key focus of the project is reducing VRAM requirements: some workflows run on as little as 6 GB, and older Nvidia GPUs as well as certain AMD GPUs remain supported.

Wan2GP provides a full web-based interface that simplifies interaction with complex generative pipelines, making it easier to configure prompts, models, and rendering settings. It also integrates a wide range of utilities, such as prompt enhancement, mask editing, motion design, and extraction tools for pose, depth, and flow data, to support advanced video workflows.
Features
- Low VRAM support with some models running on ~6 GB GPUs
- Web-based interface for managing prompts, models, and outputs
- Support for multiple generative models including video, image, and audio
- Built-in tools like mask editor, prompt enhancer, and motion designer
- Plugin system with tools such as upscalers and model managers
- Task queuing and headless mode for batch generation workflows
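To illustrate the kind of queued batch workflow the last bullet describes, here is a minimal sketch in Python. The `GenerationJob` fields and the `generate` function are hypothetical stand-ins for a render call, not Wan2GP's actual API; they only show the shape of queuing jobs and draining them headlessly.

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class GenerationJob:
    # Hypothetical job description; real jobs would carry many more settings.
    prompt: str
    model: str
    resolution: tuple[int, int]

def generate(job: GenerationJob) -> str:
    # Placeholder for a real render call; returns a fake output path.
    return f"outputs/{job.model}_{abs(hash(job.prompt)) % 10000}.mp4"

def run_queue(jobs: list[GenerationJob]) -> list[str]:
    # Enqueue every job, then process them one at a time,
    # as a headless batch run would.
    q: Queue[GenerationJob] = Queue()
    for job in jobs:
        q.put(job)
    outputs: list[str] = []
    while not q.empty():
        outputs.append(generate(q.get()))
    return outputs
```

In practice the queue would be fed from the web UI or a job file, and each job would invoke the selected model's pipeline rather than a placeholder.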