LiveAvatar is an open-source research and implementation project that provides a unified framework for real-time, streaming, interactive avatar video generation driven by audio and other control signals. It implements techniques from state-of-the-art diffusion-based avatar modeling to support infinite-length continuous video generation with low latency, enabling interactive AI avatars that maintain continuity and realism over extended sessions.

The project co-designs algorithms and system optimizations, such as block-wise autoregressive processing and fast sampling strategies, to deliver real-time frame rates (e.g., ~45 FPS on appropriate GPU clusters) while sustaining non-stop generation without quality degradation. Beyond high-quality visuals, LiveAvatar targets the responsiveness needed for immersive conversational experiences, making it suitable for advanced AI agents, virtual assistants, and interactive streaming.
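The block-wise autoregressive idea can be sketched as follows. This is an illustrative toy, not LiveAvatar's actual API: the function name `generate_stream`, the block/context sizes, and the NumPy "mixing" step standing in for a fast diffusion sampler are all assumptions. The key point it demonstrates is that only a short tail of the previous block is carried forward, so memory stays bounded and generation can continue indefinitely.

```python
import numpy as np

def generate_stream(audio_chunks, block_size=4, ctx_frames=1, frame_dim=8, seed=0):
    """Toy block-wise autoregressive loop (hypothetical, not the real model).

    Each block of frames is produced from the current audio chunk plus a
    short context carried over from the previous block, so the stream can
    run forever with constant memory.
    """
    rng = np.random.default_rng(seed)
    context = np.zeros((ctx_frames, frame_dim))  # state carried across blocks
    for chunk in audio_chunks:
        # A real system would run a fast diffusion sampler here; a simple
        # context/audio mix plus noise stands in for it.
        block = (context.mean(axis=0) + chunk
                 + 0.01 * rng.standard_normal((block_size, frame_dim)))
        context = block[-ctx_frames:]  # keep only the tail as new context
        yield block                    # frames stream out one block at a time

# Drive the loop with three dummy audio features (one per block).
chunks = [np.full(8, float(i)) for i in range(3)]
frames = list(generate_stream(chunks))
```

Because each iteration depends only on the fixed-size `context`, the loop's memory footprint does not grow with session length, which is what makes infinite-length streaming feasible.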
## Features
- Real-time streaming avatar video generation
- Infinite-length continuous output
- Audio-driven motion and expression control
- Block-wise autoregressive inference pipeline
- High performance on GPU clusters
- Designed for interactive AI use cases
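The real-time requirement above implies a simple latency budget: to sustain a target frame rate, each block of frames must finish generating within `block_size / fps` seconds. A minimal sketch of that arithmetic (the function name and values are illustrative, not from the project):

```python
def block_budget_ms(block_size: int, fps: float) -> float:
    """Time budget (milliseconds) to generate one block of frames
    while sustaining the given frame rate."""
    return 1000.0 * block_size / fps

# Example: a 4-frame block at the ~45 FPS figure quoted above must
# complete in under roughly 89 ms, end to end, to avoid stalling.
budget = block_budget_ms(4, 45.0)
```

This is why the project pairs block-wise generation with fast sampling strategies: every millisecond of sampler time per block counts directly against this budget.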