HY-Motion 1.0 is an open-source suite of large-scale AI models from Tencent's Hunyuan team that generates high-quality 3D human motion from plain text prompts, producing fluid, diverse, and semantically accurate animations without manual keyframing or rigging. It combines a Diffusion Transformer (DiT) backbone with flow matching and scales this approach to the billion-parameter level, yielding stronger instruction following and richer motion output than existing open-source models. Training proceeds in three stages: large-scale pre-training on thousands of hours of varied motion data, fine-tuning on curated high-quality datasets, and reinforcement learning from human feedback, which together improve the plausibility and adaptability of the generated motion sequences.
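At a high level, flow-matching generation trains a network to predict a velocity field and then integrates that field as an ODE from Gaussian noise toward a data sample. The sketch below is a minimal, self-contained illustration of that sampling loop, not HY-Motion's code: the learned billion-parameter DiT is replaced by the closed-form velocity for a linear interpolation path toward a fixed toy "pose" target, and the 8-dimensional vector is an arbitrary stand-in for a real motion representation.

```python
import numpy as np

# Toy stand-in for the learned, text-conditioned DiT velocity field
# v_theta(x, t). TARGET plays the role of a single "pose" sample; in the
# real model the network's output depends on the prompt and weights.
TARGET = np.linspace(-1.0, 1.0, 8)  # hypothetical 8-dim pose vector

def velocity(x, t):
    # For the linear path x_t = (1 - t) * noise + t * target, the
    # conditional flow-matching velocity given x_t is (target - x_t) / (1 - t).
    return (TARGET - x) / max(1.0 - t, 1e-3)

def sample(steps=100, seed=0):
    # Integrate dx/dt = v(x, t) from t = 0 (pure noise) to t = 1 (data)
    # with plain Euler steps, as a flow-matching sampler would.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(TARGET.shape[0])
    dt = 1.0 / steps
    for i in range(steps):
        x = x + dt * velocity(x, i * dt)
    return x

print(np.round(sample(), 2))
```

Because the toy velocity field is exact, the Euler trajectory lands on the target; with a learned network the same loop produces a novel sample instead.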
Features
- Text-to-3D human motion synthesis from natural-language prompts
- Billion-parameter diffusion transformer models
- Three-stage training: large-scale pre-training, fine-tuning, RL with human feedback
- Skeleton-based output suitable for animation pipelines
- Local inference scripts and model checkpoints for developers
- Compatibility with 3D frameworks and integration tools
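Skeleton-based motion is commonly stored as a root trajectory plus per-frame joint rotations over a fixed kinematic tree, which is what makes it easy to retarget into animation pipelines. The sketch below illustrates that data layout only; the 22-joint SMPL-style skeleton, axis-angle rotation format, and 30 fps rate are assumptions for illustration, not HY-Motion's documented output schema.

```python
import numpy as np

NUM_JOINTS = 22  # assumed SMPL-style joint count, not confirmed for HY-Motion
FPS = 30         # assumed frame rate

def make_clip(seconds=2.0, seed=0):
    """Build a hypothetical skeleton-based clip: global root translation
    per frame, plus an axis-angle rotation per joint per frame."""
    rng = np.random.default_rng(seed)
    frames = int(seconds * FPS)
    return {
        "fps": FPS,
        "root_pos": np.zeros((frames, 3)),                        # (T, 3)
        "joint_rot": 0.1 * rng.standard_normal((frames, NUM_JOINTS, 3)),  # (T, J, 3)
    }

clip = make_clip()
print(clip["joint_rot"].shape)  # (60, 22, 3)
```

A downstream tool would map the per-joint rotations onto a rig's bones frame by frame, which is why this representation slots into existing animation pipelines without rigging work.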