Audio foundation model excelling in audio understanding
Repository of the Qwen2-Audio chat & pretrained large audio language models
Open-source framework for intelligent speech interaction
Chat & pretrained large audio language model proposed by Alibaba Cloud
LLM-based reinforcement learning audio editing model
Multi-modal large language model designed for audio understanding
Official Python inference and LoRA trainer package
A Family of Open-Source Music Foundation Models
Open-source multi-speaker long-form text-to-speech model
Qwen3-Omni is a natively end-to-end, omni-modal LLM
A multimodal model for brain response prediction
Tencent Hunyuan multimodal diffusion transformer (MM-DiT) model
Multimodal Diffusion with Representation Alignment
Capable of understanding text, audio, vision, and video
Multimodal-Driven Architecture for Customized Video Generation
Controllable & emotion-expressive zero-shot TTS
Industrial-level controllable zero-shot text-to-speech system
A Systematic Framework for Interactive World Modeling
Qwen3-ASR is an open-source series of ASR models
Open Source Speech Language Model
Qwen3-TTS is an open-source series of TTS models
Python inference and LoRA trainer package for the LTX-2 audio–video model
State-of-the-art TTS model under 25 MB
VMZ: Model Zoo for Video Modeling
High-resolution models for human tasks