Audio foundation model excelling in audio understanding
Official repo of the Qwen2-Audio chat & pretrained large audio language model proposed by Alibaba Cloud
Official Python inference and LoRA trainer package
A Family of Open-Source Music Foundation Models
Open-source multi-speaker long-form text-to-speech model
A multimodal model for brain response prediction
Tencent Hunyuan multimodal diffusion transformer (MM-DiT) model
Multimodal Diffusion with Representation Alignment
Capable of understanding text, audio, vision, and video
Qwen3-Omni is a natively end-to-end, omni-modal LLM
Controllable & emotion-expressive zero-shot TTS
Multimodal-Driven Architecture for Customized Video Generation
Industrial-level controllable zero-shot text-to-speech system
A Systematic Framework for Interactive World Modeling
Qwen3-ASR is an open-source series of ASR models
Open-Source Speech Language Model
Python inference and LoRA trainer package for the LTX-2 audio–video model
Qwen3-TTS is an open-source series of TTS models
State-of-the-art TTS model under 25 MB
High-resolution models for human tasks
Official repository for LTX-Video
Foundational Models for State-of-the-Art Speech and Text Translation
Large Multimodal Models for Video Understanding and Editing
GLM-4-Voice | End-to-End Chinese-English Conversational Model