Qwen3-TTS is an open-source series of TTS models
Open Source Speech Language Model
Qwen3-Omni is a natively end-to-end, omni-modal LLM
Industrial-level controllable zero-shot text-to-speech system
Long-form streaming TTS system for multi-speaker dialogue generation
Capable of understanding text, audio, vision, and video
GLM-4-Voice | End-to-End Chinese-English Conversational Model
Controllable & emotion-expressive zero-shot TTS
State-of-the-art TTS model under 25 MB
Open-source multi-speaker long-form text-to-speech model
Qwen3-ASR is an open-source series of ASR models
Audio foundation model excelling in audio understanding
Open-source framework for intelligent speech interaction
LLM-based reinforcement-learning audio editing model
Open-source industrial-grade ASR models
Repository for Qwen2-Audio, a chat & pretrained large audio language model
A GPT-4o Level MLLM for Vision, Speech and Multimodal Live Streaming
Multi-modal large language model designed for audio understanding
A Conversational Speech Generation Model
Official Python inference and LoRA trainer package
Chat & pretrained large audio language model proposed by Alibaba Cloud
FAIR Sequence Modeling Toolkit 2
PyTorch implementation of VALL-E (Zero-Shot Text-To-Speech)
Dia-1.6B generates lifelike English dialogue and vocal expressions