Ultravox is an open source multimodal large language model designed for real-time voice-based interaction. It processes spoken audio directly alongside text, eliminating the need for a separate speech recognition stage and enabling more seamless conversational experiences.

Ultravox combines text prompts with encoded audio inputs, so it can follow written instructions and understand spoken language in a single unified pipeline. Internally, it pairs a pretrained language model with a pretrained speech encoder, joined by a multimodal adapter that integrates the two modalities for both training and inference.

The model is optimized for low latency, achieving response times fast enough for interactive voice agents and other real-time applications. Typical use cases include conversational AI agents, speech-to-speech translation, and analysis of spoken audio content. The project also includes tooling and configuration systems for training, evaluation, and dataset integration.
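To make the adapter idea concrete, here is a minimal NumPy sketch of how such a multimodal adapter can work: audio frames from a frozen speech encoder are stacked and linearly projected into the language model's embedding space, then concatenated with the text-token embeddings into one sequence. All dimensions, the stacking factor, and the plain linear projection are illustrative assumptions for this sketch, not Ultravox's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration.
d_audio, d_llm = 512, 1024   # speech-encoder vs. language-model hidden sizes
T = 100                      # number of encoded audio frames (even, for stacking)

audio_features = rng.standard_normal((T, d_audio))   # output of a frozen speech encoder
text_embeddings = rng.standard_normal((8, d_llm))    # embeddings of the text prompt tokens

# Adapter sketch: stack pairs of adjacent frames (downsampling time by 2),
# then project into the language model's embedding space.
W = rng.standard_normal((2 * d_audio, d_llm)) / np.sqrt(2 * d_audio)
stacked = audio_features.reshape(T // 2, 2 * d_audio)
audio_embeddings = stacked @ W                       # shape (T // 2, d_llm)

# The language model then attends over text and audio positions in one sequence.
sequence = np.concatenate([text_embeddings, audio_embeddings], axis=0)
print(sequence.shape)  # → (58, 1024)
```

Because the audio lands in the same embedding space as text, the language model needs no separate transcription step; the downsampling also shortens the audio portion of the sequence, which helps latency.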
Features
- Multimodal input handling for both speech and text in one model
- No separate speech recognition step required for audio processing
- Real-time performance with low latency response generation
- Built on pretrained language model and speech encoder backbones
- Configurable training, evaluation, and dataset pipeline support
- Suitable for voice agents, translation, and audio understanding tasks