Xiaomi MiMo-V2.5 is an open-source AI model that combines strong agentic capabilities with native multimodal understanding, processing and reasoning over text, images, and audio within a single unified system. It uses a sparse Mixture-of-Experts (MoE) architecture: although the model contains hundreds of billions of parameters in total, only a small subset of experts is activated for each token, which keeps inference efficient. An extended context window of up to one million tokens enables long and complex workflows, and dedicated visual and audio encoders enhance perception and cross-modal reasoning. The model demonstrates strong benchmark performance across coding, reasoning, and multimodal tasks. By combining multimodality, efficiency, and agentic intelligence, MiMo-V2.5 advances the capabilities of open-source AI systems.
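
The efficiency of a sparse MoE layer comes from routing each token to only a few experts rather than running all of them. The sketch below is a minimal, generic illustration of top-k gating; it is not MiMo-V2.5's actual implementation, and the toy experts and gate scores are invented for the example.

```python
import math

def top_k_route(x, experts, gate_scores, k=2):
    """Mix the outputs of the top-k experts, weighted by a softmax over
    their gating scores. Generic sparse-MoE sketch, not MiMo's code."""
    # Select the k experts with the highest gating scores.
    top = sorted(range(len(experts)), key=lambda i: gate_scores[i])[-k:]
    # Softmax restricted to the selected experts (numerically stabilized).
    m = max(gate_scores[i] for i in top)
    exps = {i: math.exp(gate_scores[i] - m) for i in top}
    z = sum(exps.values())
    # Only the chosen experts run, so compute scales with k, not len(experts).
    return sum((exps[i] / z) * experts[i](x) for i in top)

# Four toy "experts", each a simple transformation of the input.
experts = [lambda v: v + 1, lambda v: v * 2, lambda v: v - 3, lambda v: v * v]
y = top_k_route(3.0, experts, gate_scores=[0.1, 2.0, -1.0, 2.0], k=2)
print(y)  # → 7.5 (experts 1 and 3 tie, so the result is 0.5*6 + 0.5*9)
```

In a real MoE transformer the experts are feed-forward networks and the gate is a learned linear layer, but the routing principle is the same: total parameter count grows with the number of experts while per-token compute grows only with k.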