...It pairs a 13.5B-parameter language model with a 0.4B vision encoder, enabling strong multimodal reasoning across text and images. This version is post-trained specifically for reasoning, making it well suited to math, coding, STEM workloads, and complex multi-step problem-solving. Despite its scale, the model is engineered for practical deployment: it runs locally on 32GB of VRAM in BF16, or in under 24GB when quantized. It maintains robust system-prompt adherence, supports dozens of languages, and offers native function calling with clean JSON output for agentic workflows. The architecture also provides a 256k-token context window, unlocking large-document analysis and long-form reasoning.
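As a rough sketch of how that clean JSON function-calling output might be consumed, the snippet below parses a tool call and dispatches it to a local function. The tool schema, the `get_weather` function, and the reply payload are all hypothetical illustrations, not this model's actual API or captured output.

```python
import json

# Hypothetical tool schema in the common OpenAI-style function-calling
# format; the exact schema this model expects may differ.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# A model emitting clean JSON tool calls might return a string like this;
# the payload is illustrative, not real model output.
raw_reply = '{"name": "get_weather", "arguments": {"city": "Oslo"}}'

def dispatch(reply: str, registry: dict) -> str:
    """Parse a JSON tool call and invoke the matching local function."""
    call = json.loads(reply)
    fn = registry[call["name"]]
    return fn(**call["arguments"])

# Map tool names to local implementations (stubbed here).
registry = {"get_weather": lambda city: f"22°C and clear in {city}"}
print(dispatch(raw_reply, registry))  # → 22°C and clear in Oslo
```

Because the model's output is valid JSON rather than free text, the dispatch step needs no regex scraping, which is what makes this pattern reliable in agent loops.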