...It pairs a 3.4B-parameter language model with a 0.4B-parameter vision encoder, so it can process both text and image inputs. This reasoning-tuned variant is optimized for math, coding, and other STEM problem solving, making it well suited to applications that call for logical reasoning, analysis, or structured thinking. Despite its modest size, the model is built for edge deployment and can run locally, fitting in ~16 GB of VRAM in BF16 or under 8 GB of RAM/VRAM when quantized. It supports dozens of languages for global, multilingual use, retains strong system-prompt adherence, supports function calling with structured JSON output, and offers a large 256k-token context window for long-context reasoning.
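
As a minimal sketch of the function-calling support, the example below assumes the model is served behind a local OpenAI-compatible endpoint (for instance via vLLM or llama.cpp); the base URL, the model name `local-model`, and the `get_weather` tool are illustrative placeholders, not part of the model's documented API.

```python
import json
from openai import OpenAI

# Assumed: a local OpenAI-compatible server; URL, key, and model name
# are placeholders for whatever your deployment uses.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# Hypothetical tool schema the model can choose to invoke.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="local-model",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What's the weather in Reykjavik?"},
    ],
    tools=tools,
)

# If the model decided to call the tool, its arguments arrive as
# structured JSON rather than free-form text.
call = resp.choices[0].message.tool_calls[0]
print(call.function.name)                   # e.g., "get_weather"
print(json.loads(call.function.arguments))  # e.g., {"city": "Reykjavik"}
```

The same chat-completions call works against any server that speaks the OpenAI API, so the sketch carries over unchanged whether the model runs on a workstation GPU in BF16 or quantized on a smaller edge device.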