ZAYA1-8B is a compact Mixture-of-Experts (MoE) reasoning model developed by Zyphra, designed to deliver unusually high intelligence density with fewer than 1 billion active parameters. Of its 8.4B total parameters, only around 760M are active per token during inference, allowing it to achieve strong reasoning, mathematics, and coding performance while remaining lightweight enough for efficient local or on-device deployment.

The model is optimized for long-form reasoning and test-time compute workflows, making it particularly effective at mathematical problem solving, coding tasks, and extended reasoning chains. It introduces architectural innovations, including Compressed Convolutional Attention, a novel MLP-based expert router, and learned residual scaling, to improve routing stability and inference efficiency. ZAYA1-8B was trained entirely on AMD infrastructure and refined through supervised fine-tuning followed by multi-stage reinforcement learning focused on reasoning and coding.
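The sparse-activation idea described above can be illustrated with a minimal top-k MoE layer that uses a small MLP as its router. This is a generic sketch, not ZAYA1's actual architecture: the dimensions, expert count, and router shape below are toy values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

D, N_EXPERTS, TOP_K = 16, 8, 2  # toy sizes, not ZAYA1's real dimensions

# MLP router: one hidden layer, then logits over the experts
W1 = rng.normal(scale=0.1, size=(D, 32))
W2 = rng.normal(scale=0.1, size=(32, N_EXPERTS))

# Each expert is a simple feed-forward transform
experts = [rng.normal(scale=0.1, size=(D, D)) for _ in range(N_EXPERTS)]

def moe_layer(x):
    # The router scores every expert via the small MLP...
    logits = np.tanh(x @ W1) @ W2
    top = np.argsort(logits)[-TOP_K:]   # ...but only the top-k are selected
    weights = np.exp(logits[top])
    weights /= weights.sum()            # normalize over the chosen experts
    # Only the selected experts' parameters are touched for this token,
    # which is why active parameters are a small fraction of the total.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

x = rng.normal(size=D)
y = moe_layer(x)
print(y.shape)  # (16,)
```

Here 2 of 8 experts run per token, so most expert parameters sit idle on any given forward pass, mirroring how ZAYA1-8B activates roughly 760M of its 8.4B parameters.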
Features
- 8.4B-parameter Mixture-of-Experts architecture
- Only ~760M active parameters during inference
- Optimized for reasoning, mathematics, and coding tasks
- Efficient enough for local and on-device deployment
- Compressed Convolutional Attention for efficient inference
- Novel MLP-based expert router for routing stability
- Trained entirely on AMD hardware infrastructure
- Apache 2.0 open-weight release with broad compatibility
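The learned residual scaling mentioned above can be sketched as a per-block learned coefficient on the residual branch, in the spirit of ReZero-style scaling. This is a hedged illustration; ZAYA1's exact formulation is not documented here, and `alpha`, `W`, and the `tanh` branch are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 16  # toy width

class ScaledResidualBlock:
    """Residual block whose branch output is scaled by a learned scalar."""
    def __init__(self):
        self.W = rng.normal(scale=0.1, size=(D, D))
        self.alpha = 0.0  # learned scale; starting near 0 keeps early training stable

    def __call__(self, x):
        # y = x + alpha * f(x): alpha is trained alongside the branch weights,
        # letting the model modulate how much each block contributes.
        return x + self.alpha * np.tanh(x @ self.W)

block = ScaledResidualBlock()
x = rng.normal(size=D)
print(np.allclose(block(x), x))  # True at init, since alpha starts at 0
```

With `alpha` initialized at zero, each block is an identity map at the start of training and gradually "opens up" as `alpha` is learned, one plausible way such scaling aids stability.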