Open-source, high-performance AI model with advanced reasoning
Powerful Mixture-of-Experts (MoE) language model optimized for efficiency and performance
A Powerful Native Multimodal Model for Image Generation
MiMo-V2-Flash: Efficient Reasoning, Coding, and Agentic Foundation
MiniMax-M2, a model built for Max coding & agentic workflows
Large language model and vision-language model based on Linear Attention
Sharp Monocular Metric Depth in Less Than a Second
Towards self-verifiable mathematical reasoning
MapAnything: Universal Feed-Forward Metric 3D Reconstruction
4M: Massively Multimodal Masked Modeling
Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models
Open-source, high-performance Mixture-of-Experts large language model
Large language model developed and released by NVIDIA
Metric monocular depth estimation (vision model)
Qwen3-Next: 80B instruct LLM with ultra-long context up to 1M tokens
Efficient 13B MoE language model with long context and reasoning modes