Uncommon Objects in 3D dataset
Powerful AI language model (MoE) optimized for efficiency and performance
Open-source, high-performance AI model with advanced reasoning
A Powerful Native Multimodal Model for Image Generation
MiMo-V2-Flash: Efficient Reasoning, Coding, and Agentic Foundation
MiniMax-M2, a model built for Max coding & agentic workflows
Large language model & vision-language model based on linear attention
MapAnything: Universal Feed-Forward Metric 3D Reconstruction
Towards self-verifiable mathematical reasoning
Sharp Monocular Metric Depth in Less Than a Second
4M: Massively Multimodal Masked Modeling
Ring, a reasoning MoE LLM open-sourced by InclusionAI
Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models
Open-source, high-performance Mixture-of-Experts large language model
Runtime extension of Proximus enabling deployment on AMD Ryzen™ AI
Per-Pixel Classification is Not All You Need for Semantic Segmentation
Large language model developed and released by NVIDIA
Metric monocular depth estimation (vision model)
Qwen3-Next: 80B instruct LLM with ultra-long context up to 1M tokens
Efficient 13B MoE language model with long context and reasoning modes