NVIDIA Isaac GR00T N1.5 is the world's first open foundation model for generalized humanoid robot reasoning and skills
Tooling for the Common Objects In 3D dataset
Renderer for the harmony response format to be used with gpt-oss
Pushing the Limits of Mathematical Reasoning in Open Language Models
Language modeling in a sentence representation space
LTX-Video Support for ComfyUI
A Pragmatic VLA Foundation Model
MapAnything: Universal Feed-Forward Metric 3D Reconstruction
ICLR 2024 Spotlight: curation/training code, metadata, and distribution
Memory-efficient and performant finetuning of Mistral's models
A GPT-4o Level MLLM for Vision, Speech and Multimodal Live Streaming
GLM-4.6V/4.5V/4.1V-Thinking: towards versatile multimodal reasoning
Qwen2.5-VL is a multimodal large language model series developed by the Qwen team at Alibaba Cloud
Open-source repository for the Pokee Deep Research model
HY-Motion model for 3D character animation generation
A collection of Gemma 3 variants trained for performance
Large Multimodal Models for Video Understanding and Editing
Unified Multimodal Understanding and Generation Models
State-of-the-art Image & Video CLIP, Multimodal Large Language Models
The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language
Open Source Speech Language Model
Implementation of "MobileCLIP" (CVPR 2024)
CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image
Pretrained time-series foundation model developed by Google Research
Ling-V2 is a MoE LLM developed and open-sourced by InclusionAI