Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding
Inference framework for 1-bit LLMs (see the ternary-quantization sketch after this list)
GLM-4.6V, GLM-4.5V, and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning
Chat & pretrained large vision-language model
Inference code for scalable emulation of protein equilibrium ensembles
Memory-efficient and performant finetuning of Mistral's models
Qwen3-Omni is a natively end-to-end, omni-modal LLM
Official implementation of Watermark Anything with Localized Messages
Video understanding codebase from FAIR for reproducing video models
Towards Real-World Vision-Language Understanding
A SOTA open-source image editing model
Fast and universal 3D reconstruction model for versatile tasks
A PyTorch library for implementing flow matching algorithms (see the training sketch after this list)
A GPT-4o Level MLLM for Vision, Speech and Multimodal Live Streaming
GPT-4V-level open-source multi-modal model based on Llama3-8B
NVIDIA Isaac GR00T N1.5 is the world's first open foundation model for generalized humanoid robot reasoning and skills
The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language
Ling is an MoE LLM developed and open-sourced by InclusionAI
Phi-3.5 for Mac: Locally Run Vision and Language Models
Revolutionizing Database Interactions with Private LLM Technology
Open-source framework for intelligent speech interaction
Implementation of the Surya Foundation Model for Heliophysics
Stable Virtual Camera: Generative View Synthesis with Diffusion Models
Large language model & vision-language model based on linear attention (see the linear-attention sketch after this list)
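For the 1-bit LLM inference entry, a minimal sketch of BitNet-style absmean ternary quantization, which is what "1-bit" (more precisely 1.58-bit) weights usually mean; `quantize_ternary` and all shapes are illustrative assumptions, not the framework's actual API.

```python
import torch

def quantize_ternary(w: torch.Tensor, eps: float = 1e-5):
    """Hypothetical absmean quantizer: map weights to {-1, 0, +1} plus a scale."""
    scale = w.abs().mean().clamp(min=eps)   # per-tensor absmean scale
    w_q = (w / scale).round().clamp(-1, 1)  # snap each weight to {-1, 0, +1}
    return w_q, scale

w = torch.randn(4, 4)
w_q, scale = quantize_ternary(w)
w_hat = w_q * scale                          # dequantized approximation of w
print(w_q)                                   # entries are -1.0, 0.0, or 1.0
print((w - w_hat).abs().mean())              # mean reconstruction error
```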
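For the flow matching entry, a minimal training-loop sketch assuming a rectified-flow-style linear path between noise and data; `VelocityNet`, the toy target distribution, and all hyperparameters are illustrative, not the library's actual API.

```python
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Tiny MLP predicting the velocity field v(x_t, t)."""
    def __init__(self, dim: int = 2, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x_t, t):
        return self.net(torch.cat([x_t, t], dim=-1))

model = VelocityNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(1000):
    x1 = torch.randn(256, 2) + 3.0   # stand-in "data" samples
    x0 = torch.randn_like(x1)        # noise samples
    t = torch.rand(x1.size(0), 1)    # uniform times in [0, 1]
    x_t = (1 - t) * x0 + t * x1      # point on the linear interpolation path
    target = x1 - x0                 # the path's constant velocity
    loss = ((model(x_t, t) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Sampling then integrates the learned velocity field from t = 0 to t = 1, e.g. with a simple Euler loop.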
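For the linear-attention entry, a minimal non-causal sketch of the O(N) associativity trick that avoids materializing the N x N attention matrix; the elu+1 feature map and the `linear_attention` helper are assumptions for illustration, not any listed repo's implementation.

```python
import torch

def linear_attention(q, k, v, eps: float = 1e-6):
    """q, k: (batch, seq, dim); v: (batch, seq, dim_v)."""
    q = torch.nn.functional.elu(q) + 1.0           # positive feature map phi
    k = torch.nn.functional.elu(k) + 1.0
    # Associativity: (phi(q) @ phi(k)^T) @ v == phi(q) @ (phi(k)^T @ v);
    # the right-hand grouping costs O(N) in sequence length.
    kv = torch.einsum("bnd,bne->bde", k, v)        # sum_n phi(k_n) v_n^T
    z = torch.einsum("bnd,bd->bn", q, k.sum(dim=1)) + eps  # normalizer
    return torch.einsum("bnd,bde->bne", q, kv) / z.unsqueeze(-1)

q, k, v = (torch.randn(2, 128, 64) for _ in range(3))
print(linear_attention(q, k, v).shape)             # torch.Size([2, 128, 64])
```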