RGBD video generation model conditioned on camera input
High-Resolution Image Synthesis with Latent Diffusion Models
Capable of understanding text, audio, vision, and video
A Customizable Image-to-Video Model based on HunyuanVideo
State-of-the-art Image & Video CLIP, Multimodal Large Language Models
Official implementation of Watermark Anything with Localized Messages
Code for running inference and finetuning with the SAM 3 model
Contexts Optical Compression
GPT-4V-level open-source multi-modal model based on Llama3-8B
Chinese and English multimodal conversational language model
Unified Multimodal Understanding and Generation Models
Sharp Monocular Metric Depth in Less Than a Second
Personalize Any Characters with a Scalable Diffusion Transformer
Official implementation of DreamCraft3D
MapAnything: Universal Feed-Forward Metric 3D Reconstruction
Implementation of "MobileCLIP" (CVPR 2024)
Phi-3.5 for Mac: Locally-run Vision and Language Models
GLM-4.6V/4.5V/4.1V-Thinking: Towards Versatile Multimodal Reasoning
Qwen3-Omni is a natively end-to-end, omni-modal LLM
A Systematic Framework for Interactive World Modeling
A state-of-the-art open visual language model
PyTorch code and models for the DINOv2 self-supervised learning method
Generate Any 3D Scene in Seconds
A GPT-4o Level MLLM for Vision, Speech and Multimodal Live Streaming