State-of-the-art Image & Video CLIP and Multimodal Large Language Models
Renderer for the harmony response format to be used with gpt-oss
An LLM-based audio editing model trained with reinforcement learning
Qwen3-omni is a natively end-to-end, omni-modal LLM
Chat & pretrained large vision-language model
Tongyi Deep Research, the Leading Open-source Deep Research Agent
VMZ: Model Zoo for Video Modeling
High-resolution vision models for human-centric tasks
Towards Real-World Vision-Language Understanding
CLIP: predict the most relevant text snippet given an image (see the usage sketch after this list)
Ling is an MoE LLM provided and open-sourced by InclusionAI
Multimodal Diffusion with Representation Alignment
Personalize Any Character with a Scalable Diffusion Transformer
Large Multimodal Models for Video Understanding and Editing
Fast and universal 3D reconstruction model for versatile tasks
4M: Massively Multimodal Masked Modeling
This repository contains the official implementation of FastVLM
ICLR 2024 Spotlight: curation/training code, metadata, and distribution
A Production-Ready Reinforcement Learning AI Agent Library
Official DeiT repository
PyTorch code and models for the DINOv2 self-supervised learning method
Memory-efficient and performant finetuning of Mistral's models
Ring is a reasoning MoE LLM provided and open-sourced by InclusionAI
Research code artifacts for Code World Model (CWM)
Diffusion Transformer with Fine-Grained Chinese Understanding
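A minimal sketch of the zero-shot image-text matching described in the CLIP entry above, assuming the openai/CLIP Python package is installed; the image path and candidate captions are hypothetical placeholders:

```python
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
# Load the ViT-B/32 checkpoint together with its matching image preprocessing.
model, preprocess = clip.load("ViT-B/32", device=device)

# "photo.png" is a placeholder path; the captions are illustrative candidates.
image = preprocess(Image.open("photo.png")).unsqueeze(0).to(device)
text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)

with torch.no_grad():
    # Similarity logits between the image and each candidate caption.
    logits_per_image, logits_per_text = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

# The highest-probability caption is the most relevant text for the image.
print("Caption probabilities:", probs)
```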