GLM-4.6V/4.5V/4.1V-Thinking, towards versatile multimodal reasoning
High-Fidelity and Controllable Generation of Textured 3D Assets
Multi-modal large language model designed for audio understanding
Large Multimodal Models for Video Understanding and Editing
The official PyTorch implementation of Google's Gemma models
State-of-the-art Image & Video CLIP, Multimodal Large Language Models
An LLM-based reinforcement learning model for audio editing
A state-of-the-art open visual language model
A series of math-specific large language models built on Qwen2
Inference script for Oasis 500M
Open-weight, large-scale hybrid-attention reasoning model
Chinese and English multimodal conversational language model
Ring, a reasoning MoE LLM open-sourced by InclusionAI
High-resolution models for human tasks
CLIP: predict the most relevant text snippet given an image
4M: Massively Multimodal Masked Modeling
FAIR Sequence Modeling Toolkit 2
A Production-ready Reinforcement Learning AI Agent Library
Official DeiT repository
Repository for the Qwen2-Audio chat and pretrained large audio language models
Stable Diffusion WebUI Forge is a platform on top of Stable Diffusion
DeepMind model for tracking arbitrary points across videos & robotics
Code for Mesh R-CNN (ICCV 2019)
Language modeling in a sentence representation space