Stable Virtual Camera: Generative View Synthesis with Diffusion Models
GLM-4.6V/4.5V/4.1V-Thinking, towards versatile multimodal reasoning
High-Fidelity and Controllable Generation of Textured 3D Assets
Multi-modal large language model designed for audio understanding
Inference script for Oasis 500M
GLM-4-Voice | End-to-End Chinese-English Conversational Model
Ring, a reasoning MoE LLM open-sourced by InclusionAI
LLM-based audio editing model trained with reinforcement learning
A series of math-specific large language models built on our Qwen2 series
Tiny vision language model
Chinese and English multimodal conversational language model
Repo of Qwen2-Audio chat & pretrained large audio language model
Inference code for scalable emulation of protein equilibrium ensembles
High-resolution models for human tasks
Towards Real-World Vision-Language Understanding
CLIP: predict the most relevant text snippet given an image
4M: Massively Multimodal Masked Modeling
FAIR Sequence Modeling Toolkit 2
Official DeiT repository
Hackable and optimized Transformers building blocks
Chinese LLaMA-2 & Alpaca-2 Large Model Phase II Project
Stable Diffusion WebUI Forge is a platform on top of Stable Diffusion
DeepMind model for tracking arbitrary points across videos & robotics
Code for Mesh R-CNN (ICCV 2019)