Image generation model with a single-stream diffusion transformer
Qwen-Image is a powerful image generation foundation model
GLM-Image: Auto-regressive Modeling for Dense-Knowledge and High-Fidelity Image Generation
Qwen-Image-Layered: Layered Decomposition for Inherent Editability
Official inference repo for FLUX.2 models
A Powerful Native Multimodal Model for Image Generation
Official DeiT repository
Models for object and human mesh reconstruction
Wan2.2: Open and Advanced Large-Scale Video Generative Model
Official inference repo for FLUX.1 models
Chat & pretrained large vision-language model
Wan2.1: Open and Advanced Large-Scale Video Generative Model
A Unified Framework for Text-to-3D and Image-to-3D Generation
CogView4, CogView3-Plus, and CogView3 (ECCV 2024)
Diffusion Transformer with Fine-Grained Chinese Understanding
Generating Immersive, Explorable, and Interactive 3D Worlds
Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding
Collection of Gemma 3 variants trained for performance
A SOTA open-source image editing model
CLIP (Contrastive Language-Image Pre-training): predict the most relevant text snippet for a given image (see the sketch after this list)
High-Resolution 3D Asset Generation with Large-Scale Diffusion Models
Code for running inference with the SAM 3D Body Model (3DB)
Reference PyTorch implementation and models for DINOv3
Towards Real-World Vision-Language Understanding
Multimodal-Driven Architecture for Customized Video Generation
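
The CLIP entry above is the one item in this list that states how the model is used: score a set of candidate text snippets against an image and return the best match. Below is a minimal zero-shot sketch, assuming the openai/CLIP package is installed (pip install git+https://github.com/openai/CLIP.git); the image path "example.jpg" and the candidate captions are placeholders.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load a pretrained CLIP model and its matching image preprocessing pipeline.
model, preprocess = clip.load("ViT-B/32", device=device)

# Placeholder image and candidate text snippets (assumptions, not from the list above).
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
texts = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)

with torch.no_grad():
    # logits_per_image holds the image-to-text similarity scores.
    logits_per_image, logits_per_text = model(image, texts)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

# Highest probability corresponds to the most relevant text snippet.
print("Label probs:", probs)
```

CLIP embeds the image and each text snippet into a shared space, so the logits are scaled cosine similarities; softmax over them turns the candidate captions into a zero-shot classification.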