Qwen-Image is a powerful image generation foundation model
Foundation model for image generation
General-purpose image editing model that delivers high-fidelity results
Qwen-Image-Layered: Layered Decomposition for Inherent Editability
CLIP: predicts the most relevant text snippet for a given image
A Powerful Native Multimodal Model for Image Generation
Official inference repo for FLUX.2 models
Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding
A Unified Framework for Text-to-3D and Image-to-3D Generation
Wan2.2: Open and Advanced Large-Scale Video Generative Model
Wan2.1: Open and Advanced Large-Scale Video Generative Model
Official inference repo for FLUX.1 models
Chat and pretrained large vision-language model
Multimodal-Driven Architecture for Customized Video Generation
Collection of Gemma 3 variants that are trained for performance
Contexts Optical Compression
Towards Real-World Vision-Language Understanding
Generating Immersive, Explorable, and Interactive 3D Worlds
Diffusion Transformer with Fine-Grained Chinese Understanding
Qwen3-Omni is a natively end-to-end, omni-modal LLM capable of understanding text, audio, vision, and video
Code for running inference and finetuning with the SAM 3 model
High-Resolution Image Synthesis with Latent Diffusion Models
Fast Stable Diffusion on CPU and AI PC
CogView4, CogView3-Plus, and CogView3 (ECCV 2024)