Wan2.5
Wan2.5-Preview introduces a next-generation multimodal architecture designed to redefine visual generation across text, images, audio, and video. Its unified framework enables seamless multimodal inputs and outputs, powering deeper alignment through joint training across all media types. With advanced RLHF tuning, the model delivers superior video realism, expressive motion dynamics, and improved adherence to human preferences. Wan2.5 also excels in synchronized audio-video generation, supporting multi-voice output, sound effects, and cinematic-grade visuals. On the image side, it offers exceptional instruction following, creative design capabilities, and pixel-accurate editing for complex transformations. Together, these features make Wan2.5-Preview a breakthrough platform for high-fidelity content creation and multimodal storytelling.
Qwen-Image
Qwen-Image is a multimodal diffusion transformer (MMDiT) foundation model offering state-of-the-art image generation, text rendering, editing, and understanding. It excels at complex text integration, seamlessly embedding alphabetic and logographic scripts into visuals with typographic fidelity, and supports diverse artistic styles from photorealism to impressionism, anime, and minimalist design. Beyond creation, it enables advanced image editing operations such as style transfer, object insertion or removal, detail enhancement, in-image text editing, and human pose manipulation through intuitive prompts. Its built-in vision understanding tasks, including object detection, semantic segmentation, depth and edge estimation, novel view synthesis, and super-resolution, extend its capabilities into intelligent visual comprehension. Qwen-Image is accessible via popular libraries like Hugging Face Diffusers and integrates prompt-enhancement tools for multilingual support.
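The Diffusers integration mentioned above can be sketched roughly as follows. This is a minimal sketch, not a confirmed recipe: the Hub model ID `Qwen/Qwen-Image`, the dtype, and the step count are assumptions based on typical Diffusers usage, so check the official model card before relying on them.

```python
def generate_poster(prompt: str, out_path: str = "qwen_image.png") -> str:
    """Rough sketch: text-to-image with Qwen-Image via Hugging Face Diffusers.

    Heavy imports are deferred so the sketch can be read without torch or
    diffusers installed. Model ID and generation settings are assumptions.
    """
    import torch
    from diffusers import DiffusionPipeline

    # Assumed Hub ID; consult the model card for the official one.
    pipe = DiffusionPipeline.from_pretrained(
        "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
    )
    pipe.to("cuda")  # a GPU is effectively required at this model scale

    image = pipe(prompt, num_inference_steps=50).images[0]
    image.save(out_path)
    return out_path
```

In practice the prompt-enhancement tools the entry mentions would be applied to `prompt` before the pipeline call.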
FLUX.2 [klein]
FLUX.2 [klein] is the fastest member of the FLUX.2 family of AI image models. It unifies text-to-image generation, image editing, and multi-reference composition in a single compact architecture, delivering state-of-the-art visual quality at sub-second inference times on modern GPUs, which makes it suitable for real-time and latency-critical applications. It supports both generation from prompts and editing of existing images with references, combining high output diversity and photorealism with very low latency so users can iterate quickly in interactive workflows. Distilled versions can produce or edit images in under 0.5 seconds on capable hardware, and even the compact 4B variants run on consumer GPUs with about 8–13 GB of VRAM. The family ships in several variants, including distilled and base versions at 9B and 4B parameter scales, giving developers options for local deployment, fine-tuning, research, and production integration.
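As a back-of-envelope check on those VRAM figures: weight memory alone is parameter count times bytes per parameter. The helper below is an illustrative sketch that counts only the weights, ignoring activations, the text encoder, and framework overhead — which is roughly why the quoted requirement (8–13 GB) sits above the raw weight size.

```python
def weight_gib(n_params: float, bytes_per_param: int = 2) -> float:
    """Approximate GiB needed just to hold model weights.

    Default of 2 bytes/param assumes bf16/fp16 weights; activations,
    text encoder, and runtime overhead are deliberately excluded.
    """
    return n_params * bytes_per_param / 2**30

# 4B parameters in bf16 -> ~7.5 GiB of weights alone, consistent with the
# quoted 8-13 GB of VRAM once everything else is added on top.
print(round(weight_gib(4e9), 1))  # → 7.5
print(round(weight_gib(9e9), 1))  # → 16.8  (9B variant, weights only)
```

Dropping to 8-bit or 4-bit quantized weights halves or quarters the first term, which is the usual route for fitting the 9B variants on consumer GPUs.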
GPT Image 1.5
GPT Image 1.5 is OpenAI’s state-of-the-art image generation model built for precise, high-quality visual creation. It supports both text and image inputs and produces image or text outputs with strong adherence to prompts. The model improves instruction following, enabling more accurate image generation and editing results. GPT Image 1.5 is designed for professional and creative use cases that require reliability and visual consistency. It is available through multiple API endpoints, including image generation and image editing. Pricing is token-based, with separate rates for text and image inputs and outputs. GPT Image 1.5 offers a powerful foundation for developers building image-focused applications.
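Token-based pricing of the kind described is straightforward to model: each request is billed as rate × tokens, summed over the text-input, image-input, and image-output categories. The rates and token counts below are placeholders for illustration only — the real per-token prices are on OpenAI's pricing page — but the arithmetic is the same.

```python
# Placeholder per-million-token rates -- NOT OpenAI's actual prices.
RATES_PER_MTOK = {
    "text_input": 5.00,
    "image_input": 10.00,
    "image_output": 40.00,
}

def request_cost(token_counts: dict) -> float:
    """Cost in dollars of one request: sum of rate * tokens / 1e6 per category."""
    return sum(
        RATES_PER_MTOK[category] * n / 1_000_000
        for category, n in token_counts.items()
    )

# e.g. a 1,000-token text prompt producing an image billed as 4,160 output
# tokens (both counts illustrative):
cost = request_cost({"text_input": 1_000, "image_output": 4_160})
```

Because output image tokens dominate at these rate ratios, estimating cost per generated image mostly reduces to the image-output term.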