Wan2.2
Wan2.2 is a major upgrade to the Wan suite of open video foundation models. It introduces a Mixture-of-Experts (MoE) architecture that splits the diffusion denoising process across high-noise and low-noise expert paths, dramatically increasing model capacity without raising inference cost. It harnesses meticulously labeled aesthetic data covering lighting, composition, contrast, and color tone to enable precise, controllable cinematic-style video generation. Trained on over 65% more images and 83% more videos than its predecessor, Wan2.2 delivers top performance in motion, semantic, and aesthetic generalization. The release includes a compact, high-compression TI2V-5B model built on an advanced VAE with a 16×16×4 compression ratio, capable of text-to-video and image-to-video synthesis at 720p/24 fps on consumer GPUs such as the RTX 4090. Prebuilt checkpoints for the T2V-A14B, I2V-A14B, and TI2V-5B models enable seamless integration.
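The MoE split described above can be sketched in a few lines. This is a conceptual illustration, not the official Wan2.2 API: each denoising step is routed to one expert based on how noisy the latent still is, so total parameters double while per-step compute does not. The function name and boundary value below are illustrative assumptions.

```python
# Conceptual sketch (not the official Wan2.2 API): route each denoising
# step to one of two experts by noise level, so both experts' parameters
# exist but only one runs per step and inference cost stays flat.

def select_expert(t, boundary=0.875, total_steps=1000):
    """Pick the expert for diffusion timestep t.

    t counts down from total_steps (pure noise) to 0 (clean video);
    boundary is a hypothetical switch point as a fraction of total_steps.
    """
    return "high_noise" if t >= boundary * total_steps else "low_noise"

# Early, noisy steps sketch overall layout and motion (high-noise expert);
# late steps refine detail (low-noise expert).
print(select_expert(950))  # high_noise
print(select_expert(200))  # low_noise
```

Because the switch depends only on the timestep, no learned router is needed and the two experts can be trained and loaded independently.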
Learn more
FramePack AI
FramePack AI revolutionizes video creation by enabling the generation of long, high-quality videos on consumer GPUs with as little as 6 GB of VRAM. It uses smart frame compression and bi-directional sampling to keep computational load constant regardless of video length while avoiding drift and preserving visual fidelity. Key innovations include a fixed context length that compresses frames by importance, progressive frame compression for optimal memory use, and anti-drifting sampling that prevents error accumulation. Fully compatible with existing pretrained video diffusion models, FramePack accelerates training with large-batch support and integrates via fine-tuning under an Apache 2.0 open-source license. Its user-friendly workflow lets creators upload an image or initial frame; set preferences for length, frame rate, and style; generate frames progressively; and preview or download final animations in real time.
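The fixed-context idea can be illustrated with a toy schedule. This is a conceptual sketch under assumed numbers, not the actual FramePack implementation: the newest frame keeps a full token budget, each older frame gets geometrically fewer tokens, and frames below a minimum are dropped, so total context stays bounded no matter how long the video grows. The name `token_budget` and the value `full_tokens=1536` are illustrative.

```python
# Conceptual sketch (not the actual FramePack code): compress a growing
# frame history into a bounded token budget by halving the allocation
# for each step back in time and dropping frames below a minimum.

def token_budget(num_frames, full_tokens=1536, min_tokens=2):
    """Return per-frame token counts, oldest frame first.

    The newest frame keeps full_tokens; each step back in time halves
    the budget, and frames below min_tokens are dropped (budget 0).
    """
    budgets = []
    for age in range(num_frames):          # age 0 = newest frame
        b = full_tokens >> age             # halve per step back in time
        budgets.append(b if b >= min_tokens else 0)
    return list(reversed(budgets))

# The geometric schedule keeps total context below 2 * full_tokens
# regardless of video length, so per-step compute stays constant.
print(token_budget(5))  # [96, 192, 384, 768, 1536]
```

This is what makes generation cost independent of video length: the attention context the model sees is the same size at frame 50 as at frame 5.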
Learn more
Runway Aleph
Runway Aleph is a state‑of‑the‑art in‑context video model that redefines multi‑task visual generation and editing by enabling a vast array of transformations on any input clip. It can seamlessly add, remove, or transform objects within a scene, generate new camera angles, and adjust style and lighting, all guided by natural‑language instructions or visual prompts. Built on cutting‑edge deep‑learning architectures and trained on diverse video datasets, Aleph operates entirely in context, understanding spatial and temporal relationships to maintain realism across edits. Users can apply complex effects, such as object insertion, background replacement, dynamic relighting, and style transfers, without needing separate tools for each task. The model’s intuitive interface integrates directly into Runway’s existing Gen‑4 ecosystem, offering an API for developers and a visual workspace for creators.
Learn more
Vace AI
Vace AI is an all-in-one AI video creation and editing platform designed to simplify every step from concept to production, letting users generate professional-quality videos with advanced AI-driven effects and an intuitive workflow. With support for common formats such as MP4, MOV, and AVI, users upload source footage and choose from a suite of AI-powered tools to move, swap, stylize, resize, or animate any object, while preservation technology for content, structure, subject, pose, and motion keeps key visual elements intact. A drag-and-drop interface with intuitive controls lets both beginners and professionals customize effect parameters, preview changes in real time, and refine outputs, and a single-click generate-and-download process delivers high-quality results ready for immediate use.
Learn more