Mulan
Mulan is an AI-powered creative platform that lets users generate high-quality visuals, videos, and branding assets without complex software or a physical studio. It can instantly produce e-commerce product shots in professional settings, create consistent movie storyboards with maintained character and style continuity, and turn simple inputs into dynamic short videos for intellectual property or marketing campaigns. It also offers tools to replicate styles from uploaded images by converting them into prompt guidance, replace or insert characters in video clips, and transform logos into creative animated posters and iconography. Users can build full visual kits from a single image, replace clothing in pictures with one click, and generate meme-ready sticker packs, all through intuitive AI workflows and template-driven processes. Mulan simplifies traditionally time-intensive tasks like commercial video production, branding visuals, and storyboard planning.
Learn more
Wan2.2-Animate
Wan2.2 Animate is a specialized module within the Wan video generation framework designed for high-fidelity character animation and character replacement, enabling users to transform static images into dynamic videos or swap subjects within existing footage while preserving realism and motion consistency. It works by taking two primary inputs: a reference image that defines the character’s appearance and a reference video that provides motion, expressions, and scene context. Using this combination, it can animate a still character by replicating body movements, gestures, and facial expressions from the source video, or replace the original subject in a video while maintaining the original lighting, camera movement, and environment for seamless integration. It relies on advanced techniques such as spatially aligned skeleton signals and implicit facial feature extraction to accurately reproduce motion and expressions.
Learn more
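The two-input contract described above (a reference image for appearance, a reference video for motion) can be sketched as a small request object. All names here are hypothetical illustrations of the workflow, not Wan2.2 Animate's actual API:

```python
from dataclasses import dataclass
from enum import Enum


class Mode(Enum):
    ANIMATE = "animate"  # drive the reference image with the video's motion
    REPLACE = "replace"  # swap the video's subject for the reference character


@dataclass
class AnimateRequest:
    """Bundles the two primary inputs the module expects (hypothetical)."""
    reference_image: str  # image defining the character's appearance
    reference_video: str  # video providing motion, expressions, and scene context
    mode: Mode = Mode.ANIMATE

    def validate(self) -> None:
        if not self.reference_image or not self.reference_video:
            raise ValueError("both a reference image and a reference video are required")


def describe(req: AnimateRequest) -> str:
    """Summarize what the job would do, per the mode."""
    req.validate()
    if req.mode is Mode.ANIMATE:
        return (f"Animate character from {req.reference_image} "
                f"using motion in {req.reference_video}")
    return (f"Replace subject in {req.reference_video} "
            f"with character from {req.reference_image}, "
            f"keeping original lighting, camera movement, and environment")
```

The point of the sketch is the separation of concerns: appearance always comes from the image, while motion, expressions, and environment always come from the video, regardless of whether the mode animates a still or replaces a subject.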
Elser AI
Elser AI is an all-in-one AI animation and creative studio that transforms text, images, and ideas into complete visual stories, anime, comics, and short movies. It unifies scriptwriting, character design, storyboarding, voiceover, animation, editing, and sound generation in a single platform, so users no longer need to switch between multiple tools or workflows. Creators can start with a simple description or photo prompt and automatically generate coherent anime art, original characters, dynamic scenes, and full-length shorts with motion, emotion, and a consistent visual style. More than 200 templates and 40+ creation tools cover script and storyboard generation, character creation, camera control, and synchronized voice and music production, letting users build narrative content quickly and efficiently. Built-in AI models handle everything from script and scene structure to voiceovers, turning concepts into professional animated shorts in minutes.
Learn more
Act-Two
Act-Two animates any character by transferring movements, expressions, and speech from a driving performance video onto a static image or reference video of your character. After selecting the Gen‑4 Video model and then the Act‑Two icon in Runway's web interface, you supply two inputs: a performance video of an actor enacting your desired scene and a character input (either a single image or a video clip), optionally enabling gesture control to map hand and body movements onto character images. Act‑Two automatically adds environmental and camera motion to still images; supports a range of angles, non‑human subjects, and artistic styles; and retains original scene dynamics when using character videos (though with facial rather than full‑body gesture mapping). Users can adjust facial expressiveness on a sliding scale to balance natural motion with character consistency, preview results in real time, and generate high‑resolution clips up to 30 seconds long.
Learn more
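The Act-Two inputs and constraints described above can be captured as a small job-spec builder. This is a sketch of the documented behavior (two inputs, gesture control for images only, an expressiveness scale, a 30-second cap); the function and field names are assumptions, not Runway's actual API:

```python
def build_act_two_job(performance_video: str,
                      character: str,
                      character_is_video: bool = False,
                      gesture_control: bool = False,
                      expressiveness: float = 0.5,
                      duration_s: int = 10) -> dict:
    """Assemble a hypothetical job spec mirroring Act-Two's described inputs."""
    # Expressiveness is a sliding scale; model it here as 0.0-1.0 (assumed range).
    if not 0.0 <= expressiveness <= 1.0:
        raise ValueError("expressiveness must be within the sliding scale (0-1 assumed)")
    # Clips are described as up to 30 seconds long.
    if duration_s > 30:
        raise ValueError("Act-Two clips are capped at 30 seconds")
    if character_is_video and gesture_control:
        # Character videos get facial rather than full-body gesture mapping,
        # so full-body gesture control only applies to character images.
        gesture_control = False
    return {
        "performance_video": performance_video,
        "character": character,
        "character_type": "video" if character_is_video else "image",
        "gesture_control": gesture_control,
        "expressiveness": expressiveness,
        "duration_s": duration_s,
    }
```

For example, `build_act_two_job("actor.mp4", "hero.png", gesture_control=True)` keeps gesture control on because the character is an image, while the same call with a character video would silently fall back to facial mapping only.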