Seedance
Seedance 1.0 API is officially live, giving creators and developers direct access to the world’s most advanced generative video model. Ranked #1 globally on the Artificial Analysis benchmark, Seedance delivers unmatched performance in both text-to-video and image-to-video generation. It supports multi-shot storytelling, allowing characters, styles, and scenes to remain consistent across transitions. Users can expect smooth motion, precise prompt adherence, and diverse stylistic rendering across photorealistic, cinematic, and creative outputs. The API provides a generous free trial with 2 million tokens and affordable pay-as-you-go pricing from just $1.8 per million tokens. With scalability and high concurrency support, Seedance enables studios, marketers, and enterprises to generate 5–10 second cinematic-quality videos in seconds.
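The pricing above (a 2-million-token free trial, then pay-as-you-go from $1.8 per million tokens) can be turned into a quick cost estimate. This is an illustrative sketch only: the constants come from the text, but the function name and the token-accounting model (free tokens consumed first, flat rate after) are assumptions, not Seedance's billing logic.

```python
# Hypothetical cost estimator based on the pricing stated above.
# Assumes the 2M free-trial tokens are consumed before billing starts.

FREE_TRIAL_TOKENS = 2_000_000   # free trial allowance (from the text)
PRICE_PER_MILLION = 1.8         # USD per million tokens, pay-as-you-go

def estimate_cost(tokens_used: int) -> float:
    """Return the estimated USD cost once free-trial tokens are exhausted."""
    billable = max(0, tokens_used - FREE_TRIAL_TOKENS)
    return billable / 1_000_000 * PRICE_PER_MILLION

# A 3.5M-token workload: 2M free, 1.5M billed at $1.8/M
print(f"${estimate_cost(3_500_000):.2f}")  # → $2.70
```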
Learn more
Viggle
Powered by JST-1, the first video-3D foundation model with genuine physics understanding, Viggle starts from a simple premise: make any character move however you want. You can animate a static character with nothing more than a text motion prompt. Viggle AI is unlike anything you've seen before. Meme anyone, dance like a pro, star in your favorite movie scenes, and swap in your own characters, all through Viggle's controllable video generation. Bring your creative scenarios to life and share the enjoyable moments with loved ones. Upload a character image of any size, select a motion template from the library, and generate your video; within minutes, you'll see yourself or your friends blended into captivating scenes. For more control, upload both an image and a video to make the character mimic the movements in your video, which is ideal for creating custom content. Transform friends and family into meme-worthy animations and enjoy the laughs together.
Learn more
Act-Two
Act-Two animates any character by transferring movements, expressions, and speech from a driving performance video onto a static image or reference video of your character. After selecting the Gen‑4 Video model and then the Act‑Two icon in Runway’s web interface, you supply two inputs: a performance video of an actor enacting your desired scene, and a character input (either a single image or a video clip). You can optionally enable gesture control to map hand and body movements onto character images. Act‑Two automatically adds environmental and camera motion to still images; supports a range of angles, non‑human subjects, and artistic styles; and retains original scene dynamics when using character videos (though with facial rather than full‑body gesture mapping). You can adjust facial expressiveness on a sliding scale to balance natural motion with character consistency, preview results in real time, and generate high‑resolution clips up to 30 seconds long.
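The input rules above (a performance video plus a character input; gesture control only for character images; an expressiveness slider) can be sketched as a small validation step. Everything here is an assumption for illustration, including the function name and the 0–1 range chosen for the expressiveness slider; Act-Two itself is driven through Runway's web interface.

```python
# Sketch of Act-Two's input rules as described in the text above.
# Names and the 0-1 expressiveness scale are illustrative assumptions.

def prepare_act_two_inputs(performance_video: str,
                           character: str,
                           character_kind: str,       # "image" or "video"
                           gesture_control: bool = False,
                           expressiveness: float = 0.5) -> dict:
    """Validate and bundle the two Act-Two inputs plus options."""
    if character_kind not in ("image", "video"):
        raise ValueError("character_kind must be 'image' or 'video'")
    if gesture_control and character_kind != "image":
        # Per the text, gesture control maps hand/body motion onto
        # character *images*; character videos keep facial-only mapping.
        raise ValueError("gesture control requires a character image")
    if not 0.0 <= expressiveness <= 1.0:
        raise ValueError("expressiveness must be between 0 and 1")
    return {"performance": performance_video, "character": character,
            "kind": character_kind, "gestures": gesture_control,
            "expressiveness": expressiveness}
```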
Learn more
CrazyTalk Animator
CrazyTalk Animator 3 (CTA3) is an animation solution that enables users of all levels to create professional animations and presentations with minimal effort. With CTA3, anyone can instantly bring an image, logo, or prop to life by applying bouncy elastic motion effects. For characters, CTA3 ships with 2D character templates, vast motion libraries, a powerful 2D bone rig editor, facial puppets, and audio lip-syncing tools that give users unparalleled control when animating 2D talking characters for videos, web, games, apps, and presentations. Key features:
- Animate 2D characters with 3D motions
- Elastic and bouncy curve editing
- Facial puppet and audio lip-syncing
- 2D facial free-form deformation
- 3D camera system, motion paths, and timeline editing
- Motion curves and render styles
- 2D character creation, rigging, and bone tools
- Character templates for humans, animals, and more
Learn more