WaveSpeedAI
WaveSpeedAI is a high-performance generative media platform built to dramatically accelerate image, video, and audio creation by combining cutting-edge multimodal models with an ultra-fast inference engine. It supports a wide array of creative workflows, from text-to-video and image-to-video to text-to-image, voice generation, and 3D asset creation, through a unified API designed for scale and speed. The platform integrates top-tier foundation models such as WAN 2.1/2.2, Seedream, FLUX, and HunyuanVideo, and provides streamlined access to a vast model library. Users benefit from blazing-fast generation times, real-time throughput, and enterprise-grade reliability while retaining high-quality output. WaveSpeedAI emphasizes “fast, vast, efficient” performance: fast generation of creative assets, access to a wide-ranging set of state-of-the-art models, and cost-efficient execution without sacrificing quality.
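As a rough illustration of how a unified generation API like this is typically consumed, the Python sketch below submits a text-to-video job and polls for the result. The base URL, routes, payload fields, response shape, and model identifier are placeholders invented for this example, not the documented WaveSpeedAI API; consult the official docs for the real interface.

```python
import os
import time
import requests

# All routes, field names, and the model id below are illustrative placeholders.
API_BASE = "https://api.wavespeed.ai/v1"            # assumed base URL
HEADERS = {"Authorization": f"Bearer {os.environ['WAVESPEED_API_KEY']}"}  # assumed auth scheme

def generate_video(prompt: str, model: str = "wan-2.2/text-to-video") -> str:
    """Submit a text-to-video job and poll until a download URL is ready."""
    resp = requests.post(
        f"{API_BASE}/predictions",                   # assumed job-submission route
        headers=HEADERS,
        json={"model": model, "input": {"prompt": prompt, "duration": 5}},
        timeout=30,
    )
    resp.raise_for_status()
    job_id = resp.json()["id"]                       # assumed response shape

    while True:                                      # simple polling loop
        job = requests.get(f"{API_BASE}/predictions/{job_id}",
                           headers=HEADERS, timeout=30).json()
        if job["status"] == "completed":
            return job["output"]["video_url"]
        if job["status"] == "failed":
            raise RuntimeError(job.get("error", "generation failed"))
        time.sleep(2)

if __name__ == "__main__":
    print(generate_video("a timelapse of clouds rolling over a mountain ridge"))
```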
HunyuanVideo-Avatar
HunyuanVideo-Avatar animates arbitrary input avatar images into high-dynamic, emotion-controllable videos using simple audio conditions. It is a multimodal diffusion transformer (MM-DiT)-based model capable of generating dynamic, emotion-controllable, multi-character dialogue videos. It accepts multi-style avatar inputs (photorealistic, cartoon, 3D-rendered, and anthropomorphic) at arbitrary scales from portrait to full body. The model provides a character image injection module that ensures strong character consistency while enabling dynamic motion; an Audio Emotion Module (AEM) that extracts emotional cues from a reference image to enable fine-grained emotion control over the generated video; and a Face-Aware Audio Adapter (FAA) that isolates audio influence to specific face regions via latent-level masking, supporting independent audio-driven animation in multi-character scenarios.
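To make the latent-level masking idea behind the FAA concrete, here is a minimal PyTorch sketch of masked audio cross-attention, in which each character's audio stream only updates the latent tokens covered by that character's face mask. All tensor shapes, module names, and the single-head attention are assumptions for illustration, not the model's actual implementation.

```python
import torch
import torch.nn as nn

def face_masked_audio_injection(video_latents, audio_feats, face_masks,
                                to_q, to_k, to_v):
    """Toy masked cross-attention: each character's audio reaches only its face tokens.

    video_latents: (B, N, D) flattened video latent tokens
    audio_feats:   (B, C, T, D) audio features, one stream per character
    face_masks:    (B, C, N) 1.0 where a latent token belongs to character c's face
    to_q/to_k/to_v: linear projections, all D -> D (invented interface)
    """
    q = to_q(video_latents)                            # queries come from the video latents
    out = torch.zeros_like(video_latents)
    for c in range(audio_feats.shape[1]):              # one pass per character
        k = to_k(audio_feats[:, c])                    # (B, T, D)
        v = to_v(audio_feats[:, c])
        attn = torch.softmax(q @ k.transpose(1, 2) / q.shape[-1] ** 0.5, dim=-1)
        update = attn @ v                              # (B, N, D) audio-driven update
        # Latent-level mask: only this character's face tokens receive its audio.
        out = out + face_masks[:, c].unsqueeze(-1) * update
    return video_latents + out

# Minimal smoke test with made-up sizes.
B, N, T, C, D = 1, 64, 20, 2, 128
proj = lambda: nn.Linear(D, D)
_ = face_masked_audio_injection(torch.randn(B, N, D), torch.randn(B, C, T, D),
                                torch.randint(0, 2, (B, C, N)).float(),
                                proj(), proj(), proj())
```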
VideoPoet
VideoPoet is a simple modeling method that converts any autoregressive language model or large language model (LLM) into a high-quality video generator. It contains a few simple components. An autoregressive language model learns across video, image, audio, and text modalities to predict the next video or audio token in the sequence. A mixture of multimodal generative learning objectives is introduced into the LLM training framework, including text-to-video, text-to-image, image-to-video, video frame continuation, video inpainting and outpainting, video stylization, and video-to-audio. Furthermore, these tasks can be composed for additional zero-shot capabilities. This simple recipe shows that language models can synthesize and edit videos with a high degree of temporal consistency.
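The core recipe, decoder-only next-token prediction over a shared multimodal token vocabulary, can be sketched in a few lines. The toy model below is not VideoPoet itself; the vocabulary sizes, architecture dimensions, and the greedy text-to-video sampling loop are illustrative assumptions only.

```python
import torch
import torch.nn as nn

# Toy layout: text, video, and audio tokens share one vocabulary by living in
# disjoint ID ranges. The sizes and dimensions are illustrative, not VideoPoet's.
TEXT_VOCAB, VIDEO_VOCAB, AUDIO_VOCAB = 32_000, 8_192, 4_096
VOCAB = TEXT_VOCAB + VIDEO_VOCAB + AUDIO_VOCAB

class TinyMultimodalLM(nn.Module):
    """Decoder-only language model over the shared multimodal vocabulary."""
    def __init__(self, d_model=256, n_layers=4, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, VOCAB)

    def forward(self, tokens):                         # tokens: (B, L) integer ids
        L = tokens.shape[1]
        causal = torch.triu(torch.ones(L, L, dtype=torch.bool), diagonal=1)
        h = self.blocks(self.embed(tokens), mask=causal)
        return self.head(h)                            # next-token logits

# Text-to-video reduces to next-token prediction: condition on text tokens,
# then autoregressively sample tokens restricted to the video ID range.
model = TinyMultimodalLM().eval()
generated = torch.randint(0, TEXT_VOCAB, (1, 16))      # stand-in tokenized prompt
with torch.no_grad():
    for _ in range(8):                                 # sample 8 "video" tokens greedily
        logits = model(generated)[:, -1]
        video_logits = logits[:, TEXT_VOCAB:TEXT_VOCAB + VIDEO_VOCAB]
        next_tok = TEXT_VOCAB + torch.argmax(video_logits, dim=-1, keepdim=True)
        generated = torch.cat([generated, next_tok], dim=1)
```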
Seaweed
Seaweed is a foundational AI model for video generation developed by ByteDance. It uses a diffusion transformer architecture with approximately 7 billion parameters, trained with compute equivalent to 1,000 H100 GPUs. Seaweed learns world representations from vast multimodal data, including video, image, and text, enabling it to create videos of various resolutions, aspect ratios, and durations from text descriptions. It excels at generating lifelike human characters exhibiting diverse actions, gestures, and emotions, as well as a wide variety of landscapes with intricate detail and dynamic composition. Seaweed offers enhanced controls, allowing users to generate videos from images by providing an initial frame to guide consistent motion and style throughout the video. It can also condition on both the first and last frames to create transition videos, and can be fine-tuned to generate videos from reference images.
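One common way to implement first/last-frame conditioning in video diffusion is to pin the conditioning frames inside the noisy latent sequence and exclude them from the training loss. The sketch below illustrates that general pattern under invented shapes, a simplified noising schedule, and a stand-in denoiser; it is not Seaweed's actual training code.

```python
import torch

def training_step(denoiser, video_latents, text_emb, first_frame, last_frame=None):
    """Toy first/last-frame conditioning for video diffusion (shapes are invented).

    video_latents: (B, T, C, H, W) clean latent video
    first_frame:   (B, C, H, W) latent of the conditioning image
    last_frame:    optional (B, C, H, W) latent, for transition videos
    """
    B, T = video_latents.shape[:2]
    t = torch.rand(B, device=video_latents.device).view(B, 1, 1, 1, 1)
    noise = torch.randn_like(video_latents)
    noisy = (1 - t) * video_latents + t * noise        # simplified noising schedule

    # Mask marking frames that are given rather than generated.
    cond_mask = torch.zeros(B, T, 1, 1, 1, device=video_latents.device)
    noisy[:, 0] = first_frame                          # pin the first frame
    cond_mask[:, 0] = 1.0
    if last_frame is not None:                         # transition-video case
        noisy[:, -1] = last_frame
        cond_mask[:, -1] = 1.0

    pred = denoiser(noisy, t, text_emb, cond_mask)     # model predicts the added noise
    return ((pred - noise) ** 2 * (1 - cond_mask)).mean()  # pinned frames skip the loss

# Stand-in denoiser and made-up shapes, just to make the sketch executable.
denoiser = lambda x, t, txt, m: torch.zeros_like(x)
loss = training_step(denoiser,
                     torch.randn(2, 16, 4, 8, 8), torch.randn(2, 77, 512),
                     torch.randn(2, 4, 8, 8), torch.randn(2, 4, 8, 8))
```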