VideoPoet
VideoPoet is a simple modeling method that can convert any autoregressive language model or large language model (LLM) into a high-quality video generator. It consists of a few simple components. An autoregressive language model learns across video, image, audio, and text modalities to autoregressively predict the next video or audio token in the sequence. A mixture of multimodal generative learning objectives is introduced into the LLM training framework, including text-to-video, text-to-image, image-to-video, video frame continuation, video inpainting and outpainting, video stylization, and video-to-audio. Furthermore, these tasks can be composed for additional zero-shot capabilities. This simple recipe shows that language models can synthesize and edit videos with a high degree of temporal consistency.
Learn more
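To make the core idea concrete, here is a minimal sketch of next-token prediction over a single shared token space that mixes text, video, and audio tokens, as the description above outlines. It is an illustration only, not VideoPoet's code: the vocabulary sizes, model dimensions, and class names are assumptions.

```python
# Minimal sketch (not VideoPoet's code): next-token prediction over a shared
# multimodal vocabulary. Token ranges, sizes, and names are illustrative only.
import torch
import torch.nn as nn

TEXT_TOKENS, VIDEO_TOKENS, AUDIO_TOKENS = 32_000, 8_192, 4_096
VOCAB = TEXT_TOKENS + VIDEO_TOKENS + AUDIO_TOKENS  # one flat token space

class TinyMultimodalLM(nn.Module):
    def __init__(self, d_model=256, n_layers=4, n_heads=4, max_len=512):
        super().__init__()
        self.tok = nn.Embedding(VOCAB, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model,
                                           batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, VOCAB)

    def forward(self, ids):
        b, t = ids.shape
        x = self.tok(ids) + self.pos(torch.arange(t, device=ids.device))
        mask = nn.Transformer.generate_square_subsequent_mask(t).to(ids.device)
        return self.head(self.blocks(x, mask=mask))  # causal self-attention

# One training step: the model predicts token i+1 from tokens 0..i, regardless
# of whether those tokens came from the text, video, or audio tokenizer.
model = TinyMultimodalLM()
seq = torch.randint(0, VOCAB, (2, 128))   # stand-in for tokenized clips + captions
logits = model(seq[:, :-1])
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB), seq[:, 1:].reshape(-1))
loss.backward()
```

Because every modality lives in the same token space, the different training objectives (text-to-video, video-to-audio, inpainting, and so on) reduce to arranging which tokens appear as context and which are predicted.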
HunyuanOCR
HunyuanOCR is part of Tencent Hunyuan, a large-scale, multimodal AI model family that spans text, image, video, and 3D modalities and is designed for general-purpose AI tasks such as content generation, visual reasoning, and business automation. The lineup includes variants optimized for natural language understanding, multimodal vision-language comprehension (e.g., image and video understanding), text-to-image creation, video generation, and 3D content generation. Hunyuan models leverage a mixture-of-experts architecture and other innovations (such as hybrid “mamba-transformer” designs) to deliver strong performance on reasoning, long-context understanding, cross-modal tasks, and efficient inference. For example, the vision-language model Hunyuan-Vision-1.5 supports “thinking-on-image”, enabling deep multimodal understanding and reasoning on images, video frames, diagrams, and spatial data.
Learn more
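The description mentions a mixture-of-experts architecture. The sketch below shows a generic top-k routed feed-forward layer, the basic mechanism behind MoE; it is not Hunyuan's implementation, and the expert count, top_k, and layer sizes are assumptions.

```python
# Generic top-k mixture-of-experts feed-forward layer (illustrative only; not
# Tencent Hunyuan's implementation -- expert count, top_k, and sizes are made up).
import torch
import torch.nn as nn

class TopKMoE(nn.Module):
    def __init__(self, d_model=256, d_ff=1024, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)        # scores each expert
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):                                  # x: (tokens, d_model)
        gate = self.router(x).softmax(dim=-1)              # routing probabilities
        weights, idx = gate.topk(self.top_k, dim=-1)       # keep the top-k experts
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for k in range(self.top_k):                        # dispatch tokens to experts
            for e, expert in enumerate(self.experts):
                sel = idx[:, k] == e
                if sel.any():
                    out[sel] += weights[sel, k:k + 1] * expert(x[sel])
        return out

tokens = torch.randn(16, 256)
print(TopKMoE()(tokens).shape)   # torch.Size([16, 256])
```

The appeal of this routing pattern is that only a few experts run per token, so total parameter count can grow much faster than per-token compute.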
WaveSpeedAI
WaveSpeedAI is a high-performance generative media platform built to dramatically accelerate image, video, and audio creation by combining cutting-edge multimodal models with an ultra-fast inference engine. It supports a wide array of creative workflows, from text-to-video and image-to-video to text-to-image, voice generation, and 3D asset creation, through a unified API designed for scale and speed. The platform integrates top-tier foundation models such as WAN 2.1/2.2, Seedream, FLUX, and HunyuanVideo, and provides streamlined access to a vast model library. Users benefit from fast generation times, real-time throughput, and enterprise-grade reliability while retaining high-quality output. WaveSpeedAI emphasizes “fast, vast, efficient” performance: fast generation of creative assets, access to a wide-ranging set of state-of-the-art models, and cost-efficient execution without sacrificing quality.
Learn more
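As a rough illustration of what a text-to-video request against a unified generation API could look like, here is a short sketch. The endpoint URL, environment variable, model identifier, payload fields, and response shape are all assumptions, not WaveSpeedAI's documented interface; consult the official docs for real usage.

```python
# Hypothetical text-to-video request against a unified generation API.
# The URL, payload fields, and response format are assumptions for illustration,
# not WaveSpeedAI's documented API.
import os
import requests

API_KEY = os.environ["WAVESPEED_API_KEY"]              # assumed env var name
ENDPOINT = "https://api.example.com/v1/text-to-video"  # placeholder URL

payload = {
    "model": "wan-2.2",                                 # assumed model identifier
    "prompt": "a slow pan across a foggy mountain lake at sunrise",
    "duration_seconds": 5,
    "resolution": "1280x720",
}

resp = requests.post(
    ENDPOINT,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())   # e.g. a job id or a URL to the rendered clip
```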
Seaweed
Seaweed is a foundation AI model for video generation developed by ByteDance. It uses a diffusion transformer architecture with approximately 7 billion parameters, trained with compute equivalent to 1,000 H100 GPUs. Seaweed learns world representations from vast multimodal data, including video, image, and text, enabling it to create videos of various resolutions, aspect ratios, and durations from text descriptions. It excels at generating lifelike human characters exhibiting diverse actions, gestures, and emotions, as well as a wide variety of landscapes with intricate detail and dynamic composition. Seaweed also offers enhanced controls: users can generate videos from images by providing an initial frame to guide consistent motion and style throughout the video, condition on both the first and last frames to create transition videos, and fine-tune the model to generate videos based on reference images.
Learn more
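The first-frame and first-plus-last-frame controls described above amount to conditioning the diffusion process on known frames. The sketch below illustrates one common, generic way to do this (inpainting-style conditioning, where the known frame latents are re-imposed at every denoising step); it is not Seaweed's actual sampler, and the toy denoiser and all shapes are assumptions.

```python
# Generic sketch of first/last-frame conditioning in a video diffusion sampler
# (inpainting-style: re-impose known frames at each step). Not Seaweed's code;
# the toy "denoise" function and every shape here are assumptions.
import torch

T, C, H, W = 16, 4, 32, 32          # frames x latent channels x height x width
steps = 50

def denoise(x, t):
    """Stand-in for a diffusion-transformer denoising step (here: a mild shrink)."""
    return x * (1.0 - 1.0 / steps)

first = torch.randn(C, H, W)         # encoded first frame (user-provided image)
last = torch.randn(C, H, W)          # encoded last frame (for a transition video)

known = torch.zeros(T, C, H, W)
mask = torch.zeros(T, 1, 1, 1)       # 1 where the frame is given, 0 elsewhere
known[0], mask[0] = first, 1.0
known[-1], mask[-1] = last, 1.0

x = torch.randn(T, C, H, W)          # start from pure noise
for step in reversed(range(steps)):
    x = denoise(x, step)             # one denoising update for all frames
    x = mask * known + (1 - mask) * x  # clamp the conditioned frames back in

print(x.shape)   # the final latents would be decoded into video frames
```

Dropping the last-frame condition recovers plain image-to-video generation, while supplying both endpoints yields the transition-video behavior the description mentions.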