Wan2.1
Wan2.1 is an open-source suite of advanced video foundation models designed to push the boundaries of video generation. It excels across a range of tasks, including Text-to-Video, Image-to-Video, Video Editing, and Text-to-Image, delivering state-of-the-art performance on multiple benchmarks. Wan2.1 runs on consumer-grade GPUs, making it accessible to a broad audience, and supports text generation in both Chinese and English. Its powerful video VAE (Variational Autoencoder) provides high efficiency and strong temporal information preservation, making it well suited to generating high-quality video content. Its applications span entertainment, marketing, and more.
Crun.ai
Crun is a unified AI API platform that provides access to top video, image, and audio AI models through a single integration. It allows developers to use over 100 leading AI models without managing multiple APIs. Crun supports advanced use cases such as text-to-video, image-to-video, text-to-image, and AI audio generation. The platform is designed for fast integration, low latency, and high performance. With transparent, pay-as-you-go pricing, Crun helps teams reduce AI infrastructure costs. Developer-friendly documentation and examples make onboarding quick and simple. Crun enables businesses to build powerful multimodal AI applications efficiently.
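A unified API of this kind typically exposes many models behind one request shape. As a minimal sketch only, the snippet below assembles a generation-job payload; the base URL, model id, task names, and payload fields are illustrative assumptions, not Crun's documented API.

```python
import json

# Hypothetical base URL -- a placeholder, not Crun's real endpoint.
API_BASE = "https://api.example-crun.ai/v1"

def build_generation_request(model: str, task: str, prompt: str, **options) -> dict:
    """Assemble a JSON payload for a generation job on a unified multi-model API.

    All field names here are assumptions for illustration.
    """
    payload = {
        "model": model,              # e.g. a text-to-video model id
        "task": task,                # "text-to-video", "image-to-video", ...
        "input": {"prompt": prompt},
    }
    payload["input"].update(options)  # extra options: duration, resolution, seed...
    return payload

# Example: a hypothetical text-to-video job.
request_body = build_generation_request(
    "wan-2.1-t2v", "text-to-video",
    "A red kite flying over a snowy mountain",
    duration_seconds=5,
)
print(json.dumps(request_body, indent=2))
```

The point of the single-payload shape is that swapping models or tasks changes only the `model` and `task` fields, not the integration code.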
HunyuanOCR
HunyuanOCR is an OCR-focused vision-language model within Tencent Hunyuan, a large-scale, multimodal AI model family spanning text, image, video, and 3D modalities and designed for general-purpose AI tasks like content generation, visual reasoning, and business automation. The lineup includes variants optimized for natural language understanding, multimodal vision-language comprehension (e.g., image and video understanding), text-to-image creation, video generation, and 3D content generation. Hunyuan models leverage a mixture-of-experts architecture and other innovations (such as hybrid "mamba-transformer" designs) to deliver strong performance on reasoning, long-context understanding, cross-modal tasks, and efficient inference. For example, the vision-language model Hunyuan-Vision-1.5 supports "thinking-on-image", enabling deep multimodal understanding and reasoning over images, video frames, diagrams, and spatial data.
Marengo
Marengo is a multimodal video foundation model that transforms video, audio, image, and text inputs into unified embeddings, enabling powerful “any-to-any” search, retrieval, classification, and analysis across vast video and multimedia libraries. It integrates visual frames (with spatial and temporal dynamics), audio (speech, ambient sound, music), and textual content (subtitles, overlays, metadata) to create a rich, multidimensional representation of each media item. With this embedding architecture, Marengo supports robust tasks such as search (text-to-video, image-to-video, video-to-audio, etc.), semantic content discovery, anomaly detection, hybrid search, clustering, and similarity-based recommendation. The latest versions introduce multi-vector embeddings, separating representations for appearance, motion, and audio/text features, which significantly improve precision and context awareness, especially for complex or long-form content.
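To make the multi-vector idea concrete, here is a minimal sketch (not Marengo's actual API) of scoring video clips that each carry separate "appearance", "motion", and "audio" embedding vectors: per-modality cosine similarities are combined with a weighted sum. All vector values, modality names, and weights below are made-up assumptions.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def multi_vector_score(query, item, weights):
    """Weighted sum of per-modality cosine similarities."""
    return sum(weights[m] * cosine(query[m], item[m]) for m in weights)

# Toy 2-D embeddings per modality -- purely illustrative.
query  = {"appearance": [1.0, 0.0], "motion": [0.0, 1.0], "audio": [1.0, 1.0]}
clip_a = {"appearance": [0.9, 0.1], "motion": [0.1, 0.9], "audio": [1.0, 0.8]}
clip_b = {"appearance": [0.1, 0.9], "motion": [0.9, 0.1], "audio": [-1.0, 0.2]}
weights = {"appearance": 0.4, "motion": 0.4, "audio": 0.2}

score_a = multi_vector_score(query, clip_a, weights)
score_b = multi_vector_score(query, clip_b, weights)
# clip_a aligns with the query in every modality, so it ranks higher.
```

Keeping the modalities as separate vectors, rather than one fused embedding, is what lets a search system reweight appearance vs. motion vs. audio per query.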