A Customizable Image-to-Video Model based on HunyuanVideo
LTX-Video Support for ComfyUI
PyTorch code and models for the DINOv2 self-supervised learning method
Official implementation of DreamCraft3D
Block Diffusion for Ultra-Fast Speculative Decoding
Open-source framework for intelligent speech interaction
Personalize Any Characters with a Scalable Diffusion Transformer
CogView4, CogView3-Plus and CogView3 (ECCV 2024)
HY-Motion model for 3D character animation generation
High-Fidelity and Controllable Generation of Textured 3D Assets
Collection of Gemma 3 variants that are trained for performance
An Efficient Agentic Model for Computer Use
Unified Multimodal Understanding and Generation Models
Generating Immersive, Explorable, and Interactive 3D Worlds
GLM-4.6V/4.5V/4.1V-Thinking, towards versatile multimodal reasoning
Open-source industrial-grade ASR models
Official implementation of Watermark Anything with Localized Messages
Video understanding codebase from FAIR for reproducing video models
Multimodal-Driven Architecture for Customized Video Generation
4M: Massively Multimodal Masked Modeling
ICLR 2024 Spotlight: curation/training code, metadata, distribution
A trainable PyTorch reproduction of AlphaFold 3
Controllable & emotion-expressive zero-shot TTS
VGGSfM: Visual Geometry Grounded Deep Structure From Motion
State-of-the-art Image & Video CLIP, Multimodal Large Language Models