Reference PyTorch implementation and models for DINOv3
Wan2.2: Open and Advanced Large-Scale Video Generative Model
A Unified Framework for Text-to-3D and Image-to-3D Generation
A Systematic Framework for Interactive World Modeling
RGBD video generation model conditioned on camera input
Inference framework for 1-bit LLMs
Python inference and LoRA trainer package for the LTX-2 audio–video model
Official repository for LTX-Video
A Powerful Native Multimodal Model for Image Generation
Generating Immersive, Explorable, and Interactive 3D Worlds
Multimodal-Driven Architecture for Customized Video Generation
High-resolution models for human tasks
LTX-Video Support for ComfyUI
FAIR Sequence Modeling Toolkit 2
ChatGPT interface with better UI
CogView4, CogView3-Plus and CogView3 (ECCV 2024)
Video understanding codebase from FAIR for reproducing video models
Unified Multimodal Understanding and Generation Models
GLM-4.6V/4.5V/4.1V-Thinking, towards versatile multimodal reasoning
Personalize Any Characters with a Scalable Diffusion Transformer
4M: Massively Multimodal Masked Modeling
ICLR 2024 Spotlight: curation/training code, metadata, distribution
PyTorch code and models for the DINOv2 self-supervised learning method
Official implementation of DreamCraft3D
A Customizable Image-to-Video Model based on HunyuanVideo