Reference PyTorch implementation and models for DINOv3
A Unified Framework for Text-to-3D and Image-to-3D Generation
Wan2.2: Open and Advanced Large-Scale Video Generative Model
A Systematic Framework for Interactive World Modeling
RGBD video generation model conditioned on camera input
Inference framework for 1-bit LLMs
Python inference and LoRA trainer package for the LTX-2 audio–video model
Real-time behaviour synthesis with MuJoCo, using Predictive Control
Official repository for LTX-Video
A Powerful Native Multimodal Model for Image Generation
Generating Immersive, Explorable, and Interactive 3D Worlds
Multimodal-Driven Architecture for Customized Video Generation
High-resolution models for human tasks
LTX-Video Support for ComfyUI
FAIR Sequence Modeling Toolkit 2
Code for Mesh R-CNN, ICCV 2019
CogView4, CogView3-Plus, and CogView3 (ECCV 2024)
Video understanding codebase from FAIR for reproducing video models
Unified Multimodal Understanding and Generation Models
GLM-4.6V/4.5V/4.1V-Thinking, towards versatile multimodal reasoning
Official implementation of Watermark Anything with Localized Messages
Personalize Any Character with a Scalable Diffusion Transformer
4M: Massively Multimodal Masked Modeling
ICLR 2024 Spotlight: curation/training code, metadata, and distribution
PyTorch code and models for the DINOv2 self-supervised learning method