Reference PyTorch implementation and models for DINOv3
A Systematic Framework for Interactive World Modeling
A Unified Framework for Text-to-3D and Image-to-3D Generation
RGBD video generation model conditioned on camera input
Python inference and LoRA trainer package for the LTX-2 audio–video model
A Powerful Native Multimodal Model for Image Generation
LTX-Video Support for ComfyUI
Video understanding codebase from FAIR for reproducing video models
High-resolution models for human tasks
Official repository for LTX-Video
Open-Source Financial Large Language Models
Continuous Autonomy for the AI SDK
Controllable & emotion-expressive zero-shot TTS
Unified Multimodal Understanding and Generation Models
PyTorch code and models for the DINOv2 self-supervised learning method
Official implementation of DreamCraft3D
GLM-4.6V/4.5V/4.1V-Thinking, towards versatile multimodal reasoning
Block Diffusion for Ultra-Fast Speculative Decoding
Multimodal-Driven Architecture for Customized Video Generation
Large language model & vision-language model based on Linear Attention
4M: Massively Multimodal Masked Modeling
ICLR 2024 Spotlight: curation/training code, metadata, and distribution
A Customizable Image-to-Video Model based on HunyuanVideo
GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
Repo for external large-scale work