Ling is an MoE LLM open-sourced by InclusionAI
A Unified Framework for Text-to-3D and Image-to-3D Generation
Multimodal-Driven Architecture for Customized Video Generation
Personalize Any Characters with a Scalable Diffusion Transformer
Qwen2.5-VL is a multimodal large language model series
High-Fidelity and Controllable Generation of Textured 3D Assets
Official code for Style Aligned Image Generation via Shared Attention
4M: Massively Multimodal Masked Modeling
FAIR Sequence Modeling Toolkit 2
ICLR2024 Spotlight: curation/training code, metadata, distribution
A Production-ready Reinforcement Learning AI Agent Library
A PyTorch library for implementing flow matching algorithms
Official DeiT repository
Memory-efficient and performant finetuning of Mistral's models
Diffusion Transformer with Fine-Grained Chinese Understanding
A GPT-4o Level MLLM for Vision, Speech and Multimodal Live Streaming
GLM-4 series: Open Multilingual Multimodal Chat LMs
MedicalGPT: Training Your Own Medical GPT Model with a ChatGPT-style Training Pipeline
Reproduction of Poetiq's record-breaking submission to the ARC-AGI-1 benchmark
Pokee Deep Research Model Open Source Repo
DeepMind model for tracking arbitrary points across videos, with robotics applications
Global weather forecasting model using graph neural networks and JAX
Tooling for the Common Objects In 3D dataset
State-of-the-art Image & Video CLIP, Multimodal Large Language Models
GPT-4V-level open-source multimodal model based on Llama3-8B