Trainable models and NN optimization tools
Train machine learning models within Docker containers
High-level training, data augmentation, and utilities for PyTorch
A Next-Generation Training Engine Built for Ultra-Large MoE Models
AI agents running research on single-GPU nanochat training
A lightweight library for PyTorch training tools and utilities
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
slime is an LLM post-training framework for RL scaling
Supercharge Your Model Training
MedicalGPT: Training Your Own Medical GPT Model with a ChatGPT-Style Training Pipeline
State-of-the-art 2D and 3D Face Analysis Project
Reference PyTorch implementation and models for DINOv3
AI agents autonomously run and improve ML experiments overnight
Training data (data labeling, annotation, workflow) for all data types
An open-source, modern-design AI training tracking and visualization tool
An open source implementation of CLIP
A simple, performant and scalable JAX LLM
Unified web UI for training and running open models locally
Train a 26M-parameter GPT from scratch in just 2h
Powerful AI language model (MoE) optimized for efficiency/performance
Faster and easier training and deployment
Training Large Language Model to Reason in a Continuous Latent Space
Deep learning optimization library making distributed training easy
The official repository for ERNIE 4.5 and ERNIEKit
Training PyTorch models with differential privacy
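The last entry refers to differentially private training (as in Opacus). The core DP-SGD step it implements is per-sample gradient clipping followed by Gaussian noise before averaging; a minimal plain-Python sketch of that step (illustrative only, not the Opacus API, with a hypothetical `dp_sgd_step` helper) is:

```python
import math
import random

def dp_sgd_step(per_sample_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD aggregation step (hypothetical helper, not Opacus's API):
    clip each per-sample gradient to L2 norm <= clip_norm, sum the clipped
    gradients, add Gaussian noise with std = noise_multiplier * clip_norm,
    then average over the batch."""
    rng = rng or random.Random(0)
    n = len(per_sample_grads)
    dim = len(per_sample_grads[0])
    summed = [0.0] * dim
    for g in per_sample_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / (norm + 1e-12))  # clip to clip_norm
        for i, x in enumerate(g):
            summed[i] += x * scale
    sigma = noise_multiplier * clip_norm  # noise calibrated to the clip bound
    return [(s + rng.gauss(0.0, sigma)) / n for s in summed]
```

With `noise_multiplier=0` the step reduces to averaging clipped gradients, which makes the clipping easy to verify in isolation; the privacy guarantee itself comes from the noise and an accountant tracking the cumulative privacy budget.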