State-of-the-art diffusion models for image and audio generation
Bring the notion of Model-as-a-Service to life
Multilingual Automatic Speech Recognition with word-level timestamps
Open platform for training, serving, and evaluating language models
PyTorch extensions for fast R&D prototyping and Kaggle farming
The Triton Inference Server provides an optimized cloud and edge inferencing solution
Open-source tool designed to enhance the efficiency of workloads
MII makes low-latency and high-throughput inference possible
GPU environment management and cluster orchestration
Phi-3.5 for Mac: Locally-run Vision and Language Models
A Unified Library for Parameter-Efficient Learning
Scripts for fine-tuning Meta Llama3 with composable FSDP & PEFT methods
Replace OpenAI GPT with another LLM in your app
An MLOps framework to package, deploy, monitor and manage models
Deep learning optimization library: makes distributed training easy
PyTorch library of curated Transformer models and their components
High quality, fast, modular reference implementation of SSD in PyTorch
A toolkit to optimize ML models for deployment for Keras & TensorFlow
Unified Model Serving Framework
Low-latency REST API for serving text-embeddings
A library for accelerating Transformer models on NVIDIA GPUs
Standardized Serverless ML Inference Platform on Kubernetes
Trainable, memory-efficient, and GPU-friendly PyTorch reproduction
LLM training code for MosaicML foundation models
A lightweight vision library for performing large-scale object detection