AIMET is a library that provides advanced quantization and compression techniques for trained neural network models
Phi-3.5 for Mac: Locally-run Vision and Language Models
A Unified Library for Parameter-Efficient Learning
Scripts for fine-tuning Meta Llama3 with composable FSDP & PEFT methods
MII makes low-latency and high-throughput inference possible
Database system for building simpler and faster AI-powered applications
PyTorch library of curated Transformer models and their components
Fast inference engine for Transformer models
Replace OpenAI GPT with another LLM in your app
State-of-the-art diffusion models for image and audio generation
An MLOps framework to package, deploy, monitor and manage models
A toolkit for Keras & TensorFlow to optimize ML models for deployment
High-quality, fast, modular reference implementation of SSD in PyTorch
Unified Model Serving Framework
Low-latency REST API for serving text embeddings
A library for accelerating Transformer models on NVIDIA GPUs
Standardized Serverless ML Inference Platform on Kubernetes
Deep learning optimization library: makes distributed training easy
LLM training code for MosaicML foundation models
A lightweight vision library for performing large-scale object detection
Create HTML profiling reports from pandas DataFrame objects
Library for serving Transformers models on Amazon SageMaker
Multi-Modal Neural Networks for Semantic Search, based on Mid-Fusion
Tensor search for humans
Powering Amazon's custom machine learning chips