Large Language Model Text Generation Inference
State-of-the-art Parameter-Efficient Fine-Tuning
Low-latency REST API for serving text embeddings
Run local LLMs on any device; open source
Trainable, memory-efficient, and GPU-friendly PyTorch reproduction
A high-performance ML model serving framework that offers dynamic batching
MII makes low-latency and high-throughput inference possible
Unified Model Serving Framework
PyTorch domain library for recommendation systems
A library for accelerating Transformer models on NVIDIA GPUs
Deep learning optimization library that makes distributed training easy
High quality, fast, modular reference implementation of SSD in PyTorch
A computer vision framework to create and deploy apps in minutes