Unified Model Serving Framework
A unified framework for scalable computing
Easiest and laziest way to build multi-agent LLM applications
GPU environment management and cluster orchestration
FlashInfer: Kernel Library for LLM Serving
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs (per-request adapter routing is sketched after this list)
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models
Python Package for ML-Based Heterogeneous Treatment Effects Estimation
Integrate, train and manage any AI models and APIs with your database
State-of-the-art diffusion models for image and audio generation
Neural Network Compression Framework for enhanced OpenVINO inference
Large Language Model Text Generation Inference
PyTorch library of curated Transformer models and their components
MII makes low-latency and high-throughput inference possible, powered by DeepSpeed (a minimal pipeline sketch follows this list)
A library for accelerating Transformer models on NVIDIA GPUs
LLM training code for MosaicML foundation models
Superduper: Integrate AI models and machine learning workflows
A high-performance ML model serving framework that offers dynamic batching and CPU/GPU pipelines to fully exploit your compute machine (the batching loop is sketched after this list)
Framework dedicated to making neural data processing pipelines simple and fast
Data manipulation and transformation for audio signal processing, powered by PyTorch
Replace OpenAI GPT with another LLM in your app by changing a single line of code (sketched after this list)
Sparsity-aware deep learning inference runtime for CPUs
The Triton Inference Server provides an optimized cloud and edge inferencing solution
Standardized Serverless ML Inference Platform on Kubernetes
Powering Amazon custom machine learning chips
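
The multi-LoRA entry above scales by keeping one set of base-model weights resident and swapping small adapter weights per request. Here is a minimal sketch of what a client call typically looks like, assuming a LoRAX-style HTTP server at `http://localhost:8080` whose `/generate` route accepts an `adapter_id` parameter; the URL, port, field names, and adapter IDs below are placeholders, not taken from this list:

```python
# Hypothetical sketch: route two requests to two different fine-tuned
# adapters served by one multi-LoRA inference server (endpoint address
# and payload shape are assumptions, modeled on LoRAX-style APIs).
import requests

BASE_URL = "http://localhost:8080"  # assumed local server address

def generate(prompt: str, adapter_id: str) -> str:
    resp = requests.post(
        f"{BASE_URL}/generate",
        json={
            "inputs": prompt,
            "parameters": {"adapter_id": adapter_id, "max_new_tokens": 64},
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["generated_text"]

# The same base model serves both calls; only the small LoRA adapter is
# swapped per request, which is what lets one server host thousands of
# fine-tuned variants.
print(generate("Summarize this ticket:", adapter_id="support-summarizer"))
print(generate("Translate to German:", adapter_id="translation-de"))
```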
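For the DeepSpeed-MII entry, a minimal sketch assuming the `mii.pipeline` entry point from recent MII releases; the model identifier is an arbitrary placeholder, and the response attribute names follow recent MII documentation (treat this as a sketch, not a pinned API):

```python
# Minimal sketch of low-latency local inference with DeepSpeed-MII's
# non-persistent pipeline API; the model identifier is a placeholder.
import mii

# Loads the model with DeepSpeed's optimized inference kernels.
pipe = mii.pipeline("mistralai/Mistral-7B-v0.1")

# Batched text generation; returns one response per prompt.
responses = pipe(["DeepSpeed is"], max_new_tokens=64)
print(responses[0].generated_text)
```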
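Several entries above advertise dynamic batching. The technique is framework-agnostic: buffer incoming requests briefly, then run one batched forward pass when the buffer fills or a short deadline expires, trading a few milliseconds of latency for much higher throughput. A self-contained sketch of that loop in plain Python (not any particular framework's API):

```python
# Self-contained illustration of dynamic batching: buffer incoming
# requests and flush when the batch fills up or a deadline passes.
# This mirrors the technique itself, not any specific framework.
import queue
import threading
import time

MAX_BATCH = 8       # flush as soon as this many requests are queued
MAX_WAIT_S = 0.01   # ...or after 10 ms, whichever comes first

requests_q: "queue.Queue[str]" = queue.Queue()

def model_forward(batch: list[str]) -> list[str]:
    # Stand-in for one batched model call: the whole point is one
    # kernel launch for many requests instead of one per request.
    return [f"echo:{item}" for item in batch]

def batching_loop() -> None:
    while True:
        batch = [requests_q.get()]            # block for the first item
        deadline = time.monotonic() + MAX_WAIT_S
        while len(batch) < MAX_BATCH:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                batch.append(requests_q.get(timeout=remaining))
            except queue.Empty:
                break
        for result in model_forward(batch):
            print(result)

threading.Thread(target=batching_loop, daemon=True).start()
for i in range(20):
    requests_q.put(f"req-{i}")
time.sleep(0.1)  # let the loop drain the queue before exiting
```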
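Finally, the "replace OpenAI GPT by changing a single line of code" entry refers to OpenAI-compatible serving: the official OpenAI client is pointed at a local endpoint instead of api.openai.com. A sketch assuming a local OpenAI-compatible server; the base URL, port, and model name are placeholders:

```python
# The "single line" in question is base_url: point the official OpenAI
# client at a local OpenAI-compatible server instead of api.openai.com.
# The URL, port, and model name below are assumed placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:9997/v1",   # the one changed line
    api_key="not-used-by-local-server",    # local servers often ignore this
)
chat = client.chat.completions.create(
    model="llama-3-8b-instruct",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(chat.choices[0].message.content)
```

Because the wire protocol is unchanged, the rest of the application code (streaming, retries, tooling built on the OpenAI SDK) keeps working as-is.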