The Triton Inference Server provides an optimized cloud and edge inferencing solution (a minimal request sketch follows this list)
Easiest and laziest way to build multi-agent LLM applications
Visual Instruction Tuning: Large Language-and-Vision Assistant
Easy-to-use speech toolkit including self-supervised learning models
Standardized Serverless ML Inference Platform on Kubernetes
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
Operating LLMs in production
Serve machine learning models within a Docker container
OpenAI-style API for open large language models (see the client sketch after this list)
Large Language Model Text Generation Inference
Open-source tool designed to improve the efficiency of inference workloads
Low-latency REST API for serving text embeddings
Library for serving Transformers models on Amazon SageMaker
Toolkit for inference and serving with MXNet in SageMaker
CPU/GPU inference server for Hugging Face transformer models
Deploy an ML inference service on a budget in 10 lines of code
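
For orientation, here is a minimal sketch of a request against a Triton-style server (the first entry above). The endpoint URL, model name, tensor name, and data are hypothetical; the payload layout follows the KServe v2 inference protocol that Triton's HTTP API implements.

```python
import requests

# Hypothetical endpoint and model name; Triton's HTTP/REST API follows the
# KServe v2 inference protocol (POST /v2/models/<model_name>/infer).
URL = "http://localhost:8000/v2/models/my_model/infer"

payload = {
    "inputs": [
        {
            "name": "INPUT0",        # must match the model's input tensor name
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [[0.1, 0.2, 0.3, 0.4]],
        }
    ]
}

resp = requests.post(URL, json=payload, timeout=10)
resp.raise_for_status()
# The response carries a list of output tensors, each with name/shape/data.
print(resp.json()["outputs"])
```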
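
Similarly, servers that expose an OpenAI-style API (such as the entry above) can usually be queried with the standard `openai` Python client pointed at their endpoint. The base URL, model name, and placeholder API key below are assumptions for illustration, not the defaults of any particular server.

```python
from openai import OpenAI

# Hypothetical base URL and model name; many self-hosted servers accept the
# stock OpenAI client once base_url points at their /v1 endpoint. The API key
# is a placeholder, since local servers often ignore it.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="my-local-model",
    messages=[{"role": "user", "content": "Summarize LoRA in one sentence."}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```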