OpenAI-style API for open large language models (see the sketch after this list)
Run local LLMs on any device; open-source
A high-throughput and memory-efficient inference and serving engine
Ready-to-use OCR with 80+ supported languages
Everything you need to build state-of-the-art foundation models
Bring the notion of Model-as-a-Service to life
Official inference library for Mistral models
State-of-the-art diffusion models for image and audio generation
Unified Model Serving Framework
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
The official Python client for the Hugging Face Hub
Replace OpenAI GPT with another LLM in your app
The easiest and laziest way to build multi-agent LLM applications
Low-latency REST API for serving text embeddings
A Pythonic framework to simplify AI service building
Training and deploying machine learning models on Amazon SageMaker
Simplifies the local serving of AI models from any source
Operating LLMs in production
Data manipulation and transformation for audio signal processing
The Triton Inference Server provides an optimized cloud and edge inferencing solution
FlashInfer: Kernel Library for LLM Serving
Trainable, memory-efficient, and GPU-friendly PyTorch reproduction of AlphaFold 2
Uncover insights, surface problems, monitor, and fine-tune your LLM
A set of Docker images for training and serving models in TensorFlow
A library for accelerating Transformer models on NVIDIA GPUs
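Several of the entries above describe the same integration pattern: a serving engine that exposes an OpenAI-compatible REST API so existing client code works unchanged. A minimal sketch using the official `openai` Python client, assuming a local OpenAI-compatible server at http://localhost:8000/v1 and a placeholder model name (both the endpoint and the model name are assumptions, not taken from the list):

```python
# Minimal sketch: calling a local OpenAI-compatible server with the
# official `openai` client. base_url, api_key, and model are placeholders;
# substitute whatever your serving engine actually reports.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local endpoint
    api_key="not-needed",                 # many local servers ignore the key
)

response = client.chat.completions.create(
    model="my-local-model",               # hypothetical model name
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```

Because the wire format matches OpenAI's, the same snippet typically works against any of the OpenAI-compatible servers listed above, which is what taglines like "Replace OpenAI GPT with another LLM in your app" refer to.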