Training and deploying machine learning models on Amazon SageMaker
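
A minimal sketch of that train-then-deploy loop using the sagemaker Python SDK; the role ARN, entry-point script, S3 path, and instance types below are placeholders for your own account setup, not values from this listing.

```python
from sagemaker.pytorch import PyTorch

# Role ARN, training script, and S3 data path are account-specific placeholders.
estimator = PyTorch(
    entry_point="train.py",
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    instance_type="ml.g5.xlarge",
    instance_count=1,
    framework_version="2.1",
    py_version="py310",
)

# Launch a managed training job, then deploy the result as a real-time endpoint.
estimator.fit({"training": "s3://my-bucket/train-data"})
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
```
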
Run Local LLMs on Any Device. Open-source and available for commercial use
Ready-to-use OCR with 80+ supported languages
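
A minimal sketch of what "ready-to-use" means here, via the easyocr package; the image path is a placeholder, and the language list can hold any of the supported codes.

```python
import easyocr

# Build a reader for English; pass several language codes to mix scripts.
reader = easyocr.Reader(["en"])

# readtext returns a list of (bounding_box, text, confidence) triples.
for bbox, text, confidence in reader.readtext("sign.jpg"):
    print(f"{text!r} (confidence {confidence:.2f})")
```
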
A high-throughput and memory-efficient inference and serving engine for LLMs
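
For a sense of the offline batching API, a minimal sketch; the model name is chosen purely for illustration.

```python
from vllm import LLM, SamplingParams

# Load a model once; vLLM manages KV-cache memory for high throughput.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=64)

# Prompts are batched together and scheduled efficiently.
outputs = llm.generate(["The capital of France is"], params)
for output in outputs:
    print(output.outputs[0].text)
```
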
Everything you need to build state-of-the-art foundation models
Standardized Serverless ML Inference Platform on Kubernetes
Optimizing inference proxy for LLMs
The Triton Inference Server provides an optimized cloud and edge inferencing solution
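
A minimal client-side sketch against a running server; the model name and tensor names are placeholders for whatever the deployed model's configuration actually declares.

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton server on its default HTTP port.
client = httpclient.InferenceServerClient(url="localhost:8000")

# "INPUT0"/"OUTPUT0"/"my_model" are illustrative; match your model config.
inp = httpclient.InferInput("INPUT0", [1, 16], "FP32")
inp.set_data_from_numpy(np.random.rand(1, 16).astype(np.float32))

result = client.infer(model_name="my_model", inputs=[inp])
print(result.as_numpy("OUTPUT0"))
```
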
Library for OCR-related tasks powered by Deep Learning
A set of Docker images for training and serving models in TensorFlow
A deep learning optimization library that makes distributed training easy
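
A minimal sketch of how a training loop changes under DeepSpeed, assuming a ds_config.json holding your ZeRO and precision settings; the toy model is illustrative.

```python
import torch
import deepspeed

# Any torch.nn.Module works; this linear layer is a stand-in.
model = torch.nn.Linear(1024, 1024)

# deepspeed.initialize wraps model and optimizer per the JSON config
# (ZeRO sharding, mixed precision, gradient accumulation, ...).
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config="ds_config.json",
)

x = torch.randn(8, 1024).to(model_engine.device)
loss = model_engine(x).pow(2).mean()
model_engine.backward(loss)   # replaces loss.backward()
model_engine.step()           # replaces optimizer.step()
```
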
Bring the notion of Model-as-a-Service to life
Single-cell analysis in Python
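
A minimal sketch of a standard preprocessing-to-clustering workflow on a bundled demo dataset; Leiden clustering assumes the optional leidenalg dependency is installed.

```python
import scanpy as sc

# Load a small bundled dataset of ~3k peripheral blood cells.
adata = sc.datasets.pbmc3k()

# Normalize, select variable genes, embed, and cluster.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
sc.tl.pca(adata)
sc.pp.neighbors(adata)
sc.tl.umap(adata)
sc.tl.leiden(adata)
sc.pl.umap(adata, color="leiden")
```
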
A Pythonic framework to simplify AI service building
Operating LLMs in production
FlashInfer: Kernel Library for LLM Serving
Sparsity-aware deep learning inference runtime for CPUs
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
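
A minimal sketch of its high-level pipeline API; the model name is illustrative, and the pipeline downloads and serves the weights locally.

```python
from lmdeploy import pipeline

# Model identifier is a placeholder; any supported chat model works.
pipe = pipeline("internlm/internlm2_5-7b-chat")

# Prompts go in as a batch; each response carries the generated text.
responses = pipe(["Please introduce yourself in one sentence."])
print(responses[0].text)
```
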
Official inference library for Mistral models
Data manipulation and transformation for audio signal processing
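
A minimal sketch of the load/transform pattern with torchaudio; the file path is a placeholder.

```python
import torchaudio

# Load a waveform and its native sample rate.
waveform, sample_rate = torchaudio.load("speech.wav")

# Resample to 16 kHz, then compute a mel spectrogram.
resample = torchaudio.transforms.Resample(orig_freq=sample_rate, new_freq=16000)
mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000)
spectrogram = mel(resample(waveform))
print(spectrogram.shape)  # (channels, n_mels, time)
```
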
Replace OpenAI GPT with another LLM in your app
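
The "single line" in question is usually the client's base URL; a sketch using the stock openai client, where the local endpoint, api_key, and model name are assumptions about your deployment.

```python
from openai import OpenAI

# Point the unmodified OpenAI client at a local OpenAI-compatible server.
# URL, key, and model name below are deployment-specific placeholders.
client = OpenAI(base_url="http://localhost:9997/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="my-local-llm",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```
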
Uncover insights, surface problems, monitor, and fine-tune your LLM
Trainable, memory-efficient, and GPU-friendly PyTorch reproduction of AlphaFold 2
OpenMMLab Model Deployment Framework
DoWhy is a Python library for causal inference
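
A minimal sketch of DoWhy's model/identify/estimate workflow on a simulated dataset with a known effect, so the estimate can be checked against ground truth.

```python
import dowhy.datasets
from dowhy import CausalModel

# Simulate data with a known treatment effect (beta=10).
data = dowhy.datasets.linear_dataset(
    beta=10, num_common_causes=5, num_samples=5000, treatment_is_binary=True
)

# Model the causal graph, identify the estimand, then estimate the effect.
model = CausalModel(
    data=data["df"],
    treatment=data["treatment_name"],
    outcome=data["outcome_name"],
    graph=data["gml_graph"],
)
estimand = model.identify_effect()
estimate = model.estimate_effect(
    estimand, method_name="backdoor.propensity_score_matching"
)
print(estimate.value)  # should land close to 10
```
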