Library for serving Transformers models on Amazon SageMaker
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models
Library for OCR-related tasks powered by Deep Learning
Unified Model Serving Framework
Everything you need to build state-of-the-art foundation models
An MLOps framework to package, deploy, monitor and manage models
Official inference library for Mistral models
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
Standardized Serverless ML Inference Platform on Kubernetes
Neural Network Compression Framework for enhanced OpenVINO inference
Powering Amazon's custom machine learning chips
Superduper: Integrate AI models and machine learning workflows with your database
Open-source tool designed to enhance the efficiency of workloads
A unified framework for scalable computing
A set of Docker images for training and serving models in TensorFlow
Build your chatbot within minutes on your favorite device
Adversarial Robustness Toolbox (ART) - Python Library for ML security
Simplifies the local serving of AI models from any source
The unofficial Python package that returns responses from Google Bard
OpenMMLab Model Deployment Framework
Framework dedicated to making neural data processing pipelines simple and fast
A computer vision framework to create and deploy apps in minutes
LLMFlows - Simple, Explicit and Transparent LLM Apps
Framework for Accelerating LLM Generation with Multiple Decoding Heads
Toolkit for inference and serving with MXNet on Amazon SageMaker