High-performance ML model-serving framework with dynamic batching
Framework dedicated to neural data processing
Phi-3.5 for Mac: Locally-run Vision and Language Models
Serve machine learning models within a Docker container
Fast inference engine for Transformer models
Standardized Serverless ML Inference Platform on Kubernetes
Framework for Accelerating LLM Generation with Multiple Decoding Heads
A computer vision framework to create and deploy apps in minutes
Toolbox of models, callbacks, and datasets for AI/ML researchers
Implementation of model parallel autoregressive transformers on GPUs
Sequence-to-sequence framework, focused on Neural Machine Translation
Guide to deploying deep-learning inference networks
Toolkit for inference and serving with MXNet in SageMaker
CPU/GPU inference server for Hugging Face transformer models
Deploy an ML inference service on a budget in 10 lines of code