Fast inference engine for Transformer models
A GPU-accelerated library containing highly optimized building blocks
MII makes low-latency and high-throughput inference possible
Replace OpenAI GPT with another LLM in your app
Serve machine learning models within a Docker container
Implementation of model-parallel autoregressive transformers on GPUs
Toolkit for inference and serving with MXNet in SageMaker