Scripts for fine-tuning Meta Llama 3 with composable FSDP & PEFT methods
Fast inference engine for Transformer models
MII makes low-latency and high-throughput inference possible
A GPU-accelerated library containing highly optimized building blocks
Replace OpenAI GPT with another LLM in your app
Serve machine learning models within a Docker container
Lightweight anchor-free object detection model
Implementation of model parallel autoregressive transformers on GPUs
Toolkit for inference and serving with MXNet in SageMaker