A lightweight vision library for performing large-scale object detection
Scripts for fine-tuning Meta Llama3 with composable FSDP & PEFT methods
A GPU-accelerated library containing highly optimized building blocks
MII makes low-latency and high-throughput inference possible (see the sketch after this list)
Fast inference engine for Transformer models
Replace OpenAI GPT with another LLM in your app
Serve machine learning models within a Docker container
Implementation of model-parallel autoregressive transformers on GPUs
Lightweight anchor-free object detection model
Toolkit enabling inference and serving with MXNet in SageMaker
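For the MII entry above, a minimal sketch of what a low-latency text-generation deployment typically looks like with DeepSpeed-MII's pipeline-style API; the model name, prompts, and generation settings are illustrative placeholders, and the exact API surface can vary between MII releases.

```python
# Minimal sketch of serving a text-generation model with DeepSpeed-MII.
# Assumes `deepspeed-mii` is installed and a GPU is available; the model
# name and generation parameters below are illustrative, not prescriptive.
import mii

# Load the model into an optimized inference pipeline.
pipe = mii.pipeline("mistralai/Mistral-7B-v0.1")

# Run batched generation; MII handles batching and GPU placement.
response = pipe(["DeepSpeed is", "Seattle is"], max_new_tokens=64)
print(response)
```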