The Triton Inference Server provides an optimized cloud and edge inferencing solution
Library for serving Transformers models on Amazon SageMaker
Deep learning optimization library that makes distributed training easy
MII makes low-latency and high-throughput inference possible
A set of Docker images for training and serving models in TensorFlow
Powering Amazon's custom machine learning chips
Library for OCR-related tasks powered by deep learning
Trainable models and neural network optimization tools
Audiocraft is a library for audio processing and generation
Probabilistic reasoning and statistical analysis in TensorFlow
Pre-trained deep learning models and demos
A unified framework for scalable computing
ImageBind: One Embedding Space to Bind Them All
OpenMMLab Model Deployment Framework
A computer vision framework to create and deploy apps in minutes
Implementation of model parallel autoregressive transformers on GPUs
Toolkit for allowing inference and serving with MXNet in SageMaker
Accelerated deep learning R&D
Deep learning PyTorch library for time series forecasting
A model library for exploring state-of-the-art deep learning
Toolkit for running MXNet training scripts on SageMaker
We estimate dense, flicker-free, geometrically consistent depth from monocular video
Tools to help users inter-operate among deep learning frameworks
A natural language modeling framework based on PyTorch
A collaboratively written review paper on deep learning, genomics, and precision medicine