A set of Docker images for training and serving models in TensorFlow
Training and deploying machine learning models on Amazon SageMaker
A unified framework for scalable computing
PArallel Distributed Deep LEarning: Machine Learning Framework
Probabilistic reasoning and statistical analysis in TensorFlow
A GPU-accelerated library containing highly optimized building blocks
The Triton Inference Server provides an optimized cloud and edge inferencing solution
Trainable models and NN optimization tools
Sparsity-aware deep learning inference runtime for CPUs
Powering Amazon's custom machine learning chips
Library for OCR-related tasks powered by Deep Learning
Easy-to-use deep learning framework with 3 key features
C++ library for high-performance inference on NVIDIA GPUs
OpenMMLab Model Deployment Framework
Open standard for machine learning interoperability
Deep Learning API and Server in C++14 with support for Caffe and PyTorch
OpenVINO™ Toolkit repository
Libraries for applying sparsification recipes to neural networks
ONNX Runtime: cross-platform, high-performance ML inferencing
A library for accelerating Transformer models on NVIDIA GPUs
Deep learning optimization library: makes distributed training easy
MII makes low-latency and high-throughput inference possible
Library for serving Transformers models on Amazon SageMaker
Bolt is a high-performance deep learning library
MNN is a blazing fast, lightweight deep learning framework