Probabilistic reasoning and statistical analysis in TensorFlow
Serving system for machine learning models
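A model-serving system of this kind ultimately exposes a predict endpoint over HTTP. The stdlib-only sketch below shows that request/response shape; the route name, the `{"instances": …}` payload layout, and the toy linear "model" are illustrative assumptions, not any particular server's API.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy "model": a fixed linear function standing in for a real trained model.
WEIGHTS = [2.0, -1.0]

def predict(features):
    return sum(w * x for w, x in zip(WEIGHTS, features))

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Expect JSON like {"instances": [[1.0, 2.0], ...]}
        body = self.rfile.read(int(self.headers["Content-Length"]))
        instances = json.loads(body)["instances"]
        out = json.dumps({"predictions": [predict(row) for row in instances]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(out)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), PredictHandler)  # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/v1/predict",
    data=json.dumps({"instances": [[1.0, 2.0], [3.0, 0.5]]}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())
server.shutdown()
```

Real serving stacks add batching, versioning, and GPU scheduling on top, but the contract is the same: serialized inputs in, predictions out.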
A toolkit to optimize Keras & TensorFlow ML models for deployment
Gaussian processes in TensorFlow
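Gaussian-process regression of the kind such a library implements can be sketched in plain NumPy from the standard posterior equations; the RBF kernel, lengthscale, noise level, and toy sine data below are illustrative assumptions, not the library's API.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel: k(a, b) = s^2 * exp(-(a-b)^2 / (2 l^2))
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    # Standard GP regression: posterior mean K_*^T K^-1 y
    # and covariance K_** - K_*^T K^-1 K_*
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    Kss = rbf_kernel(x_test, x_test)
    mean = Ks.T @ np.linalg.solve(K, y_train)
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, cov

# Fit noisy-free sine observations and predict at a held-out point.
x = np.linspace(0.0, 2.0 * np.pi, 20)
y = np.sin(x)
x_new = np.array([np.pi / 2])
mean, cov = gp_posterior(x, y, x_new)
```

The posterior mean at `pi/2` lands close to `sin(pi/2) = 1`, with a small predictive variance because training points sit nearby.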
A set of Docker images for training and serving models in TensorFlow
OpenVINO™ Toolkit repository
Powering Amazon's custom machine learning chips
Sparsity-aware deep learning inference runtime for CPUs
ONNX Runtime: cross-platform, high-performance ML inferencing
Neural Network Compression Framework for enhanced OpenVINO™ inference
Libraries for applying sparsification recipes to neural networks
Adversarial Robustness Toolbox (ART): a Python library for ML security
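The evasion attacks such a toolbox implements can be illustrated with the classic fast gradient sign method (FGSM): perturb the input in the direction that increases the model's loss. The linear logistic "model", weights, and step size below are toy assumptions, not the library's API.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, x, y):
    # Binary cross-entropy for one example under a linear logistic model
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(w, x, y, eps=0.2):
    # FGSM: step the INPUT by eps in the sign of the loss gradient w.r.t. x.
    # For logistic regression that gradient is (p - y) * w.
    grad_x = (sigmoid(w @ x) - y) * w
    return x + eps * np.sign(grad_x)

w = np.array([1.5, -2.0])
x = np.array([1.0, -1.0])   # confidently classified positive: w @ x = 3.5
y = 1.0
x_adv = fgsm(w, x, y)       # perturbed copy with strictly higher loss
```

Defenses in such toolboxes (adversarial training, input preprocessing, detection) are evaluated against exactly this kind of loss-increasing perturbation.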
Unified Model Serving Framework
Training and deploying machine learning models on Amazon SageMaker
Standardized Serverless ML Inference Platform on Kubernetes
The Triton Inference Server provides an optimized cloud and edge inferencing solution
A unified framework for scalable computing
Trainable models and NN optimization tools
High-level Deep Learning Framework written in Kotlin
Toolkit enabling inference and serving with MXNet in SageMaker
Deep learning inference framework optimized for mobile platforms
Fast and user-friendly runtime for transformer inference