A set of Docker images for training and serving models in TensorFlow
Training and deploying machine learning models on Amazon SageMaker
ONNX Runtime: cross-platform, high-performance ML inferencing
OpenVINO™ Toolkit repository
C++ library for high-performance inference on NVIDIA GPUs
The Triton Inference Server provides an optimized cloud and edge inferencing solution
Library for OCR-related tasks powered by Deep Learning
MNN is a blazing-fast, lightweight deep learning framework
OpenMMLab Model Deployment Framework
A unified framework for scalable computing
Easy-to-use deep learning framework with 3 key features
Ready-to-use OCR with 80+ supported languages
High-performance neural network inference framework for mobile
Open standard for machine learning interoperability
Probabilistic reasoning and statistical analysis in TensorFlow
PArallel Distributed Deep LEarning: Machine Learning Framework
Trainable models and NN optimization tools
Deep Learning API and Server in C++14 with support for Caffe and PyTorch
Sparsity-aware deep learning inference runtime for CPUs
Open-Source AI Camera. Empower any camera/CCTV with state-of-the-art AI
A GPU-accelerated library containing highly optimized building blocks
Powering Amazon's custom machine learning chips
Fast inference engine for Transformer models
Libraries for applying sparsification recipes to neural networks
Bring the notion of Model-as-a-Service to life