C++ library for high-performance inference on NVIDIA GPUs
High-performance neural network inference framework for mobile
A general-purpose probabilistic programming system
Neural Network Compression Framework for enhanced OpenVINO inference
A scalable inference server for models optimized with OpenVINO
A toolkit to optimize ML models for deployment with Keras & TensorFlow
A uniform deep learning inference framework for mobile, desktop, and server