ONNX Runtime: cross-platform, high-performance ML inferencing
Open source machine learning framework
OpenVINO™ Toolkit repository
C++ library for high-performance inference on NVIDIA GPUs
Toolkit for making machine learning and data analysis applications
PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT
High-level, high-performance dynamic language for technical computing
High-performance neural network inference framework for mobile
Open standard for machine learning interoperability
A game-theoretic approach to explain the output of ML models
oneAPI Deep Neural Network Library (oneDNN)
The Triton Inference Server provides an optimized cloud and edge inferencing solution
Easy-to-use deep learning framework with 3 key features
Deep Learning API and Server in C++14 with support for Caffe and PyTorch
Pre-trained Deep Learning models and demos
A GPU-accelerated library containing highly optimized building blocks
A high-level machine learning and deep learning library for PHP
Our first fully AI-generated deep learning system
Jittor is a high-performance deep learning framework
Unity Machine Learning Agents Toolkit
PArallel Distributed Deep LEarning: Machine Learning Framework
Enabling PyTorch on Google TPU
Geometric deep learning extension library for PyTorch
Ongoing research training transformer models at scale
Open deep learning compiler stack for CPU, GPU, etc.
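The SHAP entry above refers to the game-theoretic notion of Shapley values. As a minimal pure-Python sketch (not the shap library's API), exact Shapley values for a toy model can be computed directly from the classical formula, with absent features replaced by a baseline value; the function and model names here are illustrative assumptions:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values of model f at point x, relative to a baseline.

    Features outside a coalition are set to their baseline value; each
    feature's attribution is the weighted average of its marginal
    contribution over all coalitions of the remaining features.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in S else baseline[j]
                             for j in range(n)]
                phi[i] += w * (f(with_i) - f(without_i))
    return phi

# Toy linear model: for linear models, the Shapley value of each feature
# recovers coefficient * (x - baseline).
f = lambda z: 2.0 * z[0] + 3.0 * z[1] + 1.0
vals = shapley_values(f, x=[1.0, 1.0], baseline=[0.0, 0.0])
# vals ≈ [2.0, 3.0]
```

This brute-force enumeration is exponential in the number of features; libraries such as shap approximate the same quantity efficiently for real models.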