ONNX Runtime: cross-platform, high-performance ML inferencing
A retargetable MLIR-based machine learning compiler and runtime toolkit
MLX: An array framework for Apple silicon
PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT
OpenVINO™ Toolkit repository
A self-hostable CDN for databases
Elyra extends JupyterLab with an AI-centric approach
On-device AI across mobile, embedded and edge for PyTorch
NVIDIA Federated Learning Application Runtime Environment
Powering Amazon custom machine learning chips
C++ library for high-performance inference on NVIDIA GPUs
On-device wake word detection powered by deep learning
OneFlow is a deep learning framework designed to be user-friendly
TFX is an end-to-end platform for deploying production ML pipelines
MLOps simplified. From ML Pipeline ⇨ Data Product without the hassle
Convert TensorFlow, Keras, TensorFlow.js and TFLite models to ONNX
An MLOps framework to package, deploy, monitor and manage models
Train machine learning models within Docker containers
oneAPI Deep Neural Network Library (oneDNN)
OpenMMLab Model Deployment Framework
Embed images and sentences into fixed-length vectors
Serve machine learning models within a Docker container
High-level Deep Learning Framework written in Kotlin
CPU/GPU inference server for Hugging Face transformer models
Deep learning inference framework optimized for mobile platforms
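Several entries above revolve around producing fixed-length vectors from text (for example, the embedding service listed here). As a toy sketch of that idea only, and not the API of any project in this list, the snippet below hashes tokens into a fixed-size, L2-normalized vector; real embedding services use learned neural encoders rather than feature hashing.

```python
import hashlib

def embed(text: str, dim: int = 8) -> list[float]:
    """Map a sentence to a fixed-length vector via feature hashing.

    Illustrative only: hashing stands in for a learned encoder so the
    example stays self-contained.
    """
    vec = [0.0] * dim
    for token in text.lower().split():
        # Hash each token to a stable bucket index in [0, dim)
        h = int(hashlib.md5(token.encode("utf-8")).hexdigest(), 16)
        vec[h % dim] += 1.0
    # L2-normalize so sentences of different lengths are comparable
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]
```

Because the hash is deterministic, the same sentence always maps to the same vector, which is the property downstream similarity search relies on.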