ONNX Runtime: cross-platform, high performance ML inferencing
A retargetable MLIR-based machine learning compiler runtime toolkit
MLX: An array framework for Apple silicon
A self-hostable CDN for databases
PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT
OpenVINO™ Toolkit repository
NVIDIA Federated Learning Application Runtime Environment
Python-free Rust inference server
LM Studio Apple MLX engine
Elyra extends JupyterLab with an AI-centric approach
On-device AI across mobile, embedded and edge for PyTorch
Powering Amazon custom machine learning chips
Rust-native, ready-to-use NLP pipelines and transformer-based models
On-device wake word detection powered by deep learning
Convert TensorFlow, Keras, TensorFlow.js and TFLite models to ONNX
Train machine learning models within Docker containers
oneAPI Deep Neural Network Library (oneDNN)
TFX is an end-to-end platform for deploying production ML pipelines
MLOps simplified. From ML Pipeline ⇨ Data Product without the hassle
An MLOps framework to package, deploy, monitor and manage models
OneFlow is a deep learning framework designed to be user-friendly
OpenMMLab Model Deployment Framework
Embed images and sentences into fixed-length vectors
Serve machine learning models within a Docker container
High-level Deep Learning Framework written in Kotlin