ort
Fast ML inference & training for ONNX models in Rust
ort is a high-performance Rust library that provides bindings to ONNX Runtime, enabling developers to run machine learning inference and training workflows directly within Rust applications using the standardized ONNX model format. It bridges the gap between modern machine learning frameworks and systems programming by offering a safe, ergonomic API for executing models originally built in ecosystems such as PyTorch, TensorFlow, or scikit-learn.

The library emphasizes speed and efficiency, leveraging hardware acceleration across CPUs, GPUs, and specialized accelerators to deliver low-latency inference both on-device and in server environments. A key strength is its flexibility: it supports multiple backends and lets developers configure execution providers to match the hardware available at runtime. ort also includes advanced capabilities such as model compilation and graph optimization, reducing startup time and improving runtime performance in production systems.
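As a rough sketch of what inference looks like in practice (assuming the ort 2.x API, which has changed between release candidates, so exact method and module names may differ; the model path `model.onnx`, the input name `"input"`, and the tensor shape are placeholders that depend on your model):

```rust
use ort::session::{builder::GraphOptimizationLevel, Session};

fn main() -> ort::Result<()> {
    // Build a session with graph optimizations enabled.
    // "model.onnx" is a placeholder path to any ONNX model file.
    let mut session = Session::builder()?
        .with_optimization_level(GraphOptimizationLevel::Level3)?
        .commit_from_file("model.onnx")?;

    // A dummy 1x3x224x224 f32 input (e.g. a preprocessed image).
    // The name "input" must match the model's actual input name.
    let input = ort::value::Tensor::<f32>::from_array((
        [1usize, 3, 224, 224],
        vec![0.0f32; 3 * 224 * 224],
    ))?;

    let outputs = session.run(ort::inputs!["input" => input])?;
    println!("model produced {} output(s)", outputs.len());
    Ok(())
}
```

Execution providers (CUDA, TensorRT, CoreML, and others) are registered on the same session builder, so the same code path can target different hardware by swapping the provider configuration.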