Everything you need to build state-of-the-art foundation models
The Triton Inference Server provides an optimized cloud and edge inferencing solution
Python Package for ML-Based Heterogeneous Treatment Effects Estimation
Ready-to-use OCR with 80+ supported languages
Multilingual Automatic Speech Recognition with word-level timestamps
Library for OCR-related tasks powered by Deep Learning
A library for accelerating Transformer models on NVIDIA GPUs
Bring the notion of Model-as-a-Service to life
Easy-to-use Speech Toolkit including Self-Supervised Learning models
High quality, fast, modular reference implementation of SSD in PyTorch
Sequence-to-sequence framework, focused on Neural Machine Translation
Lightweight anchor-free object detection model