Port of OpenAI's Whisper model in C/C++
Uncover insights, surface problems, monitor, and fine-tune your LLM
Connect home devices into a powerful cluster to accelerate LLM inference
GPU environment management and cluster orchestration
Easy-to-use Speech Toolkit including Self-Supervised Learning models
Data manipulation and transformation for audio signal processing
Deep learning optimization library: makes distributed training easy
Lightweight inference library for ONNX files, written in C++
Serve, optimize and scale PyTorch models in production
A scalable inference server for models optimized with OpenVINO
Serving system for machine learning models
Sparsity-aware deep learning inference runtime for CPUs
Build production-ready agentic workflows with natural language
A general-purpose probabilistic programming system
Database system for building simpler and faster AI-powered applications
A real-time inference engine for temporal logic specifications