The Triton Inference Server provides an optimized cloud and edge inferencing solution
MII makes low-latency and high-throughput inference possible, powered by DeepSpeed
Integrate, train and manage any AI models and APIs with your database
PyTorch domain library for recommendation systems
Libraries for applying sparsification recipes to neural networks
PyTorch extensions for fast R&D prototyping and Kaggle farming
Optimizing inference proxy for LLMs
OpenAI-style API for open large language models
Superduper: Integrate AI models and machine learning workflows with your database
Library for OCR-related tasks powered by Deep Learning
Multilingual Automatic Speech Recognition with word-level timestamps
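For illustration, a minimal sketch of word-level timestamps, assuming the openai-whisper package (the listed project may expose a different interface); the audio file name is a placeholder:

```python
# Transcribe with per-word timing; assumes the openai-whisper package.
import whisper

model = whisper.load_model("base")  # multilingual checkpoint
result = model.transcribe("audio.mp3", word_timestamps=True)

for segment in result["segments"]:
    for word in segment["words"]:
        print(f"{word['start']:6.2f}-{word['end']:6.2f}  {word['word']}")
```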
Probabilistic reasoning and statistical analysis in TensorFlow
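For illustration, a minimal sketch of statistical analysis with a TensorFlow Probability distribution; the distribution and values are illustrative only:

```python
# Evaluate densities and draw samples from a distribution object.
import tensorflow_probability as tfp

tfd = tfp.distributions
dist = tfd.Normal(loc=0.0, scale=1.0)    # standard normal, for illustration
print(dist.log_prob(0.5).numpy())        # log-density at 0.5
print(dist.sample(3).numpy())            # three random draws
```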
Simplifies the local serving of AI models from any source
Lightweight Python library for adding real-time multi-object tracking to any detector
Open-source tool designed to enhance the efficiency of workloads
Sparsity-aware deep learning inference runtime for CPUs
Large Language Model Text Generation Inference
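For illustration, a minimal sketch of querying a running Text Generation Inference server via huggingface_hub; the local endpoint URL is an assumption:

```python
# Send a generation request to a TGI server already running locally.
from huggingface_hub import InferenceClient

client = InferenceClient("http://127.0.0.1:8080")  # hypothetical local endpoint
print(client.text_generation("What is deep learning?", max_new_tokens=32))
```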
Build your chatbot within minutes on your favorite device
PyTorch library of curated Transformer models and their components
State-of-the-art Parameter-Efficient Fine-Tuning
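For illustration, a minimal LoRA sketch with the PEFT library; the base model and hyperparameters are illustrative, not a recommended recipe:

```python
# Wrap a base model so only small low-rank adapter weights are trained.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # illustrative base model
config = LoraConfig(
    r=8,               # low-rank update dimension
    lora_alpha=16,     # scaling factor for the adapter
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the adapter parameters train
```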
Easiest and laziest way to build multi-agent LLM applications
Efficient few-shot learning with Sentence Transformers
Trainable models and NN optimization tools
GPU environment management and cluster orchestration
A set of Docker images for training and serving models in TensorFlow