The Triton Inference Server provides an optimized cloud and edge inferencing solution
Lightweight Python library for adding real-time multi-object tracking
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
Multilingual Automatic Speech Recognition with word-level timestamps
Uncover insights, surface problems, monitor, and fine-tune your LLM
Superduper: Integrate AI models and machine learning workflows
Libraries for applying sparsification recipes to neural networks
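One common sparsification recipe such libraries automate is magnitude pruning: zero out the smallest-magnitude fraction of a layer's weights. A minimal plain-Python sketch of the idea (illustrative only, not any library's actual API):

```python
# Sketch of magnitude pruning: zero the smallest-magnitude weights.
# Illustrative only -- real libraries apply this per layer, gradually,
# during or after training.

def magnitude_prune(weights, sparsity):
    """Return a copy of `weights` with the smallest-magnitude
    fraction (given by `sparsity`) set to zero."""
    n_prune = int(len(weights) * sparsity)
    # Indices sorted by absolute value, smallest first
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    keep = set(order[n_prune:])
    return [w if i in keep else 0.0 for i, w in enumerate(weights)]

weights = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2]
pruned = magnitude_prune(weights, sparsity=0.5)
```

The resulting zeros are what sparsity-aware runtimes exploit to skip work at inference time.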
An easy-to-use LLM quantization package with user-friendly APIs
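The core idea behind LLM quantization packages is mapping float weights to low-bit integers plus a scale factor. A minimal sketch of symmetric 8-bit per-tensor quantization (illustrative only, not the package's actual API):

```python
# Sketch of symmetric 8-bit quantization: floats -> int8 + scale.
# Illustrative only; real packages quantize per channel/group and
# use calibration data to pick scales.

def quantize(weights, bits=8):
    """Map floats to signed integers using a per-tensor scale."""
    qmax = 2 ** (bits - 1) - 1          # 127 for 8-bit
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.003, 1.0]
q, scale = quantize(weights)
approx = dequantize(q, scale)
```

Storing `q` instead of `weights` cuts memory roughly 4x versus float32, at the cost of a small rounding error bounded by half a quantization step.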
PyTorch library of curated Transformer models and their components
Optimizing inference proxy for LLMs
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
OpenAI-style API for open large language models
Sparsity-aware deep learning inference runtime for CPUs
Large Language Model Text Generation Inference
Probabilistic reasoning and statistical analysis in TensorFlow
Build your chatbot within minutes on your favorite device
Easiest and laziest way to build multi-agent LLM applications
Efficient few-shot learning with Sentence Transformers
State-of-the-art diffusion models for image and audio generation
Bring the notion of Model-as-a-Service to life
Create HTML profiling reports from pandas DataFrame objects
Deep learning optimization library: makes distributed training easy
Powering Amazon's custom machine learning chips
Open-source tool designed to enhance the efficiency of workloads
A library for accelerating Transformer models on NVIDIA GPUs