MII makes low-latency and high-throughput inference possible, powered by DeepSpeed
A library to communicate with ChatGPT, Claude, Copilot, Gemini
Uncover insights, surface problems, monitor, and fine-tune your LLM
Library for OCR-related tasks powered by Deep Learning
Superduper: Integrate AI models and machine learning workflows
Bring the notion of Model-as-a-Service to life
A set of Docker images for training and serving models in TensorFlow
State-of-the-art diffusion models for image and audio generation
Neural Network Compression Framework for enhanced OpenVINO inference
OpenAI-style API for open large language models
Sparsity-aware deep learning inference runtime for CPUs
Images to inference with no labeling (use foundation models to train supervised models)
Data manipulation and transformation for audio signal processing, powered by PyTorch
Integrate, train and manage any AI models and APIs with your database
PyTorch domain library for recommendation systems
The Triton Inference Server provides an optimized cloud and edge inferencing solution
Easy-to-use deep learning framework with 3 key features
Lightweight Python library for adding real-time multi-object tracking to any detector
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
Multilingual Automatic Speech Recognition with word-level timestamps and confidence
Unified Model Serving Framework
A Unified Library for Parameter-Efficient Learning
INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model
Framework dedicated to neural data processing
Libraries for applying sparsification recipes to neural networks