The Triton Inference Server provides an optimized cloud and edge inferencing solution
Superduper: Integrate AI models and machine learning workflows with your database
Visual Instruction Tuning: Large Language-and-Vision Assistant
MII makes low-latency and high-throughput inference possible, powered by DeepSpeed
FlashInfer: Kernel Library for LLM Serving
Neural Network Compression Framework for enhanced OpenVINO inference
OpenAI-style API for open large language models (see the first sketch after this list)
Multilingual Automatic Speech Recognition with word-level timestamps
Uncover insights, surface problems, monitor, and fine-tune your LLM
Integrate, train, and manage any AI models and APIs with your database
PyTorch domain library for recommendation systems
Library for OCR-related tasks powered by Deep Learning
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
Unified Model Serving Framework
Easy-to-use deep learning framework with 3 key features
Phi-3.5 for Mac: Locally-run Vision and Language Models
INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model
Libraries for applying sparsification recipes to neural networks
An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm
Replace OpenAI GPT with another LLM in your app
State-of-the-art diffusion models for image and audio generation (see the text-to-image sketch after this list)
Optimizing inference proxy for LLMs
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
Large Language Model Text Generation Inference (see the request sketch after this list)
Images to inference with no labeling (use foundation models to train supervised models)
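
For the OpenAI-style API entry above, a minimal sketch of how such an endpoint is typically called with the standard `openai` client; the base URL, API key, and model name are placeholder assumptions, not values the project prescribes:

```python
# Minimal sketch of calling an OpenAI-compatible endpoint.
# The base_url, api_key, and model name are illustrative
# assumptions; substitute whatever your server actually exposes.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed self-hosted endpoint
    api_key="EMPTY",                      # many local servers ignore the key
)

response = client.chat.completions.create(
    model="my-local-model",               # assumed model identifier
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

Because the request shape matches OpenAI's, swapping a hosted model for a self-hosted one is usually just a change of `base_url` and `model`.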
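For the diffusion-models entry, a minimal text-to-image sketch with the diffusers pipeline API; the checkpoint name and the CUDA device are illustrative assumptions, and any compatible checkpoint works the same way:

```python
# Minimal text-to-image sketch using a diffusers pipeline.
# The checkpoint is an illustrative choice; assumes a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("a watercolor painting of a lighthouse").images[0]
image.save("lighthouse.png")
```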
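For the Text Generation Inference entry, a sketch of a request against the server's REST `generate` route; the host and port assume a default local deployment:

```python
# Sketch of querying a running Text Generation Inference server
# over its REST API. Host and port assume a local launch with
# the container port mapped to 8080 (an assumption).
import requests

resp = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "What is deep learning?",
        "parameters": {"max_new_tokens": 64},
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["generated_text"])
```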