Run local LLMs on any device, open-source
A high-throughput and memory-efficient inference and serving engine
An optimizing inference proxy for LLMs
Library for OCR-related tasks powered by Deep Learning
A library to communicate with ChatGPT, Claude, Copilot, Gemini
FlashInfer: Kernel Library for LLM Serving
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
20+ high-performance LLMs with recipes to pretrain and finetune at scale
Large Language Model Text Generation Inference
Simplifies the local serving of AI models from any source
Easy-to-use speech toolkit including self-supervised learning models
Build your chatbot within minutes on your favorite device
The easiest and laziest way to build multi-agent LLM applications
An MLOps framework to package, deploy, monitor and manage models
Framework dedicated to neural data processing
LLMFlows - Simple, Explicit and Transparent LLM Apps
Framework for Accelerating LLM Generation with Multiple Decoding Heads