Browse free open source Python LLM Inference Tools and projects below.
Run local LLMs on any device; open source
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
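
LMDeploy's Python entry point is a high-level pipeline; a minimal sketch, where the model ID is an illustrative assumption and any supported checkpoint could stand in:

    # Minimal LMDeploy sketch; the model ID is an illustrative assumption.
    from lmdeploy import pipeline

    pipe = pipeline("internlm/internlm2-chat-7b")   # builds the inference engine
    responses = pipe(["What does an inference engine do?"])
    print(responses[0].text)
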
A high-throughput and memory-efficient inference and serving engine for LLMs
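
This entry is vLLM's; a minimal offline-generation sketch with its LLM and SamplingParams API, where the model name is an illustrative choice:

    # Minimal vLLM sketch; the model name is an illustrative assumption.
    from vllm import LLM, SamplingParams

    llm = LLM(model="facebook/opt-125m")        # loads weights, allocates the KV cache
    params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)
    outputs = llm.generate(["The capital of France is"], params)
    for out in outputs:
        print(out.outputs[0].text)
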
Ready-to-use OCR with 80+ supported languages
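
This entry is EasyOCR's; a minimal sketch of its Reader API, where 'sample.png' is a placeholder image path:

    # Minimal EasyOCR sketch; 'sample.png' is a placeholder path.
    import easyocr

    reader = easyocr.Reader(["en"])             # downloads detector/recognizer models on first run
    for bbox, text, confidence in reader.readtext("sample.png"):
        print(text, confidence)
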
A set of Docker images for training and serving models in TensorFlow
A high-performance ML model serving framework that offers dynamic batching
Library for OCR-related tasks powered by Deep Learning
OpenMMLab Model Deployment Framework
Bring the notion of Model-as-a-Service to life
Sparsity-aware deep learning inference runtime for CPUs
Implementation of model-parallel autoregressive transformers on GPUs
Multilingual Automatic Speech Recognition with word-level timestamps
Visual Instruction Tuning: Large Language-and-Vision Assistant
Large Language Model Text Generation Inference
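
This entry is Hugging Face's Text Generation Inference (TGI) server; a minimal client sketch, assuming a TGI instance is already serving on localhost:8080 (the URL is an assumption for illustration):

    # Minimal client sketch; assumes a TGI server is already running at this URL.
    from huggingface_hub import InferenceClient

    client = InferenceClient("http://localhost:8080")   # endpoint URL is an assumption
    print(client.text_generation("What is speculative decoding?", max_new_tokens=64))
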
Uplift modeling and causal inference with machine learning algorithms
Deep learning optimization library that makes distributed training easy
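
This entry is DeepSpeed's; a minimal sketch of its initialize() entry point, where the toy model and config values are illustrative placeholders for a real network and tuning:

    # Minimal DeepSpeed sketch; the toy model and config values are illustrative.
    import torch
    import deepspeed

    model = torch.nn.Linear(10, 2)              # stand-in for a real network
    ds_config = {
        "train_batch_size": 8,
        "optimizer": {"type": "Adam", "params": {"lr": 1e-3}},
    }
    engine, optimizer, _, _ = deepspeed.initialize(
        model=model,
        model_parameters=model.parameters(),
        config=ds_config,
    )
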
Database system for building simpler and faster AI-powered applications
Run any Llama 2 model locally with a Gradio UI on GPU or CPU from anywhere
FlashInfer: Kernel Library for LLM Serving
The easiest and laziest way to build multi-agent LLM applications
Lightweight anchor-free object detection model
Operating LLMs in production
Everything you need to build state-of-the-art foundation models
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models
Powering Amazon's custom machine learning chips