Run Local LLMs on Any Device. Open-source and available for commercial use
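This is GPT4All's tagline. A minimal sketch of its Python bindings, assuming the `gpt4all` package is installed; the model name is an example and is downloaded on first use:

```python
from gpt4all import GPT4All

# Example model file; GPT4All downloads it automatically on first use.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")
with model.chat_session():
    print(model.generate("Name three uses of a local LLM.", max_tokens=128))
```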
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
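A minimal sketch of LMDeploy's high-level `pipeline` API; the model id is an example, and any model LMDeploy supports should work:

```python
from lmdeploy import pipeline

# Example model id; LMDeploy fetches it and serves it with its default engine.
pipe = pipeline("internlm/internlm2-chat-7b")
responses = pipe(["Hi, please introduce yourself."])
print(responses[0].text)
```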
A high-throughput and memory-efficient inference and serving engine for LLMs
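This description matches vLLM. A minimal sketch of its offline batch-generation API; the model id is an example:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # example model id
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)
outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```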
Ready-to-use OCR with 80+ supported languages
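This is EasyOCR's tagline. A minimal sketch; the image path is a placeholder:

```python
import easyocr

reader = easyocr.Reader(["en"])           # loads English detection + recognition models
results = reader.readtext("receipt.png")  # placeholder image path
for bbox, text, confidence in results:
    print(f"{text!r} (confidence {confidence:.2f})")
```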
A library for accelerating Transformer models on NVIDIA GPUs
GPU environment management and cluster orchestration
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models
The official Python client for the Hugging Face Hub
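A minimal sketch of two common `huggingface_hub` calls; the repo id is an example:

```python
from huggingface_hub import hf_hub_download, list_models

# Download a single file from a repo on the Hub (repo id is an example).
path = hf_hub_download(repo_id="bert-base-uncased", filename="config.json")
print(path)

# Search the Hub for models matching a query.
for m in list_models(search="sentence-transformers", limit=5):
    print(m.id)
```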
Everything you need to build state-of-the-art foundation models
Large Language Model Text Generation Inference
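This is Hugging Face's Text Generation Inference (TGI) server. A minimal client sketch against its `/generate` endpoint, assuming a server is already running (e.g. via its official Docker image) and listening on localhost:8080:

```python
import requests

resp = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "What is deep learning?",
        "parameters": {"max_new_tokens": 50},
    },
)
print(resp.json()["generated_text"])
```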
The easiest and laziest way to build multi-agent LLM applications
Simplifies the local serving of AI models from any source
The Triton Inference Server provides an optimized cloud and edge inferencing solution
Operating LLMs in production
Bring the notion of Model-as-a-Service to life
Official inference library for Mistral models
Multilingual Automatic Speech Recognition with word-level timestamps
Adversarial Robustness Toolbox (ART) - Python Library for ML security
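A minimal sketch of an evasion attack with ART: wrap a fitted scikit-learn model, craft adversarial inputs with FGSM, and compare accuracy. The dataset choice is purely illustrative:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

classifier = SklearnClassifier(model=model)         # ART wrapper around the fitted model
attack = FastGradientMethod(estimator=classifier, eps=0.2)
X_adv = attack.generate(x=X.astype(np.float32))     # adversarially perturbed inputs

print(f"clean accuracy {model.score(X, y):.2f}, "
      f"adversarial accuracy {model.score(X_adv, y):.2f}")
```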
Standardized Serverless ML Inference Platform on Kubernetes
Optimizing inference proxy for LLMs
Neural Network Compression Framework for enhanced OpenVINO inference
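A minimal sketch of NNCF's post-training quantization entry point on a toy PyTorch model; the random calibration data is purely illustrative and stands in for real samples:

```python
import torch
import nncf

model = torch.nn.Sequential(
    torch.nn.Linear(16, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 2),
)

# Random tensors stand in for real calibration samples.
calibration = nncf.Dataset([torch.randn(1, 16) for _ in range(100)])
quantized_model = nncf.quantize(model, calibration)
```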
DoWhy is a Python library for causal inference
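A minimal sketch of DoWhy's model-identify-estimate flow (refutation omitted) on synthetic data with a single confounder:

```python
import numpy as np
import pandas as pd
from dowhy import CausalModel

# Synthetic data: confounder W drives both treatment T and outcome Y;
# the true effect of T on Y is 2.0.
rng = np.random.default_rng(0)
w = rng.normal(size=1000)
t = (w + rng.normal(size=1000) > 0).astype(int)
y = 2.0 * t + w + rng.normal(size=1000)
df = pd.DataFrame({"T": t, "Y": y, "W": w})

model = CausalModel(data=df, treatment="T", outcome="Y", common_causes=["W"])
estimand = model.identify_effect()
estimate = model.estimate_effect(estimand, method_name="backdoor.linear_regression")
print(estimate.value)  # should land near the true effect of 2.0
```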
Efficient few-shot learning with Sentence Transformers
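This is SetFit's tagline. A minimal few-shot training sketch, assuming SetFit v1.x (which uses `Trainer`; older releases used `SetFitTrainer`); the base model and toy dataset are examples:

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer

# Tiny labelled dataset; SetFit is designed for a handful of examples per class.
train_ds = Dataset.from_dict({
    "text": ["great movie", "loved it", "terrible film", "waste of time"],
    "label": [1, 1, 0, 0],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = Trainer(model=model, train_dataset=train_ds)
trainer.train()
print(model.predict(["a wonderful experience", "utterly boring"]))
```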
Superduper: Integrate AI models and machine learning workflows with your database
20+ high-performance LLMs with recipes to pretrain, finetune, and deploy at scale