OpenAI-style API for open large language models
Run local LLMs on any device; open-source
Library for OCR-related tasks powered by Deep Learning
A library to communicate with ChatGPT, Claude, Copilot, Gemini
A high-throughput and memory-efficient inference and serving engine for LLMs
Ready-to-use OCR with 80+ supported languages
Everything you need to build state-of-the-art foundation models
GPU environment management and cluster orchestration
The official Python client for the Hugging Face Hub
A library for accelerating Transformer models on NVIDIA GPUs
Deep learning optimization library: makes distributed training easy
Training and deploying machine learning models on Amazon SageMaker
Operating LLMs in production
Neural Network Compression Framework for enhanced OpenVINO inference
State-of-the-art diffusion models for image and audio generation
Replace OpenAI GPT with another LLM in your app
The Triton Inference Server provides an optimized cloud and edge inferencing solution
Uncover insights, surface problems, monitor, and fine-tune your LLM
Bring the notion of Model-as-a-Service to life
Standardized Serverless ML Inference Platform on Kubernetes
Optimizing inference proxy for LLMs
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
An MLOps framework to package, deploy, monitor, and manage models
LLM training code for MosaicML foundation models
Multilingual Automatic Speech Recognition with word-level timestamps