Port of OpenAI's Whisper model in C/C++
Training and deploying machine learning models on Amazon SageMaker
Run local LLMs on any device; open-source and available for commercial use
Port of Facebook's LLaMA model in C/C++
Ready-to-use OCR with 80+ supported languages (usage sketch after this list)
A high-throughput and memory-efficient inference and serving engine for LLMs (usage sketch after this list)
Everything you need to build state-of-the-art foundation models
Standardized Serverless ML Inference Platform on Kubernetes
Library for OCR-related tasks powered by Deep Learning
Fast inference engine for Transformer models (usage sketch after this list)
The Triton Inference Server provides an optimized cloud and edge inferencing solution (client sketch after this list)
Optimizing inference proxy for LLMs
LMDeploy is a toolkit for compressing, deploying, and serving LLMs (usage sketch after this list)
Framework dedicated to making neural data processing pipelines simple and fast
Create HTML profiling reports from pandas DataFrame objects (usage sketch after this list)
A set of Docker images for training and serving models in TensorFlow
Lightweight inference library for ONNX files, written in C++
Operating LLMs in production
Open-source AI camera: empower any camera/CCTV with state-of-the-art AI
Single-cell analysis in Python (usage sketch after this list)
Sparsity-aware deep learning inference runtime for CPUs
MII makes low-latency and high-throughput inference possible, powered by DeepSpeed (usage sketch after this list)
Bring the notion of Model-as-a-Service to life
OpenMMLab Model Deployment Framework
Easy-to-use deep learning framework with 3 key features
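
The sketches below illustrate a few of the tools above; each is a minimal example under stated assumptions, not a definitive recipe.

The OCR entry matches EasyOCR. A minimal sketch, assuming an English-language input image `receipt.png` (the filename is illustrative):

```python
import easyocr

# Build a reader for English; detection/recognition weights download on first use.
reader = easyocr.Reader(['en'])

# readtext() returns a list of (bounding_box, text, confidence) tuples.
for bbox, text, confidence in reader.readtext('receipt.png'):
    print(f'{text} ({confidence:.2f})')
```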
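The high-throughput serving entry describes vLLM. A minimal offline-generation sketch; the model ID follows vLLM's quickstart, and any small Hugging Face causal LM would do:

```python
from vllm import LLM, SamplingParams

# Load a small model and set sampling parameters.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# generate() batches prompts and returns one RequestOutput per prompt.
outputs = llm.generate(["The capital of France is"], params)
print(outputs[0].outputs[0].text)
```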
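The Transformer inference-engine entry matches CTranslate2. A translation sketch, assuming a converted model directory `ende_ctranslate2/` and a matching SentencePiece model (both paths are placeholders; converted models come from the project's `ct2-*-converter` tools):

```python
import ctranslate2
import sentencepiece as spm

# Load a converted translation model and its tokenizer.
translator = ctranslate2.Translator("ende_ctranslate2/", device="cpu")
sp = spm.SentencePieceProcessor("sentencepiece.model")

# CTranslate2 consumes token lists, so tokenize first.
tokens = sp.encode("Hello world!", out_type=str)
results = translator.translate_batch([tokens])
print(sp.decode(results[0].hypotheses[0]))
```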
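For Triton Inference Server, a client-side sketch using the `tritonclient` package, assuming a server on localhost:8000 serving a hypothetical model `my_model` with an FP32 input `INPUT0` of shape [1, 16] and an output `OUTPUT0` (all names and shapes are illustrative):

```python
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Declare the input tensor and fill it from a NumPy array.
inp = httpclient.InferInput("INPUT0", [1, 16], "FP32")
inp.set_data_from_numpy(np.random.rand(1, 16).astype(np.float32))

result = client.infer(model_name="my_model", inputs=[inp])
print(result.as_numpy("OUTPUT0"))
```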
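An LMDeploy sketch built on its `pipeline` API; the model ID is illustrative and a CUDA GPU is assumed:

```python
from lmdeploy import pipeline

# pipeline() fetches the model and builds an inference engine.
pipe = pipeline("internlm/internlm2-chat-7b")

# Returns one Response per prompt.
responses = pipe(["Summarize what an inference engine does."])
print(responses[0].text)
```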
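The DataFrame-profiling entry matches pandas-profiling, which now ships as `ydata-profiling`. A minimal sketch, with `data.csv` as a placeholder input:

```python
import pandas as pd
from ydata_profiling import ProfileReport

df = pd.read_csv("data.csv")

# One call computes per-column statistics, correlations, and warnings,
# then renders everything to a standalone HTML report.
profile = ProfileReport(df, title="Data Profiling Report")
profile.to_file("report.html")
```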
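The single-cell entry matches Scanpy. A sketch of the standard preprocess-cluster-embed flow on its bundled PBMC dataset (the `leiden` step needs the optional `leidenalg` dependency):

```python
import scanpy as sc

# Load a small public single-cell dataset bundled with Scanpy.
adata = sc.datasets.pbmc3k()

# Basic quality filtering and normalization.
sc.pp.filter_cells(adata, min_genes=200)
sc.pp.filter_genes(adata, min_cells=3)
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)

# Reduce dimensionality, cluster, and embed for plotting.
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
sc.tl.pca(adata)
sc.pp.neighbors(adata)
sc.tl.leiden(adata)
sc.tl.umap(adata)
sc.pl.umap(adata, color="leiden")
```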
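A DeepSpeed-MII sketch based on its pipeline API; the model ID and generation arguments are illustrative, a CUDA GPU is required, and the API surface has shifted across MII releases:

```python
import mii

# Build a local text-generation pipeline.
pipe = mii.pipeline("mistralai/Mistral-7B-v0.1")

# Returns one response object per prompt.
responses = pipe(["DeepSpeed is"], max_new_tokens=64)
print(responses[0].generated_text)
```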