INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model
Fast inference engine for Transformer models
Open platform for training, serving, and evaluating language models
A GPU-accelerated library containing highly optimized building blocks
Port of OpenAI's Whisper model in C/C++
Pure C++ implementation of several models for real-time chat
MNN is a blazing fast, lightweight deep learning framework
Easy-to-use deep learning framework with three key features
A set of Docker images for training and serving models in TensorFlow
A high-performance ML model serving framework that offers dynamic batching
C++ implementation of ChatGLM-6B, ChatGLM2-6B, ChatGLM3, and GLM-4(V)
Deep Learning API and server in C++14 with support for Caffe and PyTorch
C++ library for high-performance inference on NVIDIA GPUs
High-performance neural network inference framework for mobile platforms
Simplifies the local serving of AI models from any source
OpenAI-style API for open large language models (see the request sketch after this list)
Sparsity-aware deep learning inference runtime for CPUs
OpenVINO™ Toolkit repository
Low-latency REST API for serving text embeddings
Lightweight, standalone C++ inference engine for Google's Gemma models
Standardized Serverless ML Inference Platform on Kubernetes
Private Open AI on Kubernetes
Library for OCR-related tasks powered by Deep Learning
The Triton Inference Server provides an optimized cloud and edge inferencing solution
Multilingual Automatic Speech Recognition with word-level timestamps
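
Several entries above serve models behind an OpenAI-compatible HTTP interface (for example the OpenAI-style API server). A minimal request sketch follows, assuming a local deployment at `http://localhost:8000/v1` and a placeholder model name; both the base URL and the model name are assumptions, not taken from any particular project above.

```python
# Minimal sketch of a chat request against an OpenAI-compatible endpoint.
# BASE_URL and MODEL are assumptions; substitute whatever your server exposes.
import json
import urllib.request

BASE_URL = "http://localhost:8000/v1"  # assumed local deployment
MODEL = "my-open-model"                # placeholder model name

payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

# OpenAI-compatible servers return the reply at choices[0].message.content.
print(body["choices"][0]["message"]["content"])
```

Because the request and response shapes follow OpenAI's chat completions schema, existing OpenAI client libraries can usually be pointed at such a server simply by overriding the base URL.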