C#/.NET binding of llama.cpp, including LLaMA/GPT model inference
Port of OpenAI's Whisper model in C/C++
Port of Facebook's LLaMA model in C/C++
Run Local LLMs on Any Device. Open-source and available for commercial use
ONNX Runtime: cross-platform, high performance ML inferencing
C++ implementation of ChatGLM-6B & ChatGLM2-6B & ChatGLM3 & GLM4(V)
Pure C++ implementation of several models for real-time chatting
Lightweight, standalone C++ inference engine for Google's Gemma models
OpenVINO™ Toolkit repository
C++ library for high performance inference on NVIDIA GPUs
LLMs as Copilots for Theorem Proving in Lean
A scalable inference server for models optimized with OpenVINO
High-performance neural network inference framework for mobile
Fast inference engine for Transformer models
Open standard for machine learning interoperability
A library for accelerating Transformer models on NVIDIA GPUs
On-device AI across mobile, embedded and edge for PyTorch
MNN is a blazing fast, lightweight deep learning framework
Bolt is a high-performance deep learning library
OpenMLDB is an open-source machine learning database
Serving system for machine learning models
The Triton Inference Server provides an optimized cloud and edge inferencing solution
Easy-to-use deep learning framework with 3 key features
Deep Learning API and Server in C++14 with support for Caffe and PyTorch
A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing