Browse free open source C++ LLM Inference Tools and projects below.
Port of OpenAI's Whisper model in C/C++
Port of Facebook's LLaMA model in C/C++
Run local LLMs on any device; open source
OpenVINO™ Toolkit repository
ONNX Runtime: cross-platform, high-performance ML inferencing
MNN is a blazing-fast, lightweight deep learning framework
C++ library for high-performance inference on NVIDIA GPUs
High-performance neural network inference framework for mobile
Easy-to-use deep learning framework with 3 key features
Fast inference engine for Transformer models
Lightweight inference library for ONNX files, written in C++
The deep learning toolkit for speech-to-text
C++ implementation of ChatGLM-6B, ChatGLM2-6B, ChatGLM3, and GLM4(V)
Connect home devices into a powerful cluster to accelerate LLM inference
LLMs as Copilots for Theorem Proving in Lean
PArallel Distributed Deep LEarning: Machine Learning Framework
Deep learning API and server in C++14 with support for Caffe, PyTorch
Deep learning inference framework optimized for mobile platforms
Comprehensive set of computer vision & machine intelligence libraries
Open standard for machine learning interoperability
Easy-to-use speech toolkit including self-supervised learning models
Guide to deploying deep-learning inference networks
Bolt is a high-performance deep learning library
Pure C++ implementation of several models for real-time chatting
A GPU-accelerated library containing highly optimized building blocks