Port of OpenAI's Whisper model in C/C++
Port of Facebook's LLaMA model in C/C++
Run local LLMs on any device; open-source
ONNX Runtime: cross-platform, high-performance ML inference
C++ library for high-performance inference on NVIDIA GPUs
Pure C++ implementation of several models for real-time chat
Self-hosted, community-driven, local OpenAI-compatible API (see the request sketch after this list)
C++ implementation of ChatGLM-6B & ChatGLM2-6B & ChatGLM3 & GLM4(V)
High-performance neural network inference framework for mobile
Connect home devices into a powerful cluster to accelerate LLM inference
OpenVINO™ Toolkit repository
MNN is a blazing-fast, lightweight deep learning framework
Open standard for machine learning interoperability
LLMs as Copilots for Theorem Proving in Lean
C#/.NET binding of llama.cpp, including LLaMA/GPT model inference
PArallel Distributed Deep LEarning: Machine Learning Framework
Lightweight, standalone C++ inference engine for Google's Gemma models
A GPU-accelerated library containing highly optimized building blocks
Fast inference engine for Transformer models
Easy-to-use deep learning framework with 3 key features
Deep Learning API and Server in C++14 with support for Caffe and PyTorch
An innovative library for efficient LLM inference
A scalable inference server for models optimized with OpenVINO
On-device AI for PyTorch across mobile, embedded, and edge devices
INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model (see the quantization sketch below)
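
Several of the projects above advertise an OpenAI-compatible API. The sketch below shows what that compatibility means in practice: a standard `/v1/chat/completions` request sent to a local server. The host, port (LocalAI's default of 8080), and model name are assumptions, not fixed by the list above; adjust them to your setup.

```python
# Minimal sketch: calling a local OpenAI-compatible server using only the
# Python standard library. Endpoint, port, and model name are assumptions.
import json
import urllib.request

payload = {
    "model": "gpt-4",  # placeholder model name (assumption); servers map this to a local model
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
}

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",  # LocalAI's default port (assumption)
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
    print(body["choices"][0]["message"]["content"])
```

Because the request and response shapes match OpenAI's API, existing OpenAI client libraries can usually be pointed at such a server by overriding only the base URL.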
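Several entries above also mention reduced-precision inference (INT4/INT5/INT8, FP16). The following is a minimal sketch of the idea behind INT8 inference using plain symmetric quantization; real engines such as rwkv.cpp use their own block-wise formats and calibration, so treat this as illustration only.

```python
# Hypothetical sketch of symmetric INT8 weight quantization: map float weights
# to int8 with one scale, then recover approximate floats at compute time.
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Quantize float weights to int8 with a single symmetric scale."""
    # Largest magnitude maps to +/-127; epsilon guards against all-zero input.
    scale = max(np.abs(weights).max(), 1e-12) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from int8 values and the scale."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
print("max abs error:", np.abs(w - dequantize(q, s)).max())
```

The payoff is 4x less memory than FP32 per weight and fast integer arithmetic on CPU, at the cost of the small rounding error the script prints.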